Total 286 Blogs

Created by - Orhan Ergun

Multicast BIER - Bit Indexed Explicit Replication

Multicast BIER - RFC 8279 Bit Index Explicit Replication - BIER is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any per-flow multicast state. BIER also does not require any explicit tree-building protocol for its operation, so it removes the need for PIM, mLDP, P2MP RSVP-TE LSPs, etc.

A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR) and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR adds a BIER header to the packet. This header contains a bit-string in which each bit represents exactly one BFER. The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

Multicast BIER Advantages

The obvious advantage of BIER is that there is no per-flow multicast state in the core of the network and no tree-building protocol that sets up trees on demand as users join a multicast flow. In that sense, BIER is potentially applicable to many services where multicast is used. Many Service Providers are currently investigating how BIER would apply to their networks, what their migration process would look like, and which advantages they could gain from a BIER deployment.

Because of the BIER header, multicast is not sent to nodes that do not need to receive the traffic, so multicast follows an optimal path within the BIER domain. Transit nodes don't maintain per-flow state and, as mentioned above, no other multicast protocol is needed. BIER simplifies multicast operation: no dedicated multicast control protocol is required, since existing protocols such as the IGP (IS-IS, OSPF) or BGP can be leveraged. BIER does use a new type of forwarding lookup, the Bit Index Forwarding Table, which can be implemented through software or hardware changes. The hardware upgrade requirement can be a challenge for BIER, but once it is solved, BIER can become the single de facto protocol for multicast.
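The bit-string forwarding described above can be sketched in a few lines. This is an illustrative toy model of a Bit Index Forwarding Table (BIFT) lookup; the names `forward` and `bift` and the data layout are my own, not taken from RFC 8279 or any real implementation:

```python
def forward(bitstring, bift):
    """Replicate a packet toward each neighbor that can reach a set bit.

    bift maps bit position -> (neighbor, forwarding_bitmask), where the
    forwarding bitmask has a bit set for every BFER reachable via that
    neighbor.  Returns {neighbor: bitstring carried by its copy}.
    """
    copies = {}
    remaining = bitstring
    while remaining:
        bit = remaining & -remaining          # lowest set bit
        pos = bit.bit_length()                # 1-based bit position
        neighbor, f_bm = bift[pos]
        # The copy carries only the bits this neighbor can reach...
        copies[neighbor] = copies.get(neighbor, 0) | (remaining & f_bm)
        # ...and those bits are cleared so no BFER receives duplicates.
        remaining &= ~f_bm
    return copies

# Example: BFERs at bit positions 1, 2, 3; bits 1-2 reachable via
# neighbor "B", bit 3 via neighbor "C".
bift = {1: ("B", 0b011), 2: ("B", 0b011), 3: ("C", 0b100)}
print(forward(0b111, bift))   # {'B': 3, 'C': 4}
```

The key property this illustrates is that the transit router keeps no per-flow state at all: the packet itself (the bit-string) tells the router which egress routers still need a copy.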

Published - Wed, 25 May 2022

Created by - Orhan Ergun

What is NFV - Network Function Virtualization

Network Functions Virtualization (NFV) was initiated by the European Telecommunications Standards Institute (ETSI) through an Industry Specification Group (ISG) that included seven of the world's leading telecom network operators. A challenge in large-scale telecom networks is the ever-growing variety of proprietary hardware: launching a new service may demand the installation of new hardware, which requires additional floor space, power, cooling, and more maintenance. With the virtualization technologies that have matured in this decade, NFV addresses these telecom problems by implementing network functions as software that can run on server hardware or hypervisors. Furthermore, with NFV the need to install new equipment is largely eliminated; service availability instead depends on the health of the underlying servers, and the result is lower CAPEX and OPEX.

There are many benefits when operators use NFV in today's networks. One of them is reduced time-to-market when deploying new services, to support changing business requirements and market opportunities. Decoupling network functions from the physical equipment they run on also helps telecom companies consolidate network equipment onto servers, storage, and switches located in data centers. In the NFV architecture, the component responsible for handling a specific network function (e.g., an IPsec/SSL VPN gateway) running in one or more VMs is the Virtual Network Function (VNF).

NFV Infrastructure

Figure 1 - NFV Infrastructure

As Figure 1 depicts, the whole system of physical and virtual components is called the NFV Infrastructure (NFVI). The NFVI can differ between deployments and depends on the vision of the service provider; for example, it can be built on Docker, on a hypervisor, or on a mix of both.

Service Provider NFV Deployment

Service providers may use their own OSS/BSS to provision their infrastructure and host services for their customers and users. Beyond that, other protocols and components are needed to help service providers build fully automated end-to-end services with NFV. To meet this demand, ETSI released a framework that shows the functional blocks and reference points of NFV. The main reference points and execution reference points are shown with solid lines and are in the scope of NFV; these are potential targets for standardization. The dotted reference points are available in present deployments but might need extensions to handle network function virtualization; however, they are not the main focus of NFV at present. Figure 2 illustrates the ETSI NFV framework architecture, taken from the ETSI document.

Figure 2 - ETSI NFV Framework

A key component in the NFV architectural framework is the virtualization layer. This layer abstracts and logically partitions the physical hardware resources and anchors the VNFs to the underlying virtualized infrastructure. The primary tool to realize the virtualization layer is the hypervisor, and the NFV architectural framework should accommodate a diverse range of hypervisors. On top of such a virtualization layer, the primary means of deploying a VNF is to instantiate it in one or more VMs. The virtualization layer shall therefore provide open, standard interfaces toward both the hardware resources and the VNF deployment containers (e.g., VMs), in order to ensure independence among the hardware resources, the virtualization layer, and the VNF instances. VNF portability shall be supported over such a heterogeneous virtualization layer.

The decoupling of a VNF from the underlying hardware resources presents new management challenges: mapping end-to-end services onto the NFV network, instantiating VNFs at appropriate locations to realize the intended service, allocating and scaling hardware resources for the VNFs, keeping track of where VNF instances are located, and so on. The decoupling also makes it harder to detect faults and correlate them for a successful recovery across the network. NFV Management and Orchestration needs to be designed with these challenges in mind. To perform its task, NFV Management and Orchestration should work with existing management systems such as the OSS/BSS, hardware resource management systems, and the CMS used as the Virtualized Infrastructure Manager, augmenting their ability to manage virtualization-specific issues. In addition, SDN (Software-Defined Networking) can bring agility and lower provisioning times to the network alongside NFV.

Published - Tue, 24 May 2022

Created by - Orhan Ergun

Bilateral Peering and Multilateral Peering

Bilateral Peering is when two networks negotiate with each other and establish a direct BGP peering session. In one of the previous posts, Settlement Free Peering was explained; in this post, both Bilateral and Multilateral Peering will be explained, and both are deployment modes of Settlement Free Peering. Bilateral Peering is generally done when there is a large amount of traffic between two networks. Tier 1 operators only do Bilateral Peering, as they don't want to peer with anyone other than other Tier 1 operators; the rest of the companies are their potential customers, not their peers.

Multilateral Peering

As mentioned above, Bilateral Peering offers the most control, but some networks with very open peering policies may wish to simplify the process and simply "connect with everyone". To facilitate this, many Exchange Points offer "multilateral peering exchanges", or MLPEs. An MLPE is typically an exchange point that offers a route server, allowing a member to establish a single BGP session and receive routes from every other member connected to the MLPE. Effectively, connecting to the MLPE is the same as agreeing to automatically peer with everyone else connected to it, without requiring the configuration of a BGP session for every peer. Public Peering and MLPE are almost the same thing, and the terms are used mostly interchangeably.

Objectives for an interconnection agreement to consider are:

Provides for cost savings and performance improvements
Ensures the exchange of traffic is secure, stable, and resilient
Establishes timely cooperation for security and network incidents
Usually includes a non-disclosure agreement

Business terms are needed as part of these objectives. Business terms are negotiated by the network owner; normally, the team of engineers and others responsible for the health and welfare of the network negotiate the utilization, capacity, and management parameters. Legal terms are negotiated by lawyers. While plain language is always best, legal language is what makes the agreement enforceable. Networks should cover all the necessary business, technical, and legal points in the scope, such as term, jurisdiction, and venue, and include all necessary parties in the conversation. Including a wide audience in the conversation helps to set realistic business goals.

Published - Tue, 24 May 2022

Created by - Orhan Ergun

What is a CDN - Content Delivery Network?

Content Delivery Network companies replicate content caches close to large user populations. They don't provide Internet access or transit service to customers or ISPs; they distribute the content of the content providers. Today, many Internet Service Providers have started their own CDN businesses as well. An example is Level 3, which provides CDN services from its POP locations spread all over the world. Content delivery networks reduce latency and increase service resilience, because content is replicated to more than one location. More popular content is cached locally, and the least popular content can be served from the origin.

Why are CDNs - Content Delivery Networks - necessary?

Before CDNs, content was served from the source location, which increased latency and thus reduced throughput. Content was delivered from the central site, and user requests had to reach the central site where the source was located.

Figure 1 - Before CDN

With CDN technology, content is distributed to the local sites.

Figure 2 - After CDN

Amazon, Akamai, Limelight, Fastly, and Cloudflare are among the largest CDN providers, offering services to content providers all over the world. Some major content providers, such as Google, Facebook, and Netflix, prefer to build their own CDN infrastructures and have become large CDN providers themselves. CDN providers have servers all around the world, located inside Service Provider networks and at Internet Exchange Points. They run thousands of servers and serve a huge amount of Internet content. CDNs are highly distributed platforms.

As mentioned before, Akamai is one of the largest Content Delivery Networks. The number of servers, the number of countries, daily transactions, and more information about Akamai's content distribution network are as follows:

150,000 servers
Located in 92 countries around the world
Delivers over 2 trillion Internet interactions daily
Delivers approximately 30% of all Web traffic
Their customers include all top 20 global eCommerce sites, the top 30 media companies, 7 of the top 10 banks, 9 of the largest newspapers, and 9 of the top 10 social media sites
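The cache-locally, serve-from-origin-on-a-miss behavior described above can be sketched as a toy edge cache. `EdgeCache` and `fetch_from_origin` are illustrative names for this sketch, not a real CDN API; real CDNs use far more sophisticated popularity and TTL policies than the simple LRU shown here:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy CDN edge node: serve popular content locally, fetch the
    rest from the origin and cache it, evicting the least recently
    requested item when the cache is full."""

    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch_from_origin = fetch_from_origin
        self.cache = OrderedDict()        # LRU order: oldest first

    def get(self, url):
        if url in self.cache:             # cache hit: serve from the edge
            self.cache.move_to_end(url)
            return self.cache[url], "edge"
        content = self.fetch_from_origin(url)   # miss: go to the origin
        self.cache[url] = content
        if len(self.cache) > self.capacity:     # evict the least popular
            self.cache.popitem(last=False)
        return content, "origin"
```

For example, with `cache = EdgeCache(2, origin_fetch)`, the first request for a URL returns `("...", "origin")` and every repeat request returns `("...", "edge")`, which is exactly the latency win the figures above illustrate.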

Published - Tue, 24 May 2022

Created by - Orhan Ergun

What does OTT – Over the Top – Mean? OTT Providers

What is OTT – Over the Top – and how do OTT providers work? Over the Top is a term used to refer to Content Providers, so when you hear "Over the Top Providers", they are Content Providers. The content can be any application or service: instant messaging services (Skype, WhatsApp), streaming video services (YouTube, Netflix, Amazon Prime), Voice over IP, and many other voice or video content types. This post is based on information from my latest book, 'Service Provider Networks Design and Architecture, First Edition'. If you want to understand telecom (distance communications) and the Service Provider business, I highly recommend you purchase this book.

An Over-the-Top (OTT) provider delivers content over the Internet and bypasses traditional private networks. Some OTT providers do distribute their content through their own CDNs over their private networks, though (Google, YouTube, Akamai); the content is still delivered to users over traditional ISP networks. The creation of OTT applications has created a conflict between companies that offer similar or overlapping services, and traditional ISPs and telcos have had to anticipate challenges from third-party firms that offer OTT applications and services. Consider, for example, the conflict between a content provider such as Netflix and a cable access provider such as Comcast: consumers still pay the cable company for access to the Internet, but they might want to get rid of their cable TV service in favor of cheaper streaming video over the Internet. While the cable company wants to offer fast downloads, there is an inherent conflict of interest in supporting a competitor, such as Netflix, that bypasses cable's traditional distribution channel.

The conflict between the ISPs and the OTT providers led to the Net Neutrality discussion. Net Neutrality is the principle that data should be treated equally by ISPs, without favoring or blocking particular content or websites. Those in favor of Net Neutrality argue that ISPs should not be able to block access to a website owned by a competitor or offer "fast lanes" that deliver data more efficiently for an additional cost.

OTT services such as Skype and WhatsApp are banned by some operators in some Middle East countries, as OTT applications take away part of their revenue. For example, in 2016, social media applications such as Snapchat, WhatsApp, and Viber were blocked by the two UAE telecom companies, Du and Etisalat, which claimed that these services were against the country's VoIP regulations. In fact, the UAE is not the only country blocking access to some OTT applications and services; many countries in the Middle East have followed the same model. They either completely blocked access to some OTT applications or throttled them, so that voice conversations over these services became nearly impossible.

If you liked this post and would like to see more, please let me know in the comment section below. Share your thoughts so I can continue to write similar ones.

Published - Tue, 24 May 2022

Created by - Orhan Ergun

What's New in the Cisco CCDE v3 Exam?

Currently, in 2022, the CCDE exam is at version 3. There are many changes in CCDE v3 compared to CCDE v2. In this blog post, some of the new changes will be explained, the things that stay the same will be highlighted, and I will share my take on these changes.

Before the technical changes, let's start with the change in exam result announcements. CCDE v2 exam results were announced in 8-12 weeks. This effectively allowed CCDE exam candidates to schedule the exam at most twice a year: students wouldn't schedule the next exam after a fail, because the result announcement date and the new exam date usually overlapped. This has now changed: with CCDE v3, exam results are announced within 48 hours, almost like the CCIE exams.

The CCDE v3 Practical Exam is now delivered at Cisco CCIE lab locations. The CCDE v2 lab/practical exam was done in professional Pearson Vue centers; there were around 300 of them, across many different countries. Unfortunately, this change may not be good for many exam takers, as Cisco CCIE lab locations are not available in many countries and are not as common as Pearson Vue centers.

CCDE v3 exam scheduling is done via the CCIE/CCDE portal, and registration opens 90 days before the exam date.

The CCDE v3 exam will run every year, with six sittings expected (previously, with CCDE v2, it was 4, sometimes even 3). The CCDE v2 exam was held every 3 months, usually 4 times a year; if you failed, because of the result announcement policy you couldn't attend the next sitting, only the one after, so effectively it was twice a year. Because CCDE v3 results are announced within 48 hours and the exam runs 6 times a year, if you fail you can attend the next sitting: there is enough time to schedule the next exam and, if travel is necessary, to arrange a hotel and a flight. Making the exam more frequent should increase its popularity, so I consider this a good move as well.

Introduction of Core and Area of Expertise modules in the Cisco CCDE v3 exam

For many years, we have been hearing from students asking whether there would be a Data Center or Service Provider, Collaboration or Security expertise track, etc. Cisco has now addressed this. There are 3 different Areas of Expertise; you can choose any of them, and one of the practical scenarios will be based on your selection. Similar to CCDE v2, in CCDE v3 we have a total of 4 scenarios and a total of 8 hours for all of them. Each scenario is limited to a maximum of 2 hours, and even if you finish a scenario in less than 2 hours, the remaining time is not added to the next scenario. 3 scenarios make up the Core module and 1 scenario is the Area of Expertise scenario.

The Core module covers technologies all candidates must know. It covers Enterprise technologies (no Data Center/Service Provider), and it is vendor-agnostic. The Area of Expertise module covers specific technology areas: more detailed knowledge is expected, and Cisco-specific technologies may appear in it.

CCDE v3 Area of Expertise options:

Large-Scale Networks
On-prem and Cloud Services
Workforce Mobility

You can select any of the above Areas of Expertise; you will have 2 hours in the CCDE v3 exam and around 15-25 questions in the scenario. For now, this is enough; for the other changes and CCDE v3-related content, please check the other posts and our free and paid courses.

Published - Mon, 23 May 2022

Created by - Orhan Ergun

BGP Allowas-in feature Explained in 2022

The BGP Allowas-in feature needs to be understood well in order to understand BGP loop prevention behavior. This post also explains why the BGP Allowas-in configuration might create a dangerous situation and what the alternatives to BGP Allowas-in are.

What is the BGP Allowas-in feature?

The BGP Allowas-in feature is used to allow a BGP speaker to accept BGP updates even if its own BGP AS number appears in the AS-Path attribute. The default EBGP loop prevention rule is: if a BGP speaker sees its own AS number in a BGP update, the update is rejected and the advertisement cannot be accepted. But there are situations where the prefixes need to be accepted, and there are two options to overcome this behavior: accepting the BGP update even though the AS number is in the AS-Path list, with the BGP Allowas-in feature, or changing the AS-Path itself, with the BGP AS Override feature.

Without BGP Allowas-in, let's see what would happen. In this topology, the customer's BGP AS is AS 100 and the customer has two locations. The Service Provider in the middle is, let's say, providing MPLS VPN service for the customer. As you can understand from the topology, the Service Provider runs EBGP with the customer, because they have different BGP Autonomous Systems; the Service Provider in the topology has BGP AS 200. When the left customer router advertises a BGP update message to R2, R2 sends it to R3 and R3 sends it to R4, but R4 won't accept the BGP update: when R4 receives the update, it checks the AS-Path attribute and sees its own BGP AS number in the AS Path, so the update is rejected by default, due to EBGP loop prevention. If a router sees its own BGP AS number anywhere in the AS Path (origin AS, any mid-path AS, or last AS), it doesn't accept the BGP update.

But what if, as in the picture above, the customer wants to, or needs to, use the same BGP AS number in every location they have? In this case, they need to accept the BGP update; otherwise, end-to-end reachability cannot be achieved. There are two solutions to this requirement. By the way, not accepting prefixes/BGP updates is not a problem in itself; it is just how BGP works.

One solution is AS Override: R2 receives a BGP update from R1 with AS 100, then R3 receives it from R2, and in the BGP AS Path it is still AS 100 at R3. With the BGP AS Override feature, R3 can replace the customer's BGP AS number with its own, so during the advertisement to R4, R3 replaces BGP AS 100 with BGP AS 200. Finally, when R4 receives the update, it won't see its own AS number in it, so R4 accepts the announcement and end-to-end connectivity is achieved.

With the BGP Allowas-in feature, when R3 advertises the BGP update to R4, it doesn't change the AS Path; it sends the update to R4 with BGP AS 100 as the origin AS. The BGP configuration at R4 then allows the prefixes even though the origin AS in the BGP AS-Path list is AS 100, which is R4's own AS number as well.
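The loop check and the Allowas-in relaxation described above can be sketched as a toy model. This is not vendor code; `accept_update` is an illustrative name, and real implementations apply the check per address family with more options:

```python
def accept_update(local_as, as_path, allowas_in=0):
    """Return True if an EBGP update with this AS path is accepted.

    Default behavior (allowas_in=0): any occurrence of our own AS in
    the path counts as a loop and the update is rejected.  With
    allowas-in N, up to N occurrences of our own AS are tolerated.
    """
    return as_path.count(local_as) <= allowas_in

# R4 (AS 100) receives a route originated by the remote customer site,
# also AS 100, via the provider (AS 200):
print(accept_update(100, [200, 100]))                # False - rejected
print(accept_update(100, [200, 100], allowas_in=1))  # True - accepted
```

This shows why, without Allowas-in, the second customer site can never learn the first site's prefixes when both sites share AS 100.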

Published - Mon, 23 May 2022

Created by - Orhan Ergun

CCNP ENCOR vs ENARSI

CCNP ENCOR vs ENARSI: are they even related? Or should I ask, "comparable"? Actually, both. In this blog, we will review both exams, talk about their agendas, discuss which one should be taken before the other, and look at what each one gets you.

Relation between CCNP ENCOR and ENARSI

Both exams belong to the Cisco CCNP Enterprise certification, and passing each one individually grants you a certificate of its own:

Cisco Certified Specialist - Enterprise Core
Cisco Certified Specialist - Enterprise Advanced Routing and Services

So it is a win-win scenario. Still, the question is which one to take first, and that is answered below.

Difference between the CCNP ENCOR and ENARSI agendas

ENCOR first: it is a general technology core exam, focusing on 7 domains of knowledge:

Architecture
Virtualization (Device, Path, and Network Virtualization)
Infrastructure (Switching, Routing, and IP Services)
Assurance
Security
WLAN
Automation

and with NO DEEP DIVE into any of these! ENARSI, meanwhile, covers:

Virtualization (Path Virtualization)
Infrastructure (Routing and IP Services)
Security

and that's it! There is no Architecture. In Virtualization, there is no Device or Network Virtualization, and the Path Virtualization content differs from ENCOR's: in ENCOR you get GRE over IPsec, while in ENARSI there is mGRE with IPsec. Infrastructure routing in ENCOR asks you to describe EIGRP, configure normal-area OSPF, and configure directly connected eBGP; ENARSI routing is almost unlimited, close to CCIE level, with all of EIGRP, OSPF, and BGP. IP Services and Security are similar in both. There is no WLAN or Automation at all in ENARSI.

Take ENCOR before ENARSI; it's better! All of the topics/protocols covered in ENARSI are also mentioned in ENCOR, with of course much shallower coverage in ENCOR, so ENCOR is a good introduction to the technologies that you will dive deep into with ENARSI.

CCNP ENCOR vs ENARSI study plan

ENCOR will take more time: with 7 domains of knowledge, even though there is no deep dive, the agenda is large enough to need around 30% more preparation time than the 4 modules of ENARSI.

CCNP ENCOR vs ENARSI exams from Cisco

With ENCOR: 100+ written questions, to be answered in 120 minutes, plus 30 minutes for non-native speakers.
With ENARSI: 60+ written questions, to be answered in 90 minutes, plus 30 minutes for non-native speakers.

Both exams can be taken on-site or from home/office. Neither exam supports backward navigation. Each exam alone grants a different badge/certificate.

Published - Mon, 09 May 2022

Created by - Orhan Ergun

BGP AS Override Feature Explained in 2022

BGP AS Override needs to be understood well in order to understand BGP loop prevention behavior. This post also explains why BGP AS Override might create a dangerous situation and what the alternatives to BGP AS Override are.

What is BGP AS Override?

The BGP AS Override feature is used to change the AS number or numbers in the AS-Path attribute. Without BGP AS Override, let's see what would happen. In this topology, the customer's BGP AS is AS 100 and the customer has two locations. The Service Provider in the middle is, let's say, providing MPLS VPN service for the customer. As you can understand from the topology, the Service Provider runs EBGP with the customer, because they have different BGP Autonomous Systems; the Service Provider in the topology has BGP AS 200. When the left customer router advertises a BGP update message to R2, R2 sends it to R3 and R3 sends it to R4, but R4 won't accept the BGP update: when R4 receives the update, it checks the AS-Path attribute and sees its own BGP AS number in the AS Path, so the update is rejected by default, due to EBGP loop prevention. If a router sees its own BGP AS number anywhere in the AS Path (origin AS, any mid-path AS, or last AS), it doesn't accept the BGP update.

But what if, as in the picture above, the customer wants to, or needs to, use the same BGP AS number in every location they have? In this case, they need to accept the BGP update; otherwise, end-to-end reachability cannot be achieved. There are two solutions to this requirement. By the way, not accepting prefixes/BGP updates is not a problem in itself; it is just how BGP works.

One of the solutions is AS Override: R2 receives a BGP update from R1 with AS 100, then R3 receives it from R2, and in the BGP AS Path it is still AS 100 at R3. With the BGP AS Override feature, R3 can replace the customer's BGP AS number with its own, so during the advertisement to R4, R3 replaces BGP AS 100 with BGP AS 200. Finally, when R4 receives the update, it won't see its own AS number in it, so R4 accepts the announcements and end-to-end connectivity is achieved. In the next post, we will look at what problems can arise if the BGP AS Override feature is used.
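The AS-Path rewrite that R3 performs toward R4 can be sketched as a toy function. This is an illustration of the behavior described above, not vendor code; `advertise_with_override` is my own name for it:

```python
def advertise_with_override(as_path, local_as, neighbor_as):
    """Model an EBGP advertisement with as-override enabled:
    rewrite every occurrence of the neighbor's AS to our own AS,
    then prepend our own AS, as any EBGP speaker does."""
    rewritten = [local_as if asn == neighbor_as else asn for asn in as_path]
    return [local_as] + rewritten

# R3 (provider, AS 200) learned the customer route with AS path [100].
# Toward R4 (customer, AS 100) it advertises [200, 200] instead of
# [200, 100], so R4 no longer sees its own AS and accepts the update.
print(advertise_with_override([100], local_as=200, neighbor_as=100))
# -> [200, 200]
```

Note that the path length is preserved, so BGP best-path selection is unaffected; only the loop-detection information about the remote customer site is hidden, which is exactly the danger discussed in the next post.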

Published - Fri, 22 Apr 2022