Total 20 Blogs

Created by - Stanley Avery

QoS Models: Let's Identify and Compare Them

In recent years, Quality of Service (QoS) models have become increasingly important in networking. QoS models allow network administrators to identify and prioritize different types of traffic, ensuring that critical data is delivered promptly and reliably. In this blog post, we'll take a look at some of the most common QoS models used in networking, compare and contrast them, and discuss their benefits and drawbacks. By understanding the different QoS models available, network administrators can make more informed decisions about which model best fits their specific needs.

Let's Start with the Basics: What Are QoS Models?

When it comes to networking and communication, Quality of Service (QoS) models act as a framework for ensuring that data transmission meets specific requirements. These models prioritize certain types of network traffic and can allocate bandwidth to guarantee efficient operation. QoS models are essential in industries where reliable communication is crucial, such as healthcare or finance. Within a QoS model, both hardware and software components work together to support the necessary network functions. The main QoS models are best-effort services, integrated services, and differentiated services; each offers unique benefits and may be more suitable for particular applications or networks.

Different QoS Models

When setting up a network, QoS models can be used to prioritize and manage different types of network traffic. There are three main QoS models: best effort, integrated services (IntServ), and differentiated services (DiffServ). Let's talk about these three models and compare them to each other.

1. Best-Effort Services

When it comes to QoS models, "best effort" is often misunderstood as meaning no effort at all. In fact, the term refers to a service model in which traffic is handled on a first-come, first-served basis.
While this model does not prioritize certain types of traffic over others, it can still provide a reliable level of service, and it allows for maximum network efficiency and cost-effectiveness. In some cases, best effort may be the right choice for delivering quality service; it all depends on the specific needs and priorities of the network in question.

2. Integrated Services (IntServ)

The Integrated Services (IntServ) approach is a QoS model that allows for individualized treatment of network traffic. It uses a reservation protocol (RSVP) to reserve resources for particular data flows. This approach can accurately predict and guarantee defined performance levels, but it is not scalable and requires more administrative control than other QoS models, such as DiffServ. IntServ was developed in the 1990s to meet the growing demand for real-time applications such as voice and video conferencing, but it has been largely replaced by DiffServ in modern networks due to its limited scalability. However, IntServ may still be used in service provider networks for specific high-priority services.

3. Differentiated Services (DiffServ)

Differentiated Services, or DiffServ, is a QoS model that allows network traffic to be managed and prioritized based on predetermined criteria, such as the type of data being transmitted or the source of the transmission. DiffServ uses traffic-conditioning techniques, such as marking and policing, to ensure that certain types of traffic receive the appropriate level of service. In contrast to other QoS models, DiffServ can handle a large number of traffic classes with relatively simple configurations. However, its effectiveness relies on proper implementation and adherence to agreed-upon standards.

Comparing the QoS Models

When it comes to traffic management, there are a few different approaches.
The best-effort approach treats all traffic the same, without prioritizing any particular type. IntServ, on the other hand, prioritizes certain types of traffic by reserving network resources for specific flows. DiffServ takes a different approach, categorizing traffic into classes and giving each class a certain level of priority. While best effort is the simplest approach, it can lead to slower speeds and less reliable connections for some users. In contrast, both IntServ and DiffServ offer more customized and efficient handling of traffic; however, they require more sophisticated networking equipment and can be more costly to implement and maintain. Ultimately, the best option depends on the specific needs and resources of an individual or organization.

Final Words

We've looked at three different QoS models and their applicability in various situations. By understanding the strengths and weaknesses of each model, we can more easily select the right one for our specific needs.
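DiffServ marking is visible even at the application layer. As a minimal sketch (Linux-oriented, and assuming a host where setting the `IP_TOS` socket option is permitted), a sender can request Expedited Forwarding treatment by writing the DSCP value into the ToS byte of its outgoing packets; routers configured for DiffServ then map that marking to the appropriate queue:

```python
import socket

# DSCP 46 (Expedited Forwarding) is the per-hop behavior commonly used for voice.
EF_DSCP = 46

def mark_ef(sock):
    """Set the DSCP bits on the socket's outgoing packets.

    The ToS byte carries the DSCP value in its upper six bits,
    with the two ECN bits below it, hence the shift by two.
    """
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_ef(sock)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 (0xB8) on Linux
sock.close()
```

Keep in mind that the marking is only a request: whether the network honors it depends entirely on how the routers along the path are configured.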

Published - Sun, 13 Nov 2022

Created by - Stanley Avery

IGMP Guide: How Does IGMP Work?

If you're interested in networking, you've probably heard of IGMP. But what is IGMP, and what does it do? This blog post will explain everything you need to know about IGMP: what it is, how it works, and why it's important. By the end, you'll have a better understanding of this essential networking protocol. So, let's get started!

What Does IGMP Mean?

IGMP, or Internet Group Management Protocol, is a communication protocol used in IPv4 networks for multicast group management. It allows a host to inform its local router that it wants to receive messages for a specified multicast group, and it allows routers to identify which hosts on their network belong to specific multicast groups so that multicast traffic is delivered efficiently. In other words, IGMP improves the performance and efficiency of IPv4 multicast networks by managing hosts' membership in various multicast groups. Without IGMP, IPv4 networks could not efficiently support multimedia streaming or other applications that rely on multicasting.

Multicast

In networking, multicast refers to transmitting a single packet to multiple recipients at once using a specific IP address. This differs from unicast, which sends a separate packet to each individual recipient, and broadcast, which sends the same packet to all possible recipients on a network. Multicast is useful for efficiently sending large amounts of data to a selected group, such as streaming a video conference to multiple participants or updating software on multiple devices simultaneously. However, it does require that all receivers be members of the specified multicast group for the transmission to be successful.

Let's Explain How IGMP Works

Multicast IP addresses allow a single packet to be sent to multiple hosts at once, making multicast an efficient way of delivering information.
However, for multicast to work, a host must first notify the multicast router that it wants to join a certain multicast group. This is where IGMP comes in. The Internet Group Management Protocol manages these multicast groups and ensures that the multicast routers only send packets to those hosts that have requested them. When a host wants to join a multicast group, it sends an IGMP membership report to its local router. The router then passes this information on to other routers in the network, allowing the host to receive multicast traffic for that group. Similarly, when a host wants to leave a multicast group, it sends an IGMP leave message, which is propagated throughout the network. This ensures that hosts only receive multicast traffic for groups they have specifically joined, thus conserving network resources. In addition to managing membership, IGMP routers also periodically send out query messages to gauge interest in groups and prune those with no active members.

Types of IGMP Messages

When managing membership in an IGMP network, there are three types of messages that may be used. A membership query is sent by a multicast router to determine which hosts are members of a specific multicast group. A membership report is sent by a host to indicate its membership in a particular group. Finally, a leave group message is sent by a host when it wishes to leave a multicast group. These messages allow for efficient management of multicast group membership and help ensure that traffic is only sent to interested recipients.

Membership Query

Membership queries are sent by multicast routers, and they come in two forms: general membership queries and group-specific membership queries. General membership queries are sent by routers to discover which multicast groups have members on the network.
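The join and leave reports described above are normally triggered by an application asking its operating system to join a group. A minimal Python sketch (the group address and port are arbitrary examples, and the join may be refused on hosts without a multicast-capable interface, which the code tolerates):

```python
import socket
import struct

def join_group(sock, group, iface="0.0.0.0"):
    """Ask the kernel to join `group`; this emits an IGMP membership report."""
    # The mreq structure the kernel expects: group address + local interface.
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return mreq

def leave_group(sock, mreq):
    """Dropping membership makes the kernel send an IGMP leave message."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 5007))
try:
    mreq = join_group(sock, "239.1.2.3")  # 239/8 is the administratively scoped range
    # ... sock.recvfrom(2048) would now receive datagrams sent to the group ...
    leave_group(sock, mreq)
except OSError:
    pass  # hosts without a multicast-capable interface may refuse the join
sock.close()
```

The application never crafts IGMP packets itself; the kernel generates the reports and leave messages on its behalf as a side effect of these socket options.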
Group-specific membership queries are used by routers to check whether there are any members of a specific multicast group on the network. Both types of queries help routers maintain an efficient multicast system, ensuring that data is only sent to the devices that need it.

Membership Report

A membership report is a message sent by a host to indicate its membership in a particular multicast group. This message allows routers on a network to accurately track which hosts belong to which multicast groups so they can route multicast traffic efficiently. A membership report can be an initial joining report or an active membership report, indicating that the host has just joined a group or is actively participating in it, respectively.

Leave Group Message

A leave group message is sent from a host to a router when the host wants to stop receiving multicast traffic for a certain group. This allows for efficient communication within a multicast network, as hosts that no longer wish to receive data for a particular group can easily notify the network, reducing unnecessary traffic and improving overall network performance.

IGMP Versions

There are three versions of IGMP: version 1, version 2, and version 3. IGMP version 1 was the first iteration of the protocol, and it allows a host to signal its interest in joining a particular multicast group. IGMP version 2 builds on version 1 by allowing hosts to explicitly signal that they are leaving a multicast group. IGMP version 3 adds source filtering, letting a host report exactly which sources it wants to receive traffic from, which makes group membership reporting more efficient.

IGMP v2 vs. IGMP v3

As technology progresses, new versions of IGMP are released to address specific issues and enhance overall functionality. IGMPv2, published in 1997 (RFC 2236), added the capability for a host to report leaving a group. IGMPv3, published in 2002 (RFC 3376), introduced support for source-specific multicast and a more efficient way to manage group membership by allowing hosts to report their interest in specific sources within a multicast group. IGMPv3 also provides backward-compatibility modes, allowing it to interoperate with older versions of the protocol during an upgrade process. Ultimately, both IGMPv2 and IGMPv3 serve the same purpose of managing multicast group membership, but IGMPv3 offers greater flexibility and efficiency.

IGMP Applications

One popular application of IGMP is Internet television streaming. When viewers watch a live broadcast, the streaming service uses IGMP so that the content is delivered only to those viewers currently tuned in, rather than being transmitted to every device on the network. Another common use for IGMP is in online gaming, where it can manage membership in multiplayer groups and distribute shared data to participants. Finally, IGMP also has various office uses, such as distributing files or software updates to many machines at once and holding virtual meetings via video conferencing software. Overall, the capabilities offered by IGMP make it a valuable tool across a variety of industries and applications.

Final Words

IGMP is a critical protocol for networking. It allows hosts on a network to tell routers which multicast groups they want to subscribe to. By understanding how IGMP works, you can build smoother, more efficient networks that are better equipped to handle today's high-bandwidth applications and media streaming services.
If you’re looking for a deeper dive into IGMP, or need help setting up your own multicast network, check out this course – we’d be happy to help!

Published - Sun, 06 Nov 2022

Created by - Stanley Avery

Collision Domain and Broadcast Domain

Are you familiar with the terms collision domain and broadcast domain? If not, don't worry: you're not alone. Many people are unsure of the differences between these two networking concepts. This blog post will define both terms and explain their key distinctions. So, if you're curious about the differences between collision domains and broadcast domains, keep reading!

What is a Collision Domain?

Before discussing the differences between collision and broadcast domains, let’s discuss them separately. A collision domain is a network segment in which transmission collisions can occur. A collision happens when two devices in the same domain attempt to send a frame at the same time, resulting in both frames being corrupted and needing to be resent. This leads to slower network speeds and reduced efficiency. Because every port on a hub shares the same collision domain, collisions are common in a hub environment. Each port on a bridge, switch, or router, however, is its own separate collision domain.

How to Avoid Collisions?

As mentioned above, a collision occurs when two computers try to send data at the same time, resulting in lost or corrupted data. However, several steps can be taken to reduce collisions. One solution is to use a switch instead of a hub for your network connections: switches forward frames only to the relevant port, whereas hubs repeat data to all connected devices, increasing the chances of collisions. Another option is to segment large networks into smaller ones using routers, reducing the number of devices on each segment and with it the risk of collisions. Lastly, proper cable management ensures efficient and organized data transmission, preventing network congestion and minimizing collisions. Implementing these solutions allows you to avoid collisions and maintain a smooth-running network.

What is a Broadcast Domain?
A broadcast domain is a logical division of a computer network in which all nodes can reach each other by broadcast transmission. Network devices such as routers divide larger networks into smaller segments, and each segment then operates as its own independent broadcast domain. In addition to increasing network efficiency and performance, segmentation also improves security by limiting the spread of broadcasts and potential malware infections. However, care must be taken to ensure that appropriate communication between segments remains possible with proper configuration. Overall, understanding and implementing effective broadcast domains can greatly improve the functioning of any computer network.

How to Avoid or Handle Large Broadcast Domains?

When designing a network, one important consideration is the size of its broadcast domains. Oversized broadcast domains often lead to degraded network performance and can even cause network outages. One way to keep broadcast domains small is to set up VLANs (virtual local area networks) properly: limiting each VLAN to a smaller group of devices reduces the size and impact of broadcast traffic. Additionally, it may be necessary to use a router to separate different broadcast domains and control traffic flow between them. Where an oversized broadcast domain has been created unintentionally, options for handling it include adjusting the VLAN configuration or introducing a Layer 3 switch to route between segments. By keeping an eye on broadcast domains and taking the necessary steps to manage them, we can ensure smooth and efficient network performance.

Tip: We recommend checking out this course: CCIE Enterprise Infrastructure v1.0

What are the Differences Between Collision Domains and Broadcast Domains?

Both collision domain and broadcast domain are common terms in the networking world.
While closely related, these terms differ in several ways. Let's go through the key differences between collision domains and broadcast domains.

- A collision domain is the part of a network where frames can collide with one another; a broadcast domain is the part of a network that a broadcast frame can reach.
- Collisions can only occur between devices in the same collision domain, while a broadcast domain is the group of devices that can reach each other with a broadcast without crossing a router.
- A broadcast domain typically corresponds to a single IP subnet, while a collision domain is a smaller, physical-layer concept that says nothing about IP addressing.
- Collisions are common when multiple devices share the same wire, as on a hub. In a fully switched, full-duplex environment, collisions do not occur.
- Switches and bridges break up collision domains, but they do not break up broadcast domains (unless VLANs are configured). Only routers break up broadcast domains.
- Every port on a router, bridge, or switch is its own collision domain, whereas all ports on a hub share one. Conversely, all ports on a switch or hub are in the same broadcast domain by default, while each router port is a separate broadcast domain.

Tip: We also recommend this course about networking: Network Fundamentals Course

Summary

Collision domains and broadcast domains describe two different ways of scoping network traffic. It’s important to understand their differences so you can design your network accordingly. We hope this article has helped clarify any confusion and given you a better understanding of these terms. Thanks for reading!
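These counting rules can be turned into a small worked example. The sketch below is a hypothetical helper using the classic textbook counting method: each switched point-to-point link is one collision domain, each hub together with its attached hosts is one collision domain, and each router interface starts a new broadcast domain.

```python
def count_domains(switched_links, hub_segments, router_interfaces):
    """Count domains with the classic textbook method.

    switched_links    -- point-to-point links on switch/bridge/router ports
    hub_segments      -- hubs (each hub plus its hosts = 1 collision domain)
    router_interfaces -- router interfaces (each = 1 broadcast domain)
    """
    collision_domains = switched_links + hub_segments
    broadcast_domains = max(router_interfaces, 1)  # no router: one flat domain
    return collision_domains, broadcast_domains

# A router with two interfaces: one to a switch with 3 hosts
# (router-switch link + 3 host links = 4 switched links),
# one to a hub with 3 hosts (1 hub segment).
print(count_domains(switched_links=4, hub_segments=1, router_interfaces=2))  # (5, 2)
```

The example topology yields five collision domains but only two broadcast domains, which illustrates the asymmetry discussed above: switches multiply collision domains while only the router splits the broadcast domain.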

Published - Sat, 05 Nov 2022

Created by - Stanley Avery

NAT Overloading

Are you curious about NAT overloading? Do you want to learn more about what it is and how it works? If so, you're in luck: in this blog post, we'll discuss everything you need to know about NAT overloading. We'll talk about what it is, how it works, and why it's a valuable tool for your network. So if you're ready to learn more, keep reading!

What Is NAT Overloading?

Before we talk about NAT overloading, we should talk a little about what NAT is. Network Address Translation (NAT) allows businesses and homes to use a single public IP address for multiple devices on their network. This is done by modifying the network address information in the IP header of packets while they are in transit across a traffic-routing device. A single device, such as a router, can therefore act as an intermediary between the private network and the public Internet. NAT overloading, also known as Port Address Translation (PAT), is a NAT technique that allows multiple devices on a private network to access the Internet using a single public IP address, by translating the private IP address and port of each device to the shared public IP address and a unique port.

How Does NAT Overloading Work?

The only way client devices in a local area network can communicate with the Internet is through a router with a public IP address that acts as an intermediary. NAT overloading lets the router arbitrate between client devices by replacing the private IP address and port number of each device with its own public IP address and an available port number; this method relies on TCP and UDP port numbers. When traffic passes from the local area network to the Internet, each packet's source address is automatically rewritten from a private address to the public address, and the router records each active connection's addresses and port numbers in its NAT table.
When the router receives a response, it looks up the connection in the NAT table to determine which private IP address on the LAN should receive it.

Pros and Cons of NAT Overloading

One advantage of NAT overloading is the ability to conserve public IP addresses: many devices can connect to the Internet while consuming only a single address. Another benefit is enhanced security, as NAT hides the identity and location of devices on the private network from external networks. However, there are also drawbacks. NAT overloading can add latency, and it conflicts with certain applications that require end-to-end connectivity or embed addresses in their payload. Additionally, because it hides the identities of devices, it can make network issues harder to troubleshoot. Overall, it is important to weigh these benefits and drawbacks when deciding whether to use NAT overloading in a particular network.

Configuring NAT Overload on a Cisco Router

Configuring NAT overload on a Cisco router involves declaring the inside and outside interfaces, creating an access list to define which traffic should be translated, and applying the translation rule with the overload keyword. First, the ip nat inside and ip nat outside commands tell the router which interface connects to the internal network and which connects to the external network. Then, an access list is created with the access-list command to define which source addresses should be translated, and the ip nat inside source command with the overload keyword ties the access list to the outside interface. Finally, the show ip nat translations and debug ip nat commands let you verify the translations. Following these steps will successfully configure NAT overload on a Cisco router.
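Putting these steps together, a minimal configuration sketch might look like the following. The interface names, addresses, and access-list number are invented for illustration; adapt them to your own topology.

```
! Hypothetical addressing, for illustration only
interface GigabitEthernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip nat inside          ! faces the private LAN
!
interface GigabitEthernet0/1
 ip address 203.0.113.2 255.255.255.0
 ip nat outside         ! faces the Internet
!
! Define which source addresses are eligible for translation
access-list 1 permit 192.168.1.0 0.0.0.255
!
! "overload" enables PAT: many inside hosts share the one outside address
ip nat inside source list 1 interface GigabitEthernet0/1 overload
```

Afterwards, show ip nat translations should list one entry per active inside connection, each sharing the GigabitEthernet0/1 address with a distinct port.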

Published - Sun, 16 Oct 2022

Created by - Orhan Ergun

What Does OTT – Over the Top – Mean? OTT Providers

What is OTT – Over the Top – and how do OTT providers work? Over the Top is a term used to refer to Content Providers: when you hear "Over the Top Providers," they are Content Providers. The content can be any application or service, such as instant messaging (Skype, WhatsApp), streaming video (YouTube, Netflix, Amazon Prime), voice over IP, and many other voice or video content types. This post is based on material from my latest book, ‘Service Provider Networks Design and Architecture, First Edition‘. If you want to understand telecom (distance communications) and the Service Provider business, I highly recommend purchasing this book.

An Over-the-Top (OTT) provider delivers content over the Internet and bypasses traditional private networks. Some OTT providers do distribute their content through their own CDNs over private networks (Google/YouTube, Akamai), but the final delivery to consumers still runs over traditional ISP networks. The rise of OTT applications has created conflict between companies that offer similar or overlapping services, and traditional ISPs and telcos have had to anticipate challenges from third-party firms offering OTT applications and services. Consider, for example, the conflict between a Content Provider such as Netflix and a cable access provider such as Comcast: consumers still pay the cable company for access to the Internet, but they may want to drop their cable TV service in favor of cheaper streaming video over the Internet. While the cable company wants to offer fast downloads, there is an inherent conflict of interest in supporting a competitor, such as Netflix, that bypasses cable’s traditional distribution channel.

The conflict between ISPs and OTT providers led to the Net Neutrality debate. Net Neutrality is the principle that ISPs should treat all data equally, without favoring or blocking particular content or websites.
Those in favor of Net Neutrality argue that ISPs should not be able to block access to a website owned by a competitor, or offer paid "fast lanes" that deliver some data faster for an additional cost. OTT services such as Skype and WhatsApp are banned by some operators in some Middle Eastern countries, as OTT applications take a share of their revenue. For example, in 2016, applications such as Snapchat, WhatsApp, and Viber were blocked by the two UAE telecom companies, Du and Etisalat, who claimed that these services violated the country's VoIP regulations. The UAE is not the only country blocking access to some OTT applications and services; many countries in the Middle East have followed the same model, either completely blocking access to some OTT applications or throttling them so that voice conversations over these services became nearly impossible.

If you liked this post and would like to see more, please let me know in the comment section below. Share your thoughts so I can continue to write similar ones.

Published - Tue, 24 May 2022

Created by - Orhan Ergun

What does PE-CE mean in MPLS?

What does PE-CE mean in the context of MPLS? What are CE, P, and PE devices in MPLS and MPLS VPN? These are foundational terms and definitions in MPLS. MPLS is one of the most commonly used encapsulation mechanisms in Service Provider networks, and before studying more advanced mechanisms, this article is a must-read. To understand PE-CE, we first need to understand what PE and CE are in MPLS. I explain this topic in deep detail in our CCIE Enterprise and Self-Paced CCDE courses. Let's take a look at the figure below.

Note: If you are looking for a much more detailed resource on this topic, please click here.

Figure 1: MPLS network with PE, P, and CE routers

Figure 1 shows an MPLS network. This can be an Enterprise or Service Provider network: MPLS is not only a service provider technology, as it can provide segmentation and multi-tenancy in enterprise environments as well. Three different types of routers are shown: CE, PE, and P routers. CE devices are located at the customer site; PE and P devices are located at the Service Provider site. In an Enterprise network, the WAN routers can be considered PE routers and the switches CE devices. PE routers don’t have to be connected to P routers; PE routers can be directly connected to each other.

You will find the following summary everywhere when you study MPLS: CE devices don’t run MPLS; PE devices run both IP and MPLS; P devices don’t run IP but only MPLS. What this actually means is that CE devices don’t switch MPLS labels at all. When a packet arrives at a PE device, the PE first looks up the IP destination address and then uses an MPLS label to forward it. P devices don’t do IP lookups for transit traffic but only switch MPLS labels; they do, of course, still have IP addresses on their interfaces.

In MPLS, the service can be Layer 2 or Layer 3. In Layer 3 MPLS VPN, IP routing runs between the PE and CE devices. These devices have their own roles: Provider Edge and Customer Edge.
The Provider Edge device is attached to both the customer site and the MPLS network, while the Customer Edge device sits at the customer site and doesn't require MPLS to function. The PE-CE routing protocol can in theory be static routing, RIP, EIGRP, OSPF, IS-IS, or BGP (most of these are IETF standards; EIGRP was originally Cisco proprietary and was later published as an informational RFC). In real life, however, most service providers offer only static routing and BGP as routing protocols toward the customer. The PE-CE interface carries only IP, not MPLS; this interface is the boundary between the MPLS network and the IP network. If an enterprise purchases an MPLS VPN service, the customer receives a VPN service from the MPLS backbone provider but does not run MPLS with the Service Provider: MPLS is enabled only inside the Service Provider network. There is a specific application where MPLS does run on the PE-CE link, called CSC (Carrier Supporting Carrier), but in basic Layer 3 MPLS VPN the PE-CE link is always plain IP. To gain a deeper understanding of SP networks, you can check my newly published Service Provider Networks Design and Perspective book; it covers SP network technologies and explains a fictitious SP network in detail.
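To make the PE-CE roles concrete, here is a hedged IOS-style sketch of the PE side of a Layer 3 MPLS VPN, with eBGP as the PE-CE routing protocol. The VRF name, AS numbers, and addresses are all invented for illustration.

```
! Hypothetical values, for illustration only
vrf definition CUSTOMER-A
 rd 65000:1
 address-family ipv4
  route-target both 65000:1
!
interface GigabitEthernet0/0
 description PE-CE link: plain IP, no MPLS here
 vrf forwarding CUSTOMER-A
 ip address 10.0.0.1 255.255.255.252
!
router bgp 65000
 address-family ipv4 vrf CUSTOMER-A
  neighbor 10.0.0.2 remote-as 65100   ! the CE router
  neighbor 10.0.0.2 activate
```

Note that the CE side is just an ordinary eBGP speaker: it has no VRF and no MPLS configuration, which is exactly the point of the PE-CE boundary.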

Published - Thu, 21 Apr 2022

Created by - Orhan Ergun

What is IRU? Indefeasible Right of Use?

If you work in Operator, Service Provider, or Telco/Carrier networks, you have probably heard this term; if you haven't, you need to learn it. To gain a solid understanding of SP networks, please check our detailed, 150-hour CCIE SP training.

Service Providers use the transport networks of others. This is very common; in fact, even the biggest networks use other carriers' transport/transmission infrastructure, especially outside of their main locations. For example, an operator might provide services mainly in the U.S. but want to extend its network to Europe. Instead of setting up a fully-fledged telecom environment in Europe to serve, say, business and residential customers, one option is to use local carrier networks in Europe.

Indefeasible Right of Use (IRU)

An IRU is a permanent contractual agreement, one that cannot be undone, between the owners of a cable system and a customer of that cable system. The cable is usually a fiber cable, as fiber can carry more data than any other type of media. Buying fiber capacity can be done in two ways: leasing, or IRU (Indefeasible Right of Use). Indefeasible means ‘not capable of being voided or undone’. The customer purchases the right to use a certain amount of capacity on the fiber system for a specified number of years, and a customer who purchases an IRU can lease that capacity on to other companies.

Let me give you an analogy. If you are renting an apartment, you sign a contract with the landlord as a tenant; you cannot rent that apartment out to someone else. This is similar to leasing. But if you are the landlord, you can rent the apartment to anyone you want. This is an example of an IRU-based agreement.

Let's look at the differences between leasing and IRU-based contracts in detail. There will be some technical terms, so be ready.

IRU vs. Leasing a Fiber

- IRU contracts are almost always long-term, such as 20 to 30 years (a cable's lifetime is generally considered to be about 25 years). Leased fiber doesn't have to be a long-term contract.
- The most common leased service is the IPLC, or International Private Leased Circuit. An IPLC can be a half circuit or a full circuit (I will explain half- and full-circuit IPLCs in a separate post).
- Unlike an IRU, an IPLC doesn't require the buyer to pay the cost of the fiber upfront; an IPLC is not a prepaid service.
- Leasing is very flexible (in terms of contract duration, speed options, etc.), but an IRU can be very cost-effective.
- An IRU-based contract gives the purchaser the right to use some capacity on a telecommunications cable system, including the right to lease that capacity to someone else.

But is an IRU-based contract suitable for every company? If it has a cost advantage, why doesn't everyone buy one? Why bother with MPLS? Should smaller companies purchase IRU-based fiber? Smaller companies that need a leased line between, say, London and New York do not buy an IRU. They lease capacity from a telecommunications company, which may itself lease a larger amount of capacity from another company, and so on, until at the end of the chain of contracts there is a company that holds an IRU or wholly owns a cable system. Buying an IRU is much more costly upfront than other types of circuits such as MPLS, Metro Ethernet, or Internet services, so smaller companies generally don't buy IRU capacity.
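The cost trade-off above can be sketched numerically. All figures below are entirely hypothetical; the point is only the shape of the comparison between a large upfront IRU payment and recurring lease fees.

```python
def breakeven_years(iru_upfront, iru_annual_om, lease_annual):
    """Years after which the IRU's cumulative cost drops to or below leasing.

    iru_upfront   -- one-time payment for the IRU (hypothetical figure)
    iru_annual_om -- yearly operations & maintenance fee on the IRU
    lease_annual  -- yearly cost of an equivalent leased circuit
    """
    years = 0
    iru_total = float(iru_upfront)
    lease_total = 0.0
    while iru_total > lease_total:
        years += 1
        iru_total += iru_annual_om
        lease_total += lease_annual
        if years > 100:
            return None  # leasing stays cheaper over any realistic horizon
    return years

# e.g. $2M upfront + $50k/yr O&M vs. a $250k/yr lease
print(breakeven_years(2_000_000, 50_000, 250_000))  # 10
```

With these made-up numbers the IRU pays for itself after ten years, which is well within a 20-to-30-year IRU term but far beyond the planning horizon of most smaller companies; that is the arithmetic behind why only large players hold IRUs.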

Published - Sun, 10 Apr 2022

Created by - Orhan Ergun

Tier 1, Tier 2 and Tier 3 Service Providers

What is a tier in the first place? If you are dealing with Service Provider networks, you hear this term a lot. But how do we define Tier 1, Tier 2, and Tier 3 Service Providers? I explain this topic in deep detail in my specialized BGP Zero to Hero course. What should a provider's infrastructure look like to be considered Tier 1, for example? Which tier is bigger in scale? Which one is better for customers to purchase a service from? And why do Service Providers claim that they are Tier 1 or Tier 2? Note: If you are looking for a much more detailed resource on this topic, please click here. Let's start with the definitions.

Tier 1 Service Provider: A network that does not purchase transit service from any other network, and therefore peers with every other Tier 1 network to maintain global reachability. They are the biggest players geographically, but not always in terms of the number of customers.

Tier 2 Service Provider: A network with transit connections, customers, and some peering, but that still buys transit service from Tier 1 providers to reach some portion of the Internet.

Tier 3 Service Provider: A stub network, typically without any transit customers and without any peering relationships. They generally purchase transit Internet connectivity from Tier 2 Service Providers, sometimes even from Tier 1 providers (I know some non-profit organizations that have a transit connection from a Tier 1).

Tier 1, Tier 2, and Tier 3 Service Providers

The picture above shows the general idea behind the connections and relationships of Tier 1, Tier 2, and Tier 3 Service Providers. Tier 2 providers generally peer with other Tier 2s, and Tier 1 Service Providers only peer with other Tier 1s. The logic behind this is actually very simple: Tier 1 Service Providers don't peer with Tier 2s because Tier 2 providers are potential customers of Tier 1 Service Providers.
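The definitions above reduce to two questions about a network: does it buy transit, and does it peer? A minimal sketch of that classification logic (the function name and examples are mine, for illustration only):

```python
# Classify a provider's tier from its commercial relationships, following the
# definitions above: Tier 1 buys no transit; Tier 2 buys transit but also
# peers; Tier 3 is a stub that only buys transit.

def classify_tier(buys_transit: bool, has_peering: bool) -> int:
    if not buys_transit:
        return 1  # reaches the whole Internet through peering alone
    if has_peering:
        return 2  # a mix of paid transit and settlement-free peering
    return 3      # stub network: transit only, no peering

# Hypothetical examples:
print(classify_tier(buys_transit=False, has_peering=True))   # Tier 1
print(classify_tier(buys_transit=True,  has_peering=True))   # Tier 2
print(classify_tier(buys_transit=True,  has_peering=False))  # Tier 3
```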
If they can be a customer and pay money for a transit connection, why would you give them free peer connectivity? (Peering is free, at least in theory.) Unless the customer changes the path preference with communities, service providers almost always prefer customer links over peering links, and peering links over transit links. They want to utilize the customer links because customers pay for the transit service. And even though peering is free, meaning SPs don't pay each other for the service, peering still carries some cost: they need a connection to the IX, plus a router and a port at the IX. There are just 11 or 12 Tier 1 Service Providers in the world, and some Tier 2-level Service Providers always claim that they are Tier 1. By doing so, they aim to get free peering with the actual Tier 1s, so they wouldn't pay transit costs, while keeping other Tier 2 SPs as their customers. The same is true for Tier 3 Service Providers: they might try to present themselves as Tier 2 to get free peering from other Tier 2 Service Providers. But Service Providers often set strict requirements for peering, so merely claiming a tier may not help! Last but not least, some food for thought for my more advanced readers: if an ISP is Tier 1 for IPv4, is it also Tier 1 for IPv6?
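The "customer over peer over transit" preference is typically enforced with the BGP LOCAL_PREF attribute on inbound routes. A minimal sketch in Cisco IOS-style syntax; the route-map names, LOCAL_PREF values, ASN, and neighbor addresses are illustrative (documentation ranges), not from any real deployment:

```
! Higher LOCAL_PREF wins in BGP best-path selection.
route-map FROM-CUSTOMER permit 10
 set local-preference 200
! Customer routes are most preferred: the customer pays us for transit.
route-map FROM-PEER permit 10
 set local-preference 100
! Settlement-free peer routes come next.
route-map FROM-TRANSIT permit 10
 set local-preference 50
! Transit routes are least preferred: we pay for them.
!
router bgp 64500
 neighbor 192.0.2.1 route-map FROM-CUSTOMER in
 neighbor 198.51.100.1 route-map FROM-PEER in
 neighbor 203.0.113.1 route-map FROM-TRANSIT in
```

A customer who wants to override this (for example, to use the transit path) would tag its announcements with communities that the provider maps to a lower LOCAL_PREF, which is the "changes the path preference with communities" case mentioned above.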

Published - Sun, 10 Apr 2022

Created by - Orhan Ergun

EIGRP Feasible Successor

One of the advantages of the EIGRP Feasible Successor is that it speeds up EIGRP convergence. In fact, if there is a Feasible Successor in an EIGRP network, that network converges faster than OSPF or IS-IS. But what is an EIGRP Feasible Successor, and how can we find one? And if there is a Feasible Successor, how does EIGRP converge faster than OSPF or IS-IS? In this post, I will answer these questions.

An EIGRP Feasible Successor is a backup node that satisfies the EIGRP feasibility condition. The feasibility condition simply means that the backup router must be loop-free. Let's examine the topology shown below (Figure-1) to understand how EIGRP finds a loop-free alternate/backup node.

Figure-1: EIGRP Feasibility Condition

From Router A's point of view, Router B and Router C are equal-cost routers; as a result, both the A–B–D and A–C–D paths can be used in the network. What's more, Router A installs both Router B and Router C not only in the EIGRP topology table but also in the routing table. There is no backup router in the above topology, since Router A uses both Router B and Router C to reach the destination behind Router D. Let's increase the link cost between Router C and Router D.

Figure-2: EIGRP Feasible Successor

The link cost of Router C–D is 15. In order to satisfy the feasibility condition for Router A, the link cost of Router C–D must be smaller than the total cost of the A–B–D path. Since 15 < 10 + 10, Router C can be used as a backup by Router A to reach Router D, and Router C is installed in the EIGRP topology table of Router A.

I will explain what happens to a route that is installed in the EIGRP topology table instead of the routing table. First, let's examine one more example so that we can understand when Router C cannot be installed in the routing table or the EIGRP topology table.

Figure-3: EIGRP feasibility condition is not satisfied

The link cost of Router C–D is 25.
In order to satisfy the feasibility condition for Router A, the link cost of Router C–D must be smaller than the total cost of Router A–B–D. Since 25 > 10 (A–B) + 10 (B–D), Router C cannot be used as a backup router by Router A to reach Router D. In Figure-3, Router C is not a Feasible Successor (backup router) simply because it doesn't satisfy the EIGRP feasibility condition.

What if the Router C–D link cost is 20? In that case, since 20 = 10 (A–B) + 10 (B–D), Router C still cannot be used as an EIGRP Feasible Successor; the cost must be strictly smaller. The link cost of Router C–D has to be smaller than that of Router A–B–D for Router C to be an EIGRP FS of Router A.

Now that we have learned how to find an EIGRP Feasible Successor, I will explain why, when a Feasible Successor exists, EIGRP converges faster than OSPF or IS-IS. When Router C satisfies the EIGRP feasibility condition, it is installed as a backup router in the EIGRP topology table of Router A. To understand this concept, I will first examine how EIGRP converges after a failure when there is no Feasible Successor.

Let's assume that in Figure-3, the Router A–B link fails. Since there is no Feasible Successor (remember that Router C didn't satisfy the EIGRP feasibility condition), Router A will send an EIGRP query to Router C, asking whether it has an alternate route. Router C's successor (primary path) is Router D, which is the destination, so Router C answers Router A's query. But obviously, this query-and-reply process introduces delay.

Now, let's examine what happens when Router C does satisfy the EIGRP feasibility condition. In Figure-2, Router C is an EIGRP FS of Router A, so let's use that topology. If the Router A–B link fails in Figure-2, Router A doesn't send an EIGRP query to Router C at all. Rather, Router A immediately takes the backup routes from the EIGRP topology table and installs them in the routing table without running the EIGRP DUAL algorithm. The 'without running the EIGRP DUAL algorithm' part is important.
This is because after a failure, OSPF or IS-IS must run the SPF algorithm again to find a backup route. Thus, in the case of a failure, an EIGRP FS reduces the convergence time by avoiding another run of the EIGRP DUAL algorithm.

Conclusion:

- An EIGRP FS is a loop-free backup EIGRP router.
- An EIGRP FS avoids sending EIGRP queries.
- An EIGRP FS reduces convergence time in the case of failure (link or node).
- An EIGRP node doesn't run the DUAL algorithm to find a backup path after a failure if a Feasible Successor exists.

That's why setting link costs accurately is very important for capacity planning, fast convergence, and availability.
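The feasibility check walked through above fits in a few lines. Here is a minimal sketch using the values from Figures 2 and 3; the function name is mine, not an EIGRP internal:

```python
# EIGRP feasibility condition: a neighbor is a Feasible Successor only if its
# reported distance (its own cost to reach the destination) is strictly
# smaller than the feasible distance (this router's best total cost).

def is_feasible_successor(reported_distance: int, feasible_distance: int) -> bool:
    return reported_distance < feasible_distance  # strictly smaller, never equal

# Router A's best path A-B-D costs 10 + 10 = 20: the feasible distance.
fd = 10 + 10

print(is_feasible_successor(15, fd))  # Figure-2: 15 < 20 -> True, C is an FS
print(is_feasible_successor(25, fd))  # Figure-3: 25 > 20 -> False
print(is_feasible_successor(20, fd))  # boundary: 20 = 20 -> False, not an FS
```

The strict inequality is what guarantees the backup path is loop-free: a neighbor whose own cost to the destination is at least as large as ours might be routing through us.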

Published - Mon, 14 Feb 2022