
Created by - Stanley Avery

OSPF Cost - Everything You Need to Know

OSPF (Open Shortest Path First) is a link-state routing protocol that uses a cost metric to calculate the best route between two nodes. In this article, we'll look at how OSPF calculates costs and discuss some of the factors that can influence them. So, if you're curious about how OSPF's cost calculation works or want to learn how to tweak its settings for your network, read on!

What is "OSPF Cost"?

OSPF cost is the metric used by the OSPF routing algorithm to calculate the best path between two OSPF-enabled devices. The cost of a path is the sum of the costs of the individual links that make up the path. The cost of a link is typically derived from its bandwidth: in most cases, the cost is inversely proportional to the bandwidth. As a result, OSPF-enabled devices prefer paths made up of high-bandwidth links.

How is OSPF Cost Calculated?

The OSPF cost is calculated from the link's bandwidth and is used by the OSPF algorithm to determine the best path between two routers. The higher the bandwidth, the lower the cost; the lower the cost, the more preferable the link. In most cases, the OSPF cost is calculated automatically by the router. Sometimes, however, it may be necessary to set the OSPF cost manually. To do this, use the ip ospf cost command on the interface. The value entered with this command is used as the link's OSPF cost and must be an integer between 1 and 65535. If it is not, the router will revert to the default OSPF cost calculation.

The OSPF cost is calculated with this formula: reference bandwidth / interface bandwidth.

Here Are the Default Costs

Bandwidth - Cost
Gigabit Ethernet Interface (1 Gbps) - 1
Fast Ethernet Interface (100 Mbps) - 1
Ethernet Interface (10 Mbps) - 10
DS1 (1.544 Mbps) - 64
DSL (768 Kbps) - 133

A Little Problem

As you can see, OSPF considers all interfaces with a bandwidth of 100 Mbps or more equal: the best possible cost is always 1.
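The formula and the default table above can be sketched in a few lines of Python (an illustration of the arithmetic only, not router code; the function name is ours):

```python
# Sketch of OSPF's default cost formula: cost = reference_bw / interface_bw,
# truncated to an integer and floored at 1 (the minimum valid cost).
def ospf_cost(interface_bw_mbps, reference_bw_mbps=100):
    return max(1, int(reference_bw_mbps / interface_bw_mbps))

# With the default 100 Mbps reference bandwidth, every link at or above
# 100 Mbps collapses to the same cost of 1:
print(ospf_cost(10))     # 10 Mbps Ethernet  -> 10
print(ospf_cost(100))    # Fast Ethernet     -> 1
print(ospf_cost(1000))   # Gigabit Ethernet  -> 1
print(ospf_cost(1.544))  # DS1               -> 64
```

This is exactly why a Fast Ethernet link and a 10 Gbps link look identical to OSPF unless the reference bandwidth is raised.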
So no matter the speed of the link, it will have a default cost of 1 when it is 100 Mbps or faster. This can create subpar routing in up-to-date networks that use current high-speed Ethernet interfaces. It is possible to make a router prefer the faster route by adjusting the reference bandwidth. To change the reference bandwidth in an OSPF network, use the auto-cost reference-bandwidth command under the OSPF process:

router ospf 1
 auto-cost reference-bandwidth <value>

where <value> is the desired reference bandwidth in Mbps. For example, to configure the reference bandwidth to 1 Gbps, use the command auto-cost reference-bandwidth 1000. By doing this, you can ensure that your routers will pick the fastest route possible. You can check the CCIE Enterprise training for all the useful OSPF commands.

Under What Circumstances Might You Need to Manipulate OSPF Metrics?

In a typical OSPF network, the cost is automatically calculated based on the bandwidth of the link. However, there are times when it may be necessary to manipulate the cost metric manually. For example, if two links have different bandwidths but the router considers them equally preferable, it may make sense to increase the cost of the lower-bandwidth link to reduce traffic on that link. In another example, if one link is consistently congested while another is not, it may be necessary to increase the cost of the congested link to steer traffic onto the other link. These are just a few examples of when it might be necessary to manipulate OSPF cost metrics; in general, any time there is an imbalance in traffic or reliability between two links, manipulating the cost metric can help to restore balance. You can learn more about OSPF costs and other topics in Cisco's Design Guide.
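Putting the two knobs together, a hedged Cisco IOS sketch (the interface name, process ID, and values are illustrative) of a manual per-interface cost plus a raised reference bandwidth might look like this:

```
interface GigabitEthernet0/0
 ip ospf cost 5                        ! manually override the cost of this link
!
router ospf 1
 auto-cost reference-bandwidth 10000   ! 10 Gbps reference bandwidth
```

Note that the reference bandwidth should be set to the same value on every router in the OSPF domain so that all routers compute costs consistently; IOS prints a warning to that effect when you change it.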

Published - Wed, 12 Oct 2022

Created by - Stanley Avery

OSPF Redistribution: Quick Guide

Redistribution is a process that allows you to share routing information between different routing protocols. This can be a helpful tool if you want to use more than one protocol on your network. In this article, we will take a quick look at how OSPF redistribution works. We will also discuss some benefits and drawbacks of using redistribution in an OSPF network.

What Exactly is OSPF Redistribution?

Before going into OSPF redistribution, we should briefly discuss what OSPF (Open Shortest Path First) is. It is a routing protocol used to find the shortest path between two devices on a network. It is a link-state protocol, which means that it keeps track of the state of the network links and calculates the best route based on that information. OSPF is a widely used routing protocol. It is known for being stable and reliable, and it can be used in networks of all sizes.

Redistribution between different routing protocols is a complex process, but in general, it allows for sharing information between networks that use different protocols. In the case of OSPF, redistribution occurs when routes from other protocols are injected into the OSPF network or vice versa. There are several benefits to redistributing routes in this way. First, it allows for greater flexibility in terms of routing. In particular, it can be used to connect disparate networks that would otherwise be unable to communicate with each other. Second, redistribution can improve network performance by allowing for more efficient routing of traffic. And finally, it can provide redundancy if one routing protocol fails or is unavailable.

However, there are also some challenges associated with redistribution. In particular, it can lead to routing loops if not correctly configured. Additionally, it can add complexity to the network and make it more difficult to troubleshoot problems. For these reasons, careful planning is required before implementing redistribution in an OSPF network.

How to Redistribute OSPF into BGP?
One way to redistribute OSPF into BGP is to use the redistribute ospf command under the BGP process. This command instructs the router to inject the routes learned by the specified OSPF process into the BGP table, from where they are advertised to BGP neighbors.

How to Redistribute BGP into OSPF?

There are several ways to redistribute BGP into OSPF, but the most common method is to use a route map. A route map is a set of instructions that tells a router how to handle traffic that matches specific criteria. Creating a route map and applying it to the redistribution allows you to control which BGP routes are injected into OSPF.

You may like this post: BGP vs OSPF

How to Redistribute OSPF into EIGRP?

The process of redistributing routes from OSPF into EIGRP is relatively straightforward. First, under the EIGRP process, use the redistribute ospf command and specify which OSPF process you want to redistribute. Because EIGRP cannot derive a metric from OSPF routes, you must also supply a seed metric (bandwidth, delay, reliability, load, and MTU), either on the redistribute command itself or with the default-metric command. Once the configuration is applied, the OSPF routes will appear in the EIGRP topology table and be advertised to EIGRP neighbors.

How to Redistribute EIGRP into OSPF?

One way is to use the redistribute command under the OSPF process. When entering the redistribute command, you must specify the EIGRP autonomous system number, and on Cisco IOS the subnets keyword is needed if you want classless subnets to be redistributed. Another way to refine the redistribution is to use route maps. Route maps can control which routes are redistributed and how they are redistributed; you will need to create a route map and then apply it to the redistribution process. Finally, you can use distribute lists to control which routes are redistributed. Distribute lists can be applied to both outgoing and incoming updates.
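A hedged Cisco IOS sketch of the mutual EIGRP/OSPF redistribution described above (the process ID 1, AS number 100, and seed-metric values are illustrative, not recommendations):

```
router eigrp 100
 ! Redistribute OSPF into EIGRP; EIGRP requires a seed metric:
 ! bandwidth (kbps), delay (tens of usec), reliability, load, MTU
 redistribute ospf 1 metric 10000 100 255 1 1500
!
router ospf 1
 ! Redistribute EIGRP into OSPF; "subnets" allows classless prefixes
 redistribute eigrp 100 subnets
```

In production, a route map or distribute list would normally be attached to each redistribute statement to filter the routes and prevent routing loops.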
You may like this post: EIGRP vs OSPF

Final Words

OSPF redistribution can be complex, but understanding the basics can make things easier for you. You can find more information about OSPF in the CCIE Enterprise Infrastructure Course as well. We hope this article has helped you understand the concept of OSPF redistribution.

Published - Sun, 09 Oct 2022

Created by - Stanley Avery

Address Resolution Protocol (ARP): Everything You Should Know About

If you're like most people, you take the convenience of the internet for granted. You probably don't think about how the addresses used to route your packets to their destinations are resolved. In this blog post, we'll take a closer look at the Address Resolution Protocol (ARP), what it is, and how it works. We'll also discuss some of the security implications of ARP and ways to protect yourself against them. Stay tuned!

What Is ARP?

ARP (Address Resolution Protocol) is a communication protocol used for mapping a 32-bit IPv4 address to a 48-bit MAC address. It is used on Ethernet and WiFi networks, and it is also supported by many other network types. It is a crucial part of how IP addressing works and is responsible for building and maintaining the tables that map IP addresses to hardware addresses. When two devices in a network need to communicate, they first use ARP to resolve each other's address. The Address Resolution Protocol is part of the TCP/IP stack, and it is used by almost all modern networking devices. By understanding how it works, you can troubleshoot many networking problems.

How Does ARP Work?

The Address Resolution Protocol (ARP) is fundamental to Internet communication. It is used to map a network address, such as an IP address, to a physical device, such as a NIC. This mapping is necessary because, on the local segment, frames can only be delivered to physical devices using their MAC address. ARP complements the IP protocol by providing a way to determine the MAC address of a device when all that is known is its IP address. When a device wants to send a packet to another device on the same network, it first looks up the destination IP address in its ARP table. An ARP table is a data structure used by network devices to store the mapping of IP addresses to physical MAC addresses. If no entry is found, ARP sends out a broadcast message (an ARP request) that contains the IP address of the target device.
An ARP request is a packet used to discover the Media Access Control (MAC) address of a specific computer on a local area network (LAN). The request is sent to all computers on the LAN; every device receives it and compares the requested IP address to its own. The device with the matching IP address responds with an ARP reply packet containing its MAC address, which is then added to the sender's ARP cache. Now, whenever the sender needs to communicate with the target device, it can look up the MAC address in the cache and send data directly to it. This process happens automatically and is transparent to users. Thanks to this protocol, we are able to communicate seamlessly with devices on our local network without having to worry about their MAC addresses. For further information, you can read this ARP configuration guide by Cisco.

Security Implications of ARP and Protection Methods

The Address Resolution Protocol (ARP) is a powerful tool that can be used for both legitimate and malicious purposes. When used correctly, it helps the network run smoothly. However, it can also be exploited to execute denial-of-service attacks, insert false entries into the ARP cache, and sniff network traffic. As a result, it is essential to be aware of the potential security implications of the protocol before relying on it in a network. By understanding the risks associated with ARP, administrators can take steps to mitigate them and protect their networks.

Denial-of-Service Attacks: A denial-of-service attack occurs when an attacker sends a large number of false ARP messages to a target device. This flooding of the ARP cache causes the target to become overloaded and unable to process legitimate traffic. As a result, the target is effectively cut off from the network.
While denial-of-service attacks are generally challenging to carry out, they can be devastating in terms of their impact. The good news is that there are a number of steps you can take to protect yourself from this type of attack. First, make sure that your devices are running the latest software and security patches. Second, consider using anti-spoofing measures such as static ARP entries or port security. Finally, make sure that your network is segmented correctly and that critical devices are placed on separate subnets. Taking these precautions can help ensure that your network is better protected against denial-of-service attacks.

Spoofing: Spoofing allows the attacker to redirect traffic or perform other man-in-the-middle attacks. It is relatively easy to carry out and can be difficult to detect. As a result, it is a serious threat to network security. To protect against spoofing attacks, organizations should implement security measures such as port security and MAC (Media Access Control) filtering. In addition, users should be aware of the risks posed by spoofing and take steps to protect their own devices from attack. By understanding and mitigating the risks posed by spoofing, organizations can help to ensure the security of their networks.

Man-in-the-Middle Attacks: A man-in-the-middle attack occurs when an attacker intercepts traffic between two victims and impersonates each victim to the other. The attacker can then read, alter, or even inject data into the communication. One way to protect yourself from man-in-the-middle attacks is to use a VPN, which encrypts your traffic and makes it more difficult for attackers to sniff or tamper with your data. You can also use a firewall to filter traffic and prevent unwanted ARP requests from reaching your computer. Finally, make sure that you have the latest security patches installed on your system to close any potential vulnerabilities that attackers could exploit.
By taking these precautions, you can help to keep your data safe from man-in-the-middle attacks.

Final Words

ARP is an important aspect of networking that you should be aware of. By understanding how it works, you can troubleshoot networking issues, optimize your network performance, and protect yourself from several kinds of attack. Start today and check out our IT courses about ARP and more.
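As a recap, the resolution flow described earlier (cache check, broadcast request, reply, cache update) can be modeled as a toy Python sketch. This is an illustration only; real ARP is implemented in the operating system kernel, and the hosts, addresses, and function names here are invented for the example:

```python
# Toy model of the ARP flow: check the cache, "broadcast" a request to every
# host on the LAN, and cache the reply from the owner of the target IP.
class Host:
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.arp_cache = {}   # ip -> mac

def arp_resolve(sender, target_ip, lan_hosts):
    # 1. Check the sender's ARP cache first.
    if target_ip in sender.arp_cache:
        return sender.arp_cache[target_ip]
    # 2. Broadcast: every host sees the request, but only the host
    #    that owns target_ip answers with its MAC address.
    for host in lan_hosts:
        if host.ip == target_ip:
            sender.arp_cache[target_ip] = host.mac   # 3. Cache the reply.
            return host.mac
    return None   # no reply: the address is not on this LAN

a = Host("192.168.1.10", "aa:aa:aa:aa:aa:aa")
b = Host("192.168.1.20", "bb:bb:bb:bb:bb:bb")
print(arp_resolve(a, "192.168.1.20", [a, b]))   # bb:bb:bb:bb:bb:bb
print(a.arp_cache)                              # mapping is now cached
```

Subsequent lookups for the same IP are answered straight from the cache, which is why ARP traffic on a healthy LAN is mostly bursts at the start of new conversations.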

Published - Sun, 09 Oct 2022

Created by - Stanley Avery

BGP Path Selection: Quick Guide

BGP (Border Gateway Protocol) is a widely used routing protocol that plays an important role in the internet's infrastructure. One of the key functions of BGP is to select the best path to route packets to their destination. In this article, we'll provide a quick guide on BGP path selection and some of the factors that it takes into account.

What Is BGP Path Selection?

The path a packet takes through the network is determined by the routing protocol in use. The Border Gateway Protocol (BGP) is a routing protocol that is used to exchange routing information between different autonomous systems (AS). BGP path selection is the process of determining which route to use when there are multiple routes to the same destination. The selected route must win a comparison of its attributes, such as having the shortest AS path or the highest local preference. BGP path selection can be difficult to configure, but it is essential for ensuring that packets are routed efficiently through the network.

What Is the Importance of BGP Path Selection?

The Border Gateway Protocol (BGP) is a critical part of the Internet's infrastructure. It helps to route traffic between different networks and ensures that data packets are delivered to their intended destination. BGP path selection is a key part of this process, and it is essential for ensuring that traffic is routed efficiently and effectively. By comparing the attributes of each candidate route, BGP can help ensure that traffic is routed along the best possible path. As the Internet continues to grow and evolve, BGP path selection will become even more important. With billions of devices now connected to the Internet, it is essential for routers to be able to quickly and reliably find the best path for each data packet. By understanding the importance of BGP path selection, we can ensure that the Internet continues to function effectively.
How Does BGP Path Selection Work?

The Border Gateway Protocol (BGP) is the standard exterior gateway protocol used to route traffic on the Internet. BGP path selection is the process of choosing the best route for traffic between two BGP-speaking routers. When there are multiple routes to the same prefix, BGP compares their attributes in a fixed order, including:

Weight: On Cisco routers, the route with the highest weight (a locally significant value) is preferred first.
Local preference: The route with the highest local preference wins; this is the main tool for steering outbound traffic within an AS.
AS path length: BGP prefers routes with shorter AS paths, as they traverse fewer autonomous systems.
Origin: Routes with a lower origin code are preferred (IGP over EGP, and EGP over incomplete).
MED (Multi-Exit Discriminator): Among routes from the same neighboring AS, the lowest MED wins.
eBGP over iBGP: Routes learned from external peers are preferred over routes learned from internal peers.
IGP metric to the next hop: Finally, the route with the lowest internal cost to reach the BGP next hop is preferred.

BGP path selection is a complex process, and operators constantly tune these attributes to adapt to changing conditions on the Internet. However, by understanding the basics of how BGP works, you can ensure that your traffic is always routed along the best possible path.

Final Words

BGP path selection is a crucial technology for the internet. It allows routers to select the best path to send traffic along, and it's essential for ensuring that packets reach their destination quickly and efficiently. We hope this guide has helped you understand how BGP works and why it's so important. If you have any questions or would like more information, please don't hesitate to contact us. Also, you can check our courses that cover BGP topics: CCIE Enterprise Infrastructure and CCDE v3 Certification Training.
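To recap the decision order described above, here is a deliberately simplified comparator in Python. Real implementations evaluate many more steps, and the route dictionaries and attribute values are invented for the example:

```python
# Simplified sketch of the BGP decision process: compare candidate routes
# attribute by attribute, in order, until one wins. Python compares the
# key tuples element by element, which mirrors the step-by-step tie-breaking.
def best_path(routes):
    return min(routes, key=lambda r: (
        -r["weight"],        # 1. highest weight wins
        -r["local_pref"],    # 2. highest local preference wins
        len(r["as_path"]),   # 3. shortest AS path wins
        r["origin"],         # 4. lowest origin code (0=IGP, 1=EGP, 2=incomplete)
        r["med"],            # 5. lowest MED wins
    ))

r1 = {"weight": 0, "local_pref": 100, "as_path": [65001, 65002], "origin": 0, "med": 0}
r2 = {"weight": 0, "local_pref": 200, "as_path": [65001, 65002, 65003], "origin": 0, "med": 0}
print(best_path([r1, r2]) is r2)   # True: higher local preference beats a shorter AS path
```

Note how r2 wins despite its longer AS path: local preference is compared before AS path length, which is exactly why it is the standard knob for outbound traffic engineering.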

Published - Sun, 09 Oct 2022

Created by - Orhan Ergun

Multicast PIM Dense Mode vs PIM Sparse Mode

Multicast PIM Dense mode vs PIM Sparse mode is one of the most important topics for every network engineer who deploys IP Multicast on their network, because these two design options are completely different and the resulting impact can be very high. In this post, we will look at which one should be used in which situation, and why. Although we will not explain PIM Dense or PIM Sparse mode in detail in this post, we will look at them very briefly and then compare them for clarity. First of all, you should know that both PIM Dense and PIM Sparse are PIM deployment models.

PIM Dense Mode

PIM Dense mode works based on push and prune: multicast traffic is flooded everywhere in the network where you enable PIM Dense mode. This is not necessarily bad. In fact, as network designers, we don't think there is bad technology; every technology has its use cases. If multicast receivers are everywhere, or in most places in the network, then pushing the traffic everywhere is not a bad thing. When you push, you don't build a shared tree and you don't need to deal with the RP (Rendezvous Point), because the multicast source is learned automatically. Thus, PIM Dense Mode is considered a push-based control plane, and it is suitable if the multicast receivers are distributed in most places, if not all, in the network. Otherwise, it can be bad from a resource consumption point of view: bandwidth is wasted, and senders and receivers process packets unnecessarily.

PIM Sparse Mode

PIM Sparse Mode doesn't work based on the push model. Receivers signal the network whichever multicast group, or source/group pair, they are interested in. That's why, if there is no multicast receiver in some parts of the network, multicast traffic is not sent to those locations. There are 3 different deployment models of PIM Sparse Mode.
PIM Sparse Mode Deployment Models

PIM SSM - Source-Specific Multicast
PIM ASM - Any Source Multicast
PIM Bidir - Bidirectional Multicast

All of these PIM Sparse mode deployment models work in the same way in that multicast receivers send a join message for the multicast group, or for the multicast source and group.

Difference between Multicast PIM Sparse Mode vs PIM Dense Mode

Although technically there are many differences, from a high-level standpoint the biggest one is that PIM Dense mode works on a push-based model and PIM Sparse mode works on a pull-based model. Multicast traffic is sent by the multicast source everywhere in PIM Dense mode, while in PIM Sparse mode multicast traffic is sent only to the locations where there are interested receivers. We can therefore say that if there are few receivers, PIM Sparse mode is more efficient from a resource usage point of view, but if there are receivers everywhere in the network, there is no problem using PIM Dense mode from that same point of view.
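As a minimal Cisco IOS sketch of the sparse-mode deployment discussed above (the interface name and RP address are illustrative):

```
ip multicast-routing
!
interface GigabitEthernet0/0
 ip pim sparse-mode
!
! ASM needs a Rendezvous Point; a static RP is the simplest option.
! SSM does not use an RP at all - receivers join (S,G) directly.
ip pim rp-address 10.0.0.1
```

Replacing sparse-mode with dense-mode on the interfaces (and dropping the RP configuration) would give the flood-and-prune behavior described in the PIM Dense Mode section.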

Published - Tue, 14 Jun 2022

Created by - Orhan Ergun

How Does Satellite Internet Work?

The orbiting satellite transmits and receives its information to and from a location on Earth called the Network Operations Center (NOC). The NOC is connected to the Internet, so all communications made from the customer location (satellite dish) to the orbiting satellite flow through the NOC before they reach the Internet, and the return traffic from the Internet to the user follows the same path.

How does Satellite Internet work?

Data over satellite travels at the speed of light, which is about 186,300 miles per second. The orbiting satellite is 22,300 miles above Earth (this is true for GEO-based satellites). The data must travel this distance 4 times:

1. Computer to satellite
2. Satellite to NOC/Internet
3. NOC/Internet to satellite
4. Satellite to computer

Satellite Adds Latency

This adds a lot of time to the communication. This time is called "latency" or "delay", and it is almost 500 milliseconds. That may not seem like much, but some applications, like financial applications and real-time gaming, don't tolerate latency well. Who wants to pull a trigger and wait half a second for the gun to go off? Latency, however, depends on which orbit the satellite is positioned in. Let's have a look at the different satellite orbits to understand satellite latency and its effect on communication.

Geostationary (GEO) Satellites

Geostationary satellites orbit the Earth about 22,300 miles (35,800 kilometers) directly above the equator.

Picture - GEO-Based Satellite Distance

They travel in the same direction as the rotation of the Earth. This gives the satellites the ability to stay in one stationary position relative to the Earth. Communication satellites and weather satellites are often given geostationary orbits so that the satellite antennas that communicate with them do not have to move to track them; they can be pointed permanently at the position in the sky where the satellites stay. The latency of GEO satellites is very high compared to MEO and LEO satellites.
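The "almost 500 milliseconds" figure follows directly from the numbers above, as this back-of-the-envelope Python calculation shows (propagation delay only; processing and queuing add more):

```python
# GEO latency estimate: the signal covers the earth-to-satellite distance
# four times per request/response (computer -> satellite -> NOC, and back).
SPEED_OF_LIGHT_MPS = 186_300   # miles per second (approximate)
GEO_ALTITUDE_MILES = 22_300    # altitude of a geostationary orbit

hops = 4
latency_s = hops * GEO_ALTITUDE_MILES / SPEED_OF_LIGHT_MPS
print(round(latency_s * 1000))   # ~479 ms of propagation delay alone
```

Running the same arithmetic with a LEO altitude of a few hundred miles shows why low orbits cut the propagation delay to tens of milliseconds.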
The geostationary orbit is useful for communication applications because ground-based antennas, which must be directed toward the satellite, can operate effectively without the need for expensive equipment to track the satellite's motion. There are hundreds of GEO satellites in orbit today, delivering services ranging from weather and mapping data to distribution of digital video-on-demand, streaming, and satellite TV channels globally. The higher orbit of a GEO-based satellite also means greater signal power loss during transmission compared to a lower orbit.

Medium Earth Orbit (MEO) Satellites

MEO is the region of space around the Earth above low Earth orbit and below geostationary orbit. Historically, MEO constellations have been used for GPS and navigation applications, but in the past five years, MEO satellites have been deployed to provide broadband connectivity to service providers, government agencies, and enterprises. Current applications include delivering 4G LTE and broadband to rural, remote, and underserved areas where laying fiber is either impossible or not cost-effective, such as cruise and commercial ships, offshore drilling platforms, backhaul for cell towers, and military sites, among others. In addition, service providers are using managed data services from these MEO satellites to quickly restore connectivity in regions where service has been lost due to undersea cable cuts or major storms. MEO satellite constellations can cover the majority of the Earth with about eight satellites. Because MEO satellites are not stationary, a constellation of satellites is required to provide continuous service. This means that antennas on the ground need to track the satellites across the sky, which requires ground infrastructure that is more complex than for GEO-based satellites.

Low Earth Orbit (LEO) Satellites

Unlike geostationary satellites, low and medium Earth orbit satellites do not stay in a fixed position in the sky.
Consequently, ground-based antennas cannot be easily locked into communication with any one specific satellite. Low Earth orbit satellites, as their name implies, orbit much closer to Earth. LEOs tend to be smaller than GEO satellites, but more of them must orbit together at one time to be effective. Lower orbits tend to have lower latency for time-critical services because of the shorter distance to Earth. It's important to reiterate that many LEO satellites must work together to offer sufficient coverage to a given location. Although many LEOs are required, each requires less power to operate because it is closer to Earth.

Picture - Low Earth Orbit - LEO Satellite

Choosing between more satellites in LEO orbit on less power, or fewer, larger satellites in GEO, is the biggest decision to make here. Due to the high number of satellites required in LEO constellations, LEO satellite systems are expected to have high initial manufacturing and launch costs and more expensive ground hardware compared to GEO.

Published - Tue, 14 Jun 2022

Created by - Orhan Ergun

BGP RTBH - Remotely Triggered Blackholing

BGP RTBH - Remotely Triggered Blackholing has been used for DDoS prevention for a long time by many companies. DDoS - Distributed Denial of Service attacks have an economic impact: according to an NBC News article, more than 40% of DDoS attacks cost $1 million per day. Remotely Triggered Blackholing is a technique used to mitigate DDoS attacks dynamically. Before RTBH, customers used to call the operator when there was an attack; the operator's NOC engineers would connect to the attacked network, trace the source of the attack, and place filters accordingly, and the attack would go away. Manual operation is open to configuration mistakes, cannot scale in large networks, and services stay down between the attack and the required action.

There are two types of RTBH:

Destination-based RTBH
Source-based RTBH

Let's have a look at both of them in this blog post.

Destination-Based BGP RTBH - Remotely Triggered Blackholing

The first RTBH idea was destination-based RTBH. With this technique, the SP and the customer agree on a discard community. When there is an attack on a server, the victim (customer) announces the server prefix with the previously agreed community value. When the SP receives the update with that community, the route's next hop is set to a null interface, so the packets are dropped before reaching the customer link.

Picture - Destination-based RTBH - Remotely Triggered Blackholing

The problem with this approach is that the server will not be reachable from legitimate sources either: the attack's goal is effectively completed, but at least the other services might stay up. Alternatively, the customer might change the IP address of the attacked server in DNS, though that change can take time to propagate. RFC 3882 covers destination-based RTBH. It is better than manual processing, but it requires pre-configuration of the null route on all edge routers in the SP network.

Source-Based BGP RTBH - Remotely Triggered Blackholing

RFC 5635 brings the idea of source-based RTBH.
Instead of the customer specifying the attacked system's IP address to the SP, the customer signals the source of the attack. By combining uRPF with a discard route (null route) configuration, traffic from the attack source is dropped at the edge, and the DDoS is mitigated (in theory).
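A hedged Cisco IOS sketch of the destination-based variant described above (the trigger address 192.0.2.1, the community value 65000:666, and the names are illustrative; the conventions follow common RTBH deployments):

```
! Pre-configured on every edge router: a discard route for the trigger address
ip route 192.0.2.1 255.255.255.255 Null0
!
! Routes tagged with the agreed blackhole community get their next hop
! rewritten to the discard address, so traffic to the victim is dropped
ip community-list standard BLACKHOLE permit 65000:666
!
route-map RTBH-IN permit 10
 match community BLACKHOLE
 set ip next-hop 192.0.2.1
route-map RTBH-IN permit 20
```

For the source-based variant, the same discard route is combined with loose-mode uRPF on the edge interfaces, so packets whose source address resolves to Null0 are dropped on ingress.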

Published - Tue, 14 Jun 2022

Created by - Orhan Ergun

CCIE Service Provider v5.0 What, Why, When?

In this post, we will look at what CCIE Service Provider v5.0 is, what comes with it, which technologies we need to learn, what the difference is between CCIE SP v4 and CCIE SP v5, why and when you should study for the CCIE SP exam, and after which certificate you should aim for it.

What is the Cisco CCIE Service Provider v5 Exam?

The CCIE Service Provider v5 lab exam tests skill sets related to the integration, interoperation, configuration, and troubleshooting of Service Provider solutions in complex networks. CCIE SP v5 is the latest version of the CCIE Service Provider lab exam. When candidates pass this exam, they get their CCIE number. This certification syllabus covers most, if not all, real-life Service Provider network technologies.

What is the difference between CCIE SP v4 and CCIE SP v5?

From the technology standpoint, the biggest difference between the CCIE SP v4.1 and CCIE SP v5.0 exams is the Network Programmability and Automation module. It is 20% of the entire exam, and thus very important in the CCIE Service Provider exam. The CCIE SP Network Automation topics are as below:

CCIE SP Network Automation Topics

Design, deploy, and optimize NSO service packages (Yang model, template-based, Python-based, fastmap, reactive fastmap, CLI NEDs, NETCONF NEDs, NSO northbound integration using REST and RESTCONF).
Design NFV orchestration (NFVO) using NSO and ESC in an ETSI NFV architecture.
Design and deploy model-driven telemetry on XR devices (Yang models, gRPC, GPB, device configuration, collection architecture).
Deploy and optimize Ansible playbook scripts that interact with NSO, IOS-XE, and IOS-XR devices.

The Design Module is the New Section in the CCIE SP v5 Exam

Another big difference between CCIE SP v4 and CCIE SP v5 is the Design module. In the previous exam, there was no Design section. This was the biggest problem with the CCIE certifications in general.
Candidates were not tested on whether they know why they do the things they do. "Why" was not asked; only "what" and "how" were covered in the CCIE exams. But similar to the CCIE Enterprise Infrastructure exam, the CCIE SP v5 exam now comes with a 3-hour Design section. Candidates will encounter the business and technical requirements and constraints of a company, and they will translate those requirements and constraints into technical solutions.

How long does it take to study for the CCIE SP v5 Exam?

It depends on the candidate's current level of knowledge, how many hours they can spend studying daily, their ability to absorb the material, and whether they receive professional help in terms of online courses, community, and so on. Basically, if you are a CCNP-level engineer and can spend an average of 2 hours daily, it would take around 8-12 months to be ready for the CCIE SP v5 exam. If you spend more time and you have a CCNP-level background in both the traditional technologies and the evolving ones, such as Assurance, Network Programmability, and Automation, then the time could be reduced to just a couple of months, especially if you receive good CCIE SP v5 training too. But make sure the training doesn't cover only the traditional technologies from an operational point of view; it should cover both the traditional and the evolving technologies from operational and design points of view.

How much money can a CCIE SP v5 certified engineer make?

Again, it depends on criteria such as country, years of experience, which company they work for, and so on. It can start from 2k USD and go up to 20k USD based on the above criteria. Those who hold the CCIE SP v5 certification will know most of the Enterprise network technologies as well, so they will be able to work not only for Service Providers and Mobile Operators but also for Enterprise companies.

Why should you study for the CCIE SP v5 Exam?
Because the CCIE SP v5 blueprint covers most of the technologies that Service Provider companies use, studying for the CCIE SP exam also prepares students for real-world SP network environments. And because the Design module of the CCIE Service Provider exam requires design knowledge, CCIE SP candidates won't only learn practical hands-on Cisco configuration; they will also develop a good design mindset, though this may require professional help. This quote explains it well, I think: "Design and architecture are things you can't find on Google."

What is unique about the CCIE SP v5 Exam?

Network Programmability and Automation is definitely a unique technology area, and you have to know it very well because a big percentage of the CCIE SP exam (20%) is based on it. Also, as mentioned above in this post, the Design module is unique to the CCIE SP v5 exam; no previous version of the CCIE SP exam came with design questions.

When should you study for the CCIE SP Exam?

You will be able to show that you have not only operational experience but also good design knowledge of SP network technologies. CCIE SP-certified engineers will also have a better chance of finding a job in the market; of course, certificate and knowledge should go hand in hand. Last but not least, CCIE SP study is a good starting point for the CCDE exam as well, and the natural path for CCIE SP certified engineers is the CCDE certificate.

Which Online Training Course for the CCIE SP v5 Exam do you recommend?

Orhan Ergun's CCIE SP v5.0 certification preparation program covers all the CCIE SP v5.0 exam topics from theory, hands-on practice, and design points of view. Most, if not all, other CCIE SP training courses cover only the practical aspect without the design part, and they either don't cover the Automation and Programmability topics or cover different things than what that part of the exam requires.
Also check the traditional parts, such as MVPNs, Segment Routing, and many other technologies; they should be covered in great detail, with real-life context, for a complete understanding.

Published - Mon, 13 Jun 2022

Created by - Orhan Ergun

BGP-LS BGP Link State - What is it? Why BGP LS is used?

BGP-LS (BGP Link-State) is used to distribute link-state information and traffic-engineering attributes from the network nodes to a centralized TE controller. RSVP-TE has been providing resource allocation and LSP setup with a distributed path-computation algorithm (CSPF) for decades. It requires topology information from the network, and only link-state IGP protocols such as OSPF and IS-IS can carry the topology information required to compute a shortest path from each node to each destination prefix.

To overcome issues such as bin packing and deadlock, and to achieve network-wide optimal traffic engineering, centralized controllers have been used for a long time, because those issues can arise with distributed traffic-engineering computation. RFC 7752 specifies the details of North-Bound Distribution of Link-State and Traffic Engineering (TE) Information Using BGP.

A PCE (Path Computation Element) is an SDN controller that provides optimal path computation in multi-area and multi-AS (Autonomous System) deployments. It requires link-state and traffic-engineering attributes, such as link coloring, SRLG, and reserved bandwidth, from the network. Link-state IGP protocols (OSPF, IS-IS) could be used for this purpose, but they are considered chatty and non-scalable, so BGP with a new NLRI for link state was defined to carry IGP link-state information to the controller.

RFC 7752 contains two parts:

A new BGP link-state Network Layer Reachability Information (BGP-LS NLRI), which defines three object types: links, nodes, and prefixes. The IGP topology can be reconstructed from the combination of node and link objects, while IP prefix objects provide network reachability information.
A new BGP path attribute (the BGP-LS attribute), which encodes properties of link, node, and prefix objects, such as IGP metric information.

We recommend you take a look at this video, which explains the history of BGP-LS, its use case, and its usage in real networks.
Ethan Banks and the inventor of the technology, Hannes Gredler, discuss it in the video.

Why does BGP Need Link State?
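The node and link objects described above are enough to rebuild the IGP topology on the controller side. Here is a minimal Python sketch of that idea; the class and field names are illustrative stand-ins (real BGP-LS NLRIs carry many more descriptors and TLVs than shown here):

```python
from dataclasses import dataclass

# Simplified stand-ins for the three BGP-LS NLRI object types
# defined in RFC 7752. Field names here are illustrative only.

@dataclass(frozen=True)
class Node:
    router_id: str      # IGP router ID of the advertising node

@dataclass(frozen=True)
class Link:
    local_node: str     # router ID of the local end
    remote_node: str    # router ID of the remote end
    igp_metric: int     # carried in the BGP-LS path attribute

@dataclass(frozen=True)
class Prefix:
    node: str           # node advertising reachability
    prefix: str         # reachable IP prefix

def build_topology(nodes, links):
    """Reconstruct an IGP adjacency map from node and link objects."""
    topology = {n.router_id: {} for n in nodes}
    for link in links:
        topology[link.local_node][link.remote_node] = link.igp_metric
    return topology

nodes = [Node("1.1.1.1"), Node("2.2.2.2")]
links = [Link("1.1.1.1", "2.2.2.2", 10), Link("2.2.2.2", "1.1.1.1", 10)]
print(build_topology(nodes, links))
# {'1.1.1.1': {'2.2.2.2': 10}, '2.2.2.2': {'1.1.1.1': 10}}
```

Once the controller has this graph plus the TE attributes from the BGP-LS attribute, it can run path computation (for example, CSPF) centrally instead of on each router.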

Published - Mon, 13 Jun 2022