Total 369 Blogs

Created by - Orhan Ergun

Segment Routing Fast Reroute

Segment Routing Fast Reroute – Traffic Engineering with Segment Routing uses the LFA mechanism to provide 50 ms fast reroute capability. The current Segment Routing implementation for OSPF on Cisco devices uses regular LFA (Loop-Free Alternate) for fast reroute. Because LFA has topology limitations, it does not cover many failure scenarios. IS-IS, on the other hand, supports Topology-Independent LFA, and TI-LFA covers every failure scenario. As of today, Segment Routing is enabled on the ASR9000 and CRS1/3, and Cisco NX-OS software supports Segment Routing as well.

With LFA, you do not need to configure tunnels or complex link or node protection to get fast reroute capability. In the background, SPF runs twice for the destination prefixes that must be protected, in order to calculate the loop-free path. First, SPF finds the shortest path by calculating the primary path from the local router to the final router; second, SPF runs again on the same router to find a loop-free backup path. The backup path is installed in the FIB and used as soon as a failure is detected. This is how LFA works; it is not specific to Segment Routing.

You may think that all these steps are quite intensive for the CPU. On the contrary, they are not. After the first SPF runs on the local router, the same router runs the second SPF from its adjacent neighbor's point of view (OSPF and IS-IS have complete topology information within an area).

The disadvantage of MPLS TE-FRR is not only its complexity, but also its similarity with SONET/SDH ring protection.

Figure - MPLS traffic engineering vs segment routing

In the topology shown above, R1-R2-R4-R7 is the primary path for the traffic between R1 and R8. If we set up MPLS Traffic Engineering link protection for the R2-R4 link, R2-R3-R4-R5-R6 will be the protection path. When the link between R2 and R4 fails, the PLR (Point of Local Repair) will send the traffic over the alternate TE tunnel.
The tunnel traffic then reaches the MP (Merge Point), R4, and continues towards its final destination. As soon as the IGP converges, the Head End (R1) signals a new optimized LSP – R1-R2-R3-R5-R6-R8 – and the traffic moves to the new optimized LSP.

If you used Segment Routing in the above topology, R2 could use R3 as an LFA, because R3 would not send the traffic back to R2 or to the destination before R8. The traffic would follow the R1-R2-R3-R5-R8 path instead of R5-R6-R4-R7-R8, so two additional hops would be avoided.

MPLS Traffic Engineering cannot use ECMP. To have ECMP capability, you need to create two parallel TE tunnels between the Head End and the Tail End. With Segment Routing, the Node/Prefix SID is flooded throughout the domain, and all the intermediate devices use ECMP paths. Node SID and Prefix SID are the same thing, assigned to the device loopbacks.
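The two-SPF computation described above can be sketched in a few lines of Python. This is a toy model, not a router implementation: the three-node topology (`S`, `N`, `D`) and all link costs are made-up assumptions. It simply tests the RFC 5286 loop-free condition dist(N, D) < dist(N, S) + dist(S, D) for each neighbor other than the primary next hop.

```python
import heapq

def dijkstra(graph, src):
    # Standard SPF: shortest distance from src to every node.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def lfa_backups(graph, s, d, primary):
    # A neighbor N is a loop-free alternate for destination d when
    # dist(N, d) < dist(N, s) + dist(s, d): N will not loop the
    # traffic back through s. The primary next hop is skipped.
    dist_s = dijkstra(graph, s)          # first SPF, from the local router
    backups = []
    for n in graph[s]:
        if n == primary:
            continue
        dist_n = dijkstra(graph, n)      # second SPF, from the neighbor's view
        if dist_n[d] < dist_n[s] + dist_s[d]:
            backups.append(n)
    return backups

# Hypothetical topology: S-D direct cost 10, S-N cost 5, N-D cost 6.
graph = {
    "S": {"D": 10, "N": 5},
    "D": {"S": 10, "N": 6},
    "N": {"S": 5, "D": 6},
}
print(lfa_backups(graph, "S", "D", "D"))  # ['N']
```

TI-LFA goes further than this basic check by building a segment list to reach the post-convergence path, but the loop-free inequality is the same starting point.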

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

Segment Routing Key Points

Segment Routing (SR) leverages the source routing paradigm. A node steers a packet through an ordered list of instructions, called 'segments'. With Segment Routing, state is kept in the packet header, not on the router, which saves resources such as CPU and memory.

If you have 100 edge routers in your network and you enable MPLS Traffic Engineering edge to edge, you would have 100×99/2 = 4950 LSP states on your midpoint LSR. This is prevalent in many MPLS TE enabled networks. If you enable Segment Routing and evaluate the same midpoint case (since you assign a Prefix/Node SID to every edge router), the midpoint LSR would have 100 entries instead of 4950. As for scalability, everything is perfect. However, there is a caveat.

The segment list can easily get big if you use explicit routing for OAM purposes. If you do that, you may end up with 7-8 segments. In that case, you should check the hardware support. Cisco claims that they have performed tests on a number of service provider networks and that their findings show two or three segments are enough for most explicit path scenarios.

You can use Segment Routing to provide MPLS VPN service without using LDP for transport label distribution. Segment Routing also provides traffic engineering without the soft-state RSVP-TE protocol on your network; soft-state protocols require a lot of processing power. Although Segment Routing does not have admission control, you can use a centralized controller to specify, for instance, a 50 Mbps LSP path for traffic A and 30 Mbps for traffic B, which still allows you to do traffic engineering.

Segment Routing provides fast reroute without RSVP-TE, and you do not need thousands of forwarding states in the network, as it uses IP FRR technology, specifically Topology-Independent LFA. Segment Routing has many use cases.
This article explains MPLS VPN, Traffic Engineering, and Fast Reroute; Dual Plane topologies are another use case for operators. With Segment Routing traffic engineering, you can have ECMP capability, which is very difficult to achieve with MPLS Traffic Engineering because you need to create two tunnels. There are other use cases such as egress peer engineering. Today, this can be achieved with complex BGP policy or LISP; with Segment Routing, however, BGP egress peer engineering is much easier. I will explain this process and other use cases in a separate article.

Major vendors – including Alcatel-Lucent, Ericsson, and Juniper – support Segment Routing. If you have devices that support LDP but not Segment Routing, Segment Routing can interwork with the LDP-enabled devices; the Segment Routing Mapping Server provides this interworking functionality. One of Cisco's objectives is for Segment Routing to provide native IPv6 transport. Today, Segment Routing supports IPv6 better than MPLS does.
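The midpoint-state arithmetic above is simple enough to verify directly. This is just a back-of-the-envelope check of the article's numbers, not vendor data:

```python
def full_mesh_lsp_states(n_edge):
    # Edge-to-edge MPLS TE: a midpoint LSR that sees the whole mesh
    # carries one LSP state per pair of edge routers -- quadratic growth.
    return n_edge * (n_edge - 1) // 2

def sr_midpoint_entries(n_edge):
    # Segment Routing: the midpoint only needs one Prefix/Node SID
    # entry per edge router -- linear growth.
    return n_edge

print(full_mesh_lsp_states(100))  # 4950
print(sr_midpoint_entries(100))   # 100
```

At 1000 edge routers the gap becomes 499,500 LSP states versus 1000 SID entries, which is why the state argument dominates the scalability discussion.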

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

Segment Routing Traffic Engineering

Segment Routing Traffic Engineering - First, you need to remember MPLS Traffic Engineering operation. MPLS Traffic Engineering requires four steps, as shown below:

1. Link information such as bandwidth, IGP metric, TE metric, and SRLG is flooded throughout the IGP domain by the link state protocols.
2. The path is calculated either with CSPF in a distributed manner or with offline tools in a centralized fashion.
3. If a suitable path is found, it is signalled via RSVP-TE, and RSVP assigns the labels for the tunnels.
4. The traffic is placed in the tunnels.

Figure - IP MPLS Traffic Engineering

In the diagram shown above, if traffic flows between R1 and R5, when the packet reaches R2 the IGP chooses the top path as the shortest path, because the cost from R2 to R5 through R3 is smaller than that from R2 to R5 through R6. As you must have observed, the R2-R6-R7-R4 links are not used during this operation. With MPLS Traffic Engineering, both the top and the bottom path can be used. The top path is a high-latency, high-throughput path; as a result, it can be used for data traffic. The bottom path is a low-latency, low-throughput, expensive link; thus, it can be used for latency-sensitive traffic, including voice and video. To complete this operation, we need to create two MPLS Traffic Engineering tunnels: one tunnel for data and the other for voice traffic. After doing that, we can use the CBTS (Class-Based Tunnel Selection) option of MPLS TE to place voice traffic into the voice LSP (TE tunnel). Next, we can identify data traffic and place it into the data LSP (TE tunnel).

How can we achieve the traffic engineering operation with Segment Routing? I have explained the Node/Prefix SID in one of the previous sections. Now you know that a Node/Prefix SID is assigned to the loopback addresses of all Segment Routing enabled devices, and that the SID is unique in the routing domain.
There is also another SID type flooded in the IGP packets: the Adjacency Segment ID. While the Adjacency SID is unique to the local router, it is not globally unique like the Node/Prefix SID. Routers automatically allocate an Adjacency SID to their interfaces when segment routing is enabled on the device.

In the topology shown above, R2 allocates an Adjacency SID to its interface towards R6. Label 22001 is the Adjacency SID of R2's interface towards R6, and it is used for steering traffic away from the shortest path (perhaps you do not want to use only the shortest path). Label 16005 is the Node/Prefix SID of R5. If the packet is sent from R1 to R5 with two SIDs, 22001 and 16005 (since R2 advertises 22001 for its local adjacency), R1 will send the packet to R2; R2 will pop 22001 and send the remaining packet towards R6 with 16005, which is the Node/Prefix SID of R5. R6 will send the packet to R7 because that is its shortest path to R5. The Node/Prefix SID is used for shortest-path routing and has ECMP capability; the Adjacency SID is used for explicit path routing.

NOTE: While the Adjacency SID is used for explicit path routing, the Node/Prefix SID follows the shortest path.

I will provide more examples so that you can understand how to use Node and Adjacency SIDs to provide an explicit path for traffic flows.

Figure - Node and adjacency segment id

Our aim is to send traffic between router A and router J; however, we do not want to use the E-G link. Instead, we will use the A-C-E-F-H-J path. To achieve our aim, we first need to reach E. After that, we will divert the traffic onto the E-F link. From F, the traffic follows the shortest path to J, the final destination. Router A should put three labels/Segment IDs on the packet. The first SID, 16001, takes the packet to router E. The second SID is 16002, which is the Adjacency SID of router E's interface towards router F. This SID is locally significant to E; router C does not act on it.
The third SID is 16003, which is the Node/Prefix SID of router J. Router C receives the packet with three SIDs, pops 16001, and sends the remaining two labels to router E. Router E receives the packet with 16002 on top, which is the Adjacency SID towards router F; E pops it and sends the remaining packet to router F. Router F receives the packet with SID 16003, which is the Node/Prefix SID of router J, so F follows the shortest path, sending the packet to router H and swapping 16003 for 16003 (the label is unchanged). If router J advertises the implicit null label, router H pops 16003 and performs PHP, sending the plain IP packet to router J. If we wanted to carry out this operation with MPLS-TE, we would create an explicit path by providing an ERO.

Also read: Segment routing fundamentals
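The label-stack walk above can be simulated with a small Python sketch. The SID values come from the example; the next-hop tables are hand-filled assumptions for the A-C-E-F-H-J topology, and PHP is left out for simplicity (the last label is popped at its owner instead of the penultimate hop).

```python
# Toy forwarding walk for the A-C-E-F-H-J example.
node_sid = {16001: "E", 16003: "J"}          # Node/Prefix SID -> owning router
adj_sid = {("E", 16002): "F"}                # (router, Adjacency SID) -> forced next hop
spf_next_hop = {                             # hand-filled shortest-path next hops (assumed)
    ("A", "E"): "C", ("C", "E"): "E",
    ("F", "J"): "H", ("H", "J"): "J",
}

def forward(ingress, stack):
    router, path = ingress, [ingress]
    while stack:
        top = stack[0]
        if node_sid.get(top) == router:
            stack.pop(0)                     # reached the segment endpoint: pop
            continue
        if (router, top) in adj_sid:
            nxt = adj_sid[(router, top)]     # Adjacency SID: pop, use that exact link
            stack.pop(0)
        else:
            # Node/Prefix SID: follow the shortest path, label unchanged
            nxt = spf_next_hop[(router, node_sid[top])]
        router = nxt
        path.append(router)
    return path

print(forward("A", [16001, 16002, 16003]))  # ['A', 'C', 'E', 'F', 'H', 'J']
```

Note how only router E consumes the Adjacency SID 16002; every other hop forwards on a Node SID along its own shortest path, which is exactly where ECMP would apply.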

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

What is Deadlock situation in MPLS Traffic Engineering ?

What is a deadlock situation in MPLS Traffic Engineering? What happens when a deadlock occurs? Is there any mechanism to prevent it? I will explain all the details in this post.

A deadlock occurs when an LSP needs to move to another link but cannot, due to lack of available bandwidth. I will show you the case with the topology below.

Figure - 1 Deadlock Problem in Distributed MPLS Traffic Engineering

There are two RSVP-signaled MPLS TE LSPs in the above figure. The Red LSP is signaled with 400 Mbps and the Blue LSP is signaled with 400 Mbps capacity. Link capacities are 1000 Mbps, except P1-P3 and P2-P3, which are 500 Mbps. Reservation with RSVP is done in the control plane; RSVP does not take actual data-plane link utilization into account. This means that if you send 600 Mbps of traffic over a 400 Mbps signaled LSP, the traffic is not dropped, and as long as the physical link is not congested you would not see any problem. The control and data planes are not synchronized by default in an MPLS Traffic Engineering deployment with RSVP signaling.

In the above figure, let's say actual traffic utilization reaches 800 Mbps on the Red LSP, while usage on the Blue LSP is still 400 Mbps. Combined, these two LSPs send 1200 Mbps over a 1000 Mbps link. Thus, the traffic over both LSPs is affected: the Red LSP's traffic increase hurts the Blue LSP as well.

In distributed traffic engineering, two features are used to avoid the deadlock problem: LSP priorities and Auto Bandwidth. With Auto Bandwidth, routers check the actual interface usage – the data-plane traffic – and adjust the RSVP control plane accordingly, so the LSP is resized. But without priority, even when the Red LSP is resized to 800 Mbps, the Blue LSP is not moved to the bottom path. Both priority and Auto Bandwidth together would work in this topology. But what if the Blue LSP's priority is better and it tries to force the Red LSP to move to an alternate link? The solution gets very complex.
Thus, the better solution for the deadlock problem is a centralized approach. If you had a centralized node that knew the real-time topology information, the traffic demands, and the active LSPs in the network, that node could move the LSPs to alternate links accordingly. The centralized node would take latency, bandwidth, and many other constraints into account while placing the LSPs.
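The control-plane/data-plane mismatch is easy to see with toy numbers. The 400/800 Mbps figures mirror the example above; the code is a sketch of the bookkeeping, not any router feature:

```python
LINK_CAPACITY = 1000  # Mbps, the shared link carrying both LSPs

reserved = {"red": 400, "blue": 400}   # what RSVP signalled (control plane)
measured = {"red": 800, "blue": 400}   # what the traffic actually does (data plane)

# RSVP admission control only checks reservations, so it sees no problem:
print(sum(reserved.values()) <= LINK_CAPACITY)  # True
# The data plane, however, is congested: 1200 Mbps on a 1000 Mbps link.
print(sum(measured.values()) <= LINK_CAPACITY)  # False

def auto_bandwidth(reserved, measured):
    # Auto Bandwidth resizes each reservation to the measured rate,
    # re-synchronising the control plane with the data plane. Without
    # LSP priorities, though, nothing forces the Blue LSP to move.
    return {lsp: measured[lsp] for lsp in reserved}

print(auto_bandwidth(reserved, measured))  # {'red': 800, 'blue': 400}
```

After the resize, the control plane finally sees the oversubscription (1200 > 1000), which is the point at which a priority scheme or a centralized controller has to decide which LSP moves.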

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

Bin Packing Problem of Distributed Traffic Engineering

Bin Packing Problem? What is bin packing? In this post I will explain the Bin Packing Problem in MPLS Traffic Engineering. It is normally a very complex topic, but I will make it simple for you. And trust me, it is important to understand.

Before I start explaining the Bin Packing Problem, let's just remember the purpose of MPLS Traffic Engineering. Very easy: MPLS Traffic Engineering is deployed to use all available capacity on the network, or to maximize the capacity usage of the entire network – so, in the end, 'Money'! If any link would otherwise sit idle, perhaps due to SPF shortest path selection, MPLS Traffic Engineering gives us the ability to utilize those underutilized or unused links as well. I explained a couple of other main reasons for MPLS Traffic Engineering in this post, but just remember that maximizing the capacity usage of our networks is the primary technical goal.

The Bin Packing Problem is just the opposite. Although the purpose of MPLS Traffic Engineering is to maximize usage by sending the traffic over all possible paths optimally, as you will see from the example below, this is unfortunately not always the case.

Figure - 1 LSP Blocking Due to Bin Packing

In the above topology, PE1 wants to signal an RSVP LSP to PE3 with 300 Mbps capacity. From PE1 to PE3, 300 Mbps can be signaled over two paths: the first is PE1-P1-P2-PE3 and the second is PE1-P1-P3-P2-PE3. Obviously the second one is the longer path (the IGP cost is higher over the bottom path), thus the PE1-to-PE3 LSP is set up over the Red LSP (PE1-P1-P2-PE3). When this LSP is signaled over the top path, only 700 Mbps of link capacity remains there. The bottom path has not been used yet, so it still has 500 Mbps of capacity. So far so good – no problem!

But PE2 will not stay there without any traffic forever. And here it is: PE2 wants to set up an 800 Mbps RSVP-signaled LSP to PE3. From PE2 to PE3, two paths are physically available: PE2-P1-P3-P2-PE3 and PE2-P1-P2-PE3.
Although there are two available paths from PE2 to PE3, neither of them can be used. The top path has only 700 Mbps of capacity left, because 300 Mbps is used by the Red LSP (PE1's LSP). The bottom path cannot be used either, because its maximum available capacity is 500 Mbps. This is the Bin Packing Problem.

If the PE2 LSP request had come first, it could only have used the top path, and when the PE1 request came, the PE1 LSP could have been set up over the bottom path. But there is no coordination between PE1 and PE2. They don't talk to each other and say, 'Hey, here is my traffic demand, you should use the top path and I will use the bottom path,' and so on. The PE1 request came first, its demand was satisfied over the IGP shortest path, and the Red LSP was signaled. If the order had been different, at least in this topology, we wouldn't have an issue. This request (bandwidth, in this case) ordering problem is called the 'race condition' problem. Whoever comes first gets the bandwidth, baby!

How can the Bin Packing Problem be avoided? Do we have a solution for this? Fortunately, yes. The first solution is LSP priority: by giving a higher priority to the PE2 LSP, when its request comes, the PE1 LSP is moved to the bottom path. The second is changing the computation mechanism.

Note: The part below will be a bit more technical. I promised to keep it simple, but it might be harder – let's try.

Distributed path computation with MPLS is done with CSPF (Constrained Shortest Path First). Each router knows only its own traffic demand, and it checks the available bandwidth along the candidate paths. Routers don't know the traffic demands of the other routers. But don't confuse traffic demand with network topology.
They of course know each other's topology, because every router in an area or level (OSPF and IS-IS, respectively) has the same LSDB (Link State Database) and TED (Traffic Engineering Database; the TED exists only when traffic engineering is enabled). Since they don't know each other's traffic demands, if there were a centralized node that talked with these routers and learned the traffic demands and the network topology, it could calculate the paths on behalf of all the routers and tell them which path to use for their LSPs. Some of you are probably saying that this is the SDN approach, and that's correct. In MPLS networks, a PCE (Path Computation Element) with some extensions does exactly that.

By having a centralized view of traffic demands, we would avoid the Bin Packing Problem and would be able to signal the required LSPs in a timely manner and over optimal paths. The end result for the above topology would be as below.

Figure - 2 Optimal Bin Packing - No Problem for the LSP Demands

PE1 is signaled through the bottom path (Red LSP) and PE2 is signaled through the top path (Blue LSP). 200 Mbps remains on the top path (1000 Mbps minus the 800 Mbps demand of the Blue LSP), and 200 Mbps remains on the bottom path (500 Mbps minus the 300 Mbps demand of the Red LSP). There is no need for a complex priority scheme in this case.
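The race condition above is easy to reproduce with a greedy first-fit placement, which is roughly what uncoordinated distributed CSPF amounts to. The capacities and demands are the ones from the figures; the path names and the first-fit model are simplifying assumptions.

```python
def place(demands, paths):
    # Admit each demand, in arrival order, on the first path with
    # enough headroom -- paths are listed shortest (preferred) first.
    free = dict(paths)
    placed = {}
    for lsp, bw in demands:
        for path, cap in free.items():
            if bw <= cap:
                free[path] -= bw
                placed[lsp] = path
                break
        else:
            placed[lsp] = None  # blocked: the race-condition loser
    return placed

paths = {"top": 1000, "bottom": 500}  # top = PE-P1-P2-PE3, bottom = via P3

# Arrival order from the article: PE1 (300 Mbps) wins the race, PE2 is blocked.
print(place([("PE1", 300), ("PE2", 800)], paths))
# Reverse the order and both demands fit -- no extra capacity needed.
print(place([("PE2", 800), ("PE1", 300)], paths))
```

The second call produces exactly the "optimal bin packing" outcome in Figure 2: PE2 on the top path, PE1 on the bottom path. A PCE avoids depending on arrival order by seeing all demands at once.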

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

IS-IS Suboptimal Routing Design

IS-IS Suboptimal Routing - If you design a multi-level IS-IS network and you have more than one exit (L1-L2 router) from the Level 1 domain, you will likely create suboptimal routing. Multi-level IS-IS design is for large-scale networks; most real-life networks use only flat Level 2 IS-IS as their interior gateway protocol (IGP).

In the figure shown above, Router A is in a Level 1 IS-IS domain. Router B is also in a Level 1 IS-IS domain, but in a different area. Router A has two exit points/default gateways (L1-L2 routers) to reach Router B.

In IS-IS, Level 1-Level 2 routers don't send anything except a default route (the ATT bit in the Level 1 LSP) towards the internal Level 1 routers. Thus, Router A only trusts its Level 1-Level 2 routers' information; it doesn't know the entire IS-IS topology, as it would in a flat Level 2 IS-IS topology. Both L1-L2 routers advertise the same subnet towards Router A for Router B's network; only the metric is different. The left L1-L2 gateway sends the route of Router B with metric 5; the right L1-L2 sends it with metric 10.

Because of that, Router A chooses the left L1-L2 router as its exit point/default gateway to reach Router B. Obviously, this creates suboptimal routing, since the left L1-L2 router sends the packet through the top routers and the packet travels more hops – an effect that we don't want in our design.

Suboptimal routing is acceptable if you know the requirements of the applications. Some applications can tolerate suboptimal routing, since their timeout, delay, and jitter expectations may not be sensitive. In sum, putting the low-end devices into an L1 domain provides fault isolation, which in turn provides scalability.
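The exit-selection logic reduces to a few lines. All numbers here are illustrative assumptions: the point is only that Router A decides on the metric advertised into Level 1 and cannot see the Level 2 cost hidden behind each exit.

```python
# What Router A sees for Router B's prefix from each L1-L2 exit:
advertised = {"left": 5, "right": 10}
# The Level 2 cost beyond each exit -- invisible to Router A (assumed values):
hidden_l2 = {"left": 40, "right": 10}

# Router A's choice is based purely on the advertised metric:
chosen = min(advertised, key=advertised.get)
# The truly shortest exit accounts for the hidden Level 2 cost too:
optimal = min(advertised, key=lambda e: advertised[e] + hidden_l2[e])

print(chosen)   # 'left'  -- what Router A actually picks
print(optimal)  # 'right' -- the shorter end-to-end path (20 vs 45)
```

Route leaking (RFC 5302) into Level 1 is the usual fix when this gap between the chosen and the optimal exit matters for the applications.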

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

OSPF Design Discussion

OSPF Design Discussion – In the picture below, where should you place an OSPF ABR (Area Border Router) to scale the OSPF design, and why? Please share your thoughts in the comment box below. The first 5 correct answers will get my CCDE Preparation Workbook for free. Please subscribe to the email list so I can see your email address for communication.

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

OSPF Design Challenge

OSPF Design Challenge - OSPF and MPLS are the two most commonly used technologies in an MPLS VPN environment. In this post I will share a mini design scenario with you and ask a couple of questions about the fictitious company's architecture. When you attend my CCDE class, we will work on tens of scenarios similar to this one. Last week I published my first mini design scenario, about MPLS VPN and DMVPN, and I am thinking of publishing a new one every Thursday.

In the topology above, Company A has a core ring topology from R1 through R8. There is heavy direct traffic between the R3 and R4 core routers, so the network engineers decided to connect them directly and turn the topology into a partial mesh. There is no east-west traffic between the R9 and R10 edge routers; almost all traffic is north-south. Company A sends only a default route from the core to the edge routers. They know this might cause a suboptimal traffic pattern, but it is not an issue for Company A's applications. The company wanted to create multiple areas, since edge routers such as R9 and R10, and the other routers not shown in the topology, have resource (CPU/memory) concerns. Company A's network engineers know that flapping links, or even adding a loopback interface on any router, would trigger a full SPF run on the poor edge routers. For simplicity, the other routers connected to the ring are not shown.

Company A's network engineer has some questions for orhanergun.net readers.

Question 1: Is it a good idea to separate the core routers into two areas (Area 0 and Area 10)?

Update: No. The company has only a small number of core routers, and even if it had a thousand core routers, you could have them all in one area. Using the prefix suppression feature, infrastructure links can be removed from the Router LSA, so the routers carry only each other's loopback addresses.

Question 2: Which area should the R2-R4 and R3-R4 links be placed in, and why?
Update: In order to prevent suboptimal routing, enabling OSPF Multi-Area Adjacency is better. Also, if you put both links in the non-backbone area, R4 would no longer be an ABR.

Question 3: Should I have a direct link between R9 and R10?

Update: No. In the background information section, we are told that there is no traffic between those routers; the traffic pattern is north-south. Although not shown in the topology, the company has many edge routers, as stated in the background information; a direct link would just increase the LSA database of the poor routers, and it would make troubleshooting harder as well.

The network manager of Company A thanks you and sends you an email. Here it is: "Hi, we want to have a BGP-free core design. We currently run BGP on all our core routers. At this stage, we don't want to have a BGP Route Reflector, since we want to have path visibility."

Question 4: What would you suggest for Company A's BGP solution?

I would suggest that they enable MPLS. At this stage, in the real exam, you might be asked whether you need additional information. If the company wants a scalable VPN solution, MPLS also gives them MPLS VPN. If they enable MPLS on the network, the core devices don't have to run BGP. You can't use a single-area/flat IGP in this network, since the requirements state that the edge devices have resource problems; you need to create a boundary and put the edges in different areas to protect them.

Question 5: Would your solution work with all the above requirements?

Update: No. In the background information section, you are told that Company A sends only a default route towards the edge routers. If you run MPLS, unless you enable RFC 5283 or have a Seamless MPLS design, you need to have the /32 loopback addresses of the edge devices in the non-backbone area. If you receive only a default route, you need to leak the loopback addresses from the core to the edge in the IGP.
To get a great understanding of SP networks, you can check my newly published "Service Provider Networks Design and Perspective" book. It covers SP network technologies and also explains a fictitious SP network in detail. Click here

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

MPLS VPN and DMVPN Design Challenge

MPLS VPN and DMVPN Design - In small and medium businesses, MPLS VPN is mostly used as the primary connectivity and DMVPN as a backup. In some cases you might see that DMVPN is the only circuit between the remote offices and the datacenter/HQ, or that MPLS VPN is the primary for some applications and DMVPN for the others. As an example, a high-throughput, high-latency DMVPN link might be used for data traffic, and a low-throughput, low-latency MPLS VPN link for voice and video.

In this post I will give you a mini network design scenario and ask some questions; we will discuss the answers in the comment box below. When you attend my CCDE class, we will work on tens of scenarios similar to this one. I will update the scenarios every week with my answers.

Update: I updated the post with my answers. I also published a new scenario, which you can reach from here.

Background info: In the above topology, the customer wants to use MPLS L3 VPN (on the right) as its primary path between the remote office and the datacenter. The customer uses EIGRP AS 100 for the local area network inside the office and runs EIGRP AS 200 over DMVPN. The service provider doesn't support EIGRP as a PE-CE protocol, only static routing and BGP. The customer selected BGP instead of static routing, since the cost community attribute can be used to carry the EIGRP metric over the service provider's MP-BGP session. Redistribution is needed on R2 between EIGRP and BGP (two-way). Since the customer uses different EIGRP AS numbers for the LAN and DMVPN networks, redistribution is needed on R1 too.

Question 1: Should the customer use the same EIGRP AS on the DMVPN and the LAN?

Update: No, it shouldn't. The customer's requirement is to use MPLS VPN as the primary path, and nothing specifies that only certain applications should use MPLS VPN while the others use DMVPN. If the customer runs the same EIGRP AS on the local area network and over DMVPN, EIGRP routes are seen as internal from DMVPN but external from MPLS VPN.
Internal EIGRP is preferred over external because of administrative distance, so the customer should use different AS numbers.

Question 2: What is the path between the remote office and the datacenter?

Update: Since redistribution is done on R1 and R2, the remote switch and the datacenter devices see the routes from both DMVPN and BGP as EIGRP external. Then the metric is compared. If the metric (bandwidth and delay in EIGRP) is the same, both paths can be used (Equal Cost Multipath, ECMP).

Question 3: Does the result fit the customer's traffic requirement?

Update: Yes. If the customer uses different EIGRP AS numbers on the LAN and DMVPN, then with just a metric adjustment, the MPLS VPN path is used as primary.

Question 4: What happens when the primary MPLS VPN link goes down?

Update: It depends. If you redistribute the datacenter prefixes received by R1 onto R2, R2 sends the traffic towards the switch, and the switch uses only R1. Traffic from the remote office to the datacenter goes through the Switch-R1-DMVPN path. From the datacenter side, since those prefixes will not be known through MPLS VPN, only the DMVPN link is used. So the DMVPN link becomes primary when the failure happens.

Question 5: What happens when the failed MPLS VPN link comes back?

Update: This is the tricky part. R2 receives the datacenter prefixes over the MPLS VPN path via EBGP, and also from R1 via EIGRP. When R2 receives the prefixes from R1 as EIGRP routes, those prefixes shouldn't be redistributed on R2 back towards the MPLS VPN path. If you don't redistribute them, once the link comes back, the datacenter prefixes will be received via both DMVPN and MPLS VPN and will appear on the office switch as EIGRP external. If you do redistribute them on R2, when the link comes back, R2 continues to use the MPLS VPN path, so the switch can do load sharing, or with a metric adjustment you can force MPLS as primary.
If the devices are Cisco routers, or from another vendor that takes the BGP weight attribute into consideration for best path selection, the weight of the redistributed prefixes would be higher than that of the prefixes received through MPLS VPN, so R2 would use the Switch-R1-DMVPN path.

To get a great understanding of SP networks, you can check my newly published "Service Provider Networks Design and Perspective" book. It covers SP network technologies and also explains a fictitious SP network in detail.
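The administrative-distance reasoning behind Question 1 can be sketched quickly. The AD values are the Cisco defaults; the path names and tuple format are placeholders for illustration.

```python
# Cisco default administrative distances for EIGRP route types.
AD = {"eigrp_internal": 90, "eigrp_external": 170}

def best_path(candidates):
    # candidates: list of (path, route_type); lowest AD wins.
    return min(candidates, key=lambda c: AD[c[1]])

# Same EIGRP AS on LAN and DMVPN: the DMVPN route stays internal,
# while the route redistributed back from MPLS VPN is external.
same_as = [("dmvpn", "eigrp_internal"), ("mpls_vpn", "eigrp_external")]
print(best_path(same_as))  # ('dmvpn', 'eigrp_internal') -- DMVPN always wins

# Different AS numbers: both routes arrive as external, the AD ties,
# and the EIGRP metric can then be tuned to prefer the MPLS VPN path.
diff_as = [("dmvpn", "eigrp_external"), ("mpls_vpn", "eigrp_external")]
print(AD[diff_as[0][1]] == AD[diff_as[1][1]])  # True: decision falls to metric
```

This is why the scenario insists on different AS numbers: with AD out of the way, a simple metric adjustment is enough to keep MPLS VPN primary.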

Published - Tue, 26 Nov 2019