Discussions

Total 22 Blogs

Created by - Orhan Ergun

Do You Really Need Quality of Service?

Quality of service (QoS) is the overall performance of a telephony or computer network, particularly the performance seen by the users of the network. That is the Quality of Service definition from Wikipedia. Performance metrics can be bandwidth, delay, jitter, packet loss, and so on. Two Quality of Service approaches have been defined by the standards organizations: IntServ (Integrated Services) and DiffServ (Differentiated Services). In this post, I will not explain each method or the specific tools used in each one. Instead, I will look at which method makes sense in a particular design and which Quality of Service tools can solve user needs without compromising the network design goals.

IntServ requires each and every flow to request bandwidth from the network, and the network reserves the required bandwidth for the user for the duration of the conversation. Think of it as on-demand circuit switching: every flow of every user has to be remembered by the network. This clearly creates a resource problem (CPU, memory, bandwidth) on the network, so it was never widely adopted. Although with RSVP-TE (RSVP Traffic Engineering) a particular LSP can ask the network nodes for bandwidth, which the nodes then reserve, the number of LSPs between the edge nodes of a network is orders of magnitude smaller than the number of individual user flows.

The second Quality of Service approach, DiffServ (Differentiated Services), doesn't require a reservation. Instead, flows are aggregated and placed into classes, and the network operator can then configure each node to treat the aggregated classes differently. Obviously, it is scalable compared to the IntServ Quality of Service model.

When you practice Quality of Service, you learn the Classification, Marking, Queueing, Policing, and Shaping tools. And you are also told that in order to give the user the best Quality of Service, you need to deploy it end to end. But where are those ends? The names of the nodes differ based on the business. On an Enterprise campus, your access switch is one end, and the branch router, data center virtual or physical access switches, and Internet gateways might be the other end. In the Service Provider business, the Provider Edge router is one end, and other Provider Edge routers, data center virtual or physical access switches, Internet gateways, and service access devices such as DSLAMs and CMTS devices might be the other end. So an end-to-end approach will fail, since the end-to-end domain can be too broad and contain too many devices to manage.

But some tools definitely make sense in some places in some networks. For example, Policing in Service Provider networks. It can be used for billing purposes: the provider can drop the excess usage or charge for a premium service. Policing is deployed together with classification and marking, but you don't need to deploy QoS tools on the other nodes, so that classification and marking remains locally significant. Policing is also used for Call Admission Control. Imagine you have a 200 Mb link and each Telepresence flow requires 45 Mb of traffic. You can place 4 calls onto the link. If a 5th call is set up, all of the other 4 calls suffer as well, since packets have to be dropped (45 * 5 = 225 Mb, which exceeds the 200 Mb link plus the buffer).

Another Quality of Service tool is Queueing, and it is used wherever there is oversubscription. Oversubscription can be between the nodes (on the links) or within the nodes. If the congestion is within the node, queueing is applied in the ingress direction to protect some traffic (maybe real-time traffic) from Head of Line Blocking in the switching fabric of the node. Between the nodes, it is applied in the egress direction to protect selected traffic. The problem is that if there is enough traffic, the buffers (queues) fill up and eventually all the traffic is dropped, no matter which queueing method (LLQ, WFQ, CBWFQ) is used. So if you try to design end-to-end Quality of Service by enabling queueing to cover every possible oversubscription point in the network, you will fail. When congestion happens, some flows will just die a couple of milliseconds after the others. The design tradeoff here is adding more bandwidth versus engineering every possible congestion point. I am not talking only about the initial QoS design phase but also about the complexity QoS brings to the design. The network operator needs to manage, understand, and troubleshoot QoS in steady state and in the case of failure as well. Bandwidth is getting cheaper every day, but the complexity of Quality of Service will stay there forever.
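To make the Call Admission Control arithmetic above concrete, here is a minimal sketch in Python. It is only an illustration of the admission check, not a real policer or CAC implementation; the 200 Mb link and 45 Mb per Telepresence call are simply the numbers from the example above.

```python
# Minimal Call Admission Control (CAC) sketch for the example above:
# a 200 Mbps link and 45 Mbps per Telepresence call.
LINK_CAPACITY_MBPS = 200
CALL_BANDWIDTH_MBPS = 45

def can_admit(active_calls: int) -> bool:
    """Admit a new call only if the link can still carry it."""
    needed = (active_calls + 1) * CALL_BANDWIDTH_MBPS
    return needed <= LINK_CAPACITY_MBPS

for calls in range(6):
    print(calls, "active ->", "admit" if can_admit(calls) else "reject")
# 4 calls fit (4 * 45 = 180 Mbps); a 5th would need 225 Mbps and is rejected,
# which is exactly the situation a policer-based CAC is meant to prevent.
```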

Published - Sun, 10 Apr 2022

Created by - Orhan Ergun

OSPF Prefix Suppression helps a company run 200 routers in a single area!

OSPF Prefix Suppression helps a company run 200 routers in their network without any problem. You might think that some companies use more than 200 routers in their OSPF networks, so why is this post special? You will understand why in 10 minutes. Yes, that is true, but those companies have either a multi-area OSPF design or multiple processes to separate the flooding domains. Let's have a quick look at OSPF Prefix Suppression, then I will tell you what is special about this company.

OSPF Prefix Suppression is an IETF standard; RFC 6860 explains the details of the feature. Basically, it is a method for hiding transit-only networks in OSPF. A transit-only network is defined as a network connecting routers only. In OSPF, transit-only networks are usually configured with routable IP addresses, which are advertised in Link State Advertisements (LSAs) but are not needed for data traffic. In addition, remote attacks can be launched against routers by sending packets to these transit-only networks. By hiding transit-only networks, network convergence time and vulnerability to remote attacks can be reduced. 'Hiding' means that the prefixes are not installed in the routing tables of the OSPF routers. A cleaner routing table and fewer prefixes mean that, in case of failure, troubleshooting is much easier.

The company I mention in this post is a fixed DSL provider with approximately a million DSL customers: 2000 DSLAMs, 750 DSLAM POPs, 200 IP routers, and 50 BNGs. I will not explain the access network of this company in this post, only the OSPF network setup. Interestingly, they don't have MPLS in the network; they don't currently provide MPLS VPN service. Their OSPF is a flat design, meaning all 200 routers are in the backbone area, and OSPF Prefix Suppression is enabled, as you can understand from the title. They advertise only the /32s of the loopback interfaces for the BGP neighborships. Since they don't have MPLS on their network, as I said above, they are running BGP everywhere on their network. They have Access, Aggregation, and Core POPs, so three tiers of hierarchy. It is as common to have a hierarchical setup in the optical layer as it is to have hierarchy in the IP layer. They don't have any problem with the current OSPF network setup. In fact, there is a plan to enable MPLS with LDP and RSVP on their network. There is a project to enable MPLS Traffic Engineering and MPLS Traffic Engineering Fast Reroute, and having a flat OSPF network design will be an advantage for them, since they won't need to deal with Inter-area MPLS Traffic Engineering. So, OSPF Prefix Suppression is used in a real-life deployment with 200 routers, and the next time you read somewhere that you shouldn't place 40-50 routers in an OSPF area, remember this post.
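To picture what prefix suppression does, here is a rough Python sketch. It only illustrates the idea of filtering transit-only prefixes while keeping the loopback /32s; the prefixes and the /32-only filter are hypothetical, not this company's actual addressing or the exact RFC 6860 mechanism.

```python
import ipaddress

# Hypothetical link-state contents: loopback /32s plus transit-only point-to-point links.
advertised = [
    "10.0.0.1/32", "10.0.0.2/32",          # loopbacks, needed for the BGP sessions
    "192.168.1.0/31", "192.168.1.2/31",    # transit-only router-to-router links
]

def suppress_transit_only(prefixes):
    """Keep only the prefixes that data traffic actually needs (here: the /32 loopbacks)."""
    return [p for p in prefixes if ipaddress.ip_network(p).prefixlen == 32]

print(suppress_transit_only(advertised))
# ['10.0.0.1/32', '10.0.0.2/32'] -> smaller routing tables and fewer remotely
# reachable transit addresses, which is the benefit RFC 6860 describes.
```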

Published - Wed, 09 Feb 2022

Created by - Orhan Ergun

What is MPLS Traffic Engineering and Why do you need MPLS-TE?

Why do you need MPLS Traffic Engineering? MPLS TE is a mechanism that provides cost savings in an MPLS network. How can cost savings be achieved? How is traffic steered onto paths which wouldn't be used in normal circumstances? I will explain in this post. For more detail about MPLS TE and any other MPLS topics, you can check our MPLS Training.

Let's look at the topology below (Figure - MPLS Traffic Engineering). If traffic is sent from R2 to R6 without MPLS TE, the top path R2-R5-R6 is always used, because the IGP cost of the top path is 15 + 15 = 30, while the IGP cost of the bottom path between R2 and R6 (R2-R3-R4-R6) is 45. Even if you enable MPLS (without Traffic Engineering, which means without RSVP) on this topology, MPLS follows the IGP shortest-path decision, so the top path is used and the bottom path stays idle. So although you have capacity on the bottom path between R2 and R6, it is not used because of the higher total IGP cost. (LDP always follows the IGP's shortest path for the traffic between source and destination.)

MPLS Traffic Engineering is enabled by RSVP (Resource Reservation Protocol). RSVP has been extended to support MPLS TE, and the extended protocol is called RSVP-TE. The important extensions are label advertisement and source routing capabilities. So, if you have an MPLS network today, maybe to provide VPNs (LDP is the label distribution protocol in many cases), you can't have MPLS TE without enabling RSVP-TE. (There is a centralised MPLS Traffic Engineering approach which doesn't require RSVP-TE, but that is the topic of another blog post.)

As you can see, the overall idea (at least the initial use case) of MPLS Traffic Engineering is to use all the available paths between source and destination in your network efficiently. In the topology above, if you didn't send the traffic over the bottom path, you would have to increase the bandwidth capacity of the top path even though you have idle capacity in your network. By enabling MPLS TE, you achieve cost savings. Today (as of June 2017), MPLS-TE is used by many companies either for the reason above or just for traffic protection (Fast Reroute), which I will explain in a separate post.

What kinds of topologies require MPLS TE? What are the other advantages? How does this solution technically work? I will answer all of these questions in individual posts; check the MPLS category from the right sidebar menu of orhanergun.net/blog. By the way, food for thought: do you think MPLS Traffic Engineering is the only mechanism to send traffic to the alternate path (the bottom path in the above topology)? Check the 'Why you should place less emphasis on MPLS TE' post after you think about it, at least for a minute.
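The IGP cost arithmetic above can be reproduced with a small shortest-path computation. The Python sketch below, with the link costs assumed from the example, only shows the decision LDP-labelled traffic follows (lowest total IGP cost) versus the explicit path an RSVP-TE LSP could be pinned to.

```python
import heapq

# Link costs assumed from the example topology: top path R2-R5-R6 costs 15 + 15 = 30,
# bottom path R2-R3-R4-R6 costs 45 in total.
graph = {
    "R2": {"R5": 15, "R3": 15},
    "R5": {"R6": 15},
    "R3": {"R4": 15},
    "R4": {"R6": 15},
    "R6": {},
}

def igp_cost(src, dst):
    """Dijkstra over IGP link costs -- the path that LDP-labelled traffic follows."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        for nbr, cost in graph[node].items():
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                heapq.heappush(heap, (d + cost, nbr))
    return float("inf")

print(igp_cost("R2", "R6"))  # 30 via R2-R5-R6; the 45-cost bottom path stays idle
# An RSVP-TE LSP, by contrast, can be pinned to the explicit path R2-R3-R4-R6,
# putting that idle capacity to work.
```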

Published - Wed, 09 Feb 2022

Created by - Orhan Ergun

Network Engineer Salary

Network Engineer Salary, Average Network Engineer Salary, and Senior Network Engineer Salary: many people have been searching for these terms on OrhanErgun.Net for some time. Many people also have been asking me how much they can earn monthly if they start a Network Engineering career, or how much they can get as an experienced Senior Network Engineer if they change countries. Check the CCNP Course and CCIE course content for becoming a better Network Engineer and, definitely, for getting a higher salary as well.

I think the answer depends on many criteria. Since this post will be read by people all around the world, it is important to share some insights on the topic. Before talking about the dependencies, you should know some facts about the CCNA, CCNP, and CCIE certifications. These are some of the most popular certifications that help you get or change jobs. Of course, as of 2022, Cloud Computing and Network Automation jobs are getting very popular and there are certifications for those technologies as well, but I will use Cisco examples in this post. Unlike CCDE, the Cisco CCNP and Cisco CCIE certifications are known very well by recruiters, as they have been posted as job requirements for decades. There are thousands of certified people in the world, especially CCNP Enterprise and Cisco CCIE Routing and Switching (the new name is Cisco CCIE Enterprise Infrastructure); for the CCIE Enterprise certification alone, we are talking about 50,000+ people.

At the beginning of this post, I said that the CCIE salary depends on many criteria. In general, these affect the Network Engineer salary: the country, the level of the Network Engineer position, changing the company, and years of experience. Most probably there are other things that affect the Network Engineer salary, but these are my observations.

The country is important because, even if you are a CCIE, as a technical person (not a manager or C-level) working in Turkey you can't get more than 2,500 USD on average. At CCNA level, we can't talk about more than 1,000 USD per month. If you work in the GCC (Dubai, Qatar, Kuwait, etc.), you can get around 8-15k USD per month with a CCIE, and it is very hard to get a job with only a CCNA. As a CCNP, if you can get a job, it would be between 5k-8k USD per month. If you are living in Africa, even if you get a CCIE, you won't get more than 5,000 USD as a technical person (there are many countries in Africa, but on average, salaries in almost all African countries are roughly the same). A CCNA would be around 500 USD-5k USD depending on the country, years of experience, and so on.

The position is important because, as I said above, as a technical person you can get around 2,500 USD in Turkey, but as a manager in an IT company it can be around 6,000 USD. This would be 20,000-25,000 USD in GCC countries. I won't list every country's Network Engineer salary here, but at the engineering level, a CCIE salary would be around 6,000 to 12,000 USD in Canada, the U.S., Australia, and Singapore as well. CCNP would be around 30% of it and CCNA would be around 40-50% of this range. These are the countries I am aware of; we have students all around the world, and when we interview them, these are real numbers. Also, in general, Network Engineers working in design and architecture positions get a higher salary than people working in Operations. So you can expect a somewhat higher salary as a Cisco CCDE certified person than as a Cisco CCIE certified person.

Changing the company: if you have been working in a company for years, getting a CCIE won't suddenly increase your salary if you stay in the same company. In fact, many of my friends and students changed their jobs after passing the Cisco CCIE exam. Last but not least, years of experience are important. You can have 10 years of experience but have stayed in the same company for 10 years, so your domain knowledge is limited, yet you passed the CCIE exam somehow. On the other side, there is a person with 10 years of experience who worked as a consultant and designed tens of networks from different domains (Enterprise, Service Providers, Telcos, and so on); the second person would be preferred by hiring managers even though both of these people have a CCIE, and the second one would be offered a higher salary. See Orhan Ergun's CCNP Enterprise and CCIE Training Course.

Summary: depending on the country, the position, years of experience, and whether you change the company after you pass some of the higher-level IT certifications, such as CCNP, Cloud certifications, or CCIE, the Network Engineer salary can range between 2,000 USD and 20,000 USD. We can help you pass these exams (Cloud Computing, Network Automation, CCNP, CCIE, and even CCDE-level exams), as we've helped thousands of our students in just a couple of years. You can take the Video-On-Demand CCNP or CCIE Course immediately to start your professional- or expert-level journey. Please don't hesitate to share your comments in the comment box below, and if you have a question regarding CCNP Training or CCIE Certification, you can reach us via [email protected]

Published - Tue, 08 Feb 2022

Created by - Orhan Ergun

Segment Routing vs. LDP vs. RSVP

When it comes to the transport network, a very hot discussion in the networking community these days is the comparison between Segment Routing, LDP, and RSVP. All three of these protocols can provide the transport infrastructure for the network, and on top of them MPLS L2 VPN, L3 VPN, and even Internet services are provided. More and more networks deploy Segment Routing. LDP has been the most common one for years, and RSVP commonly finds its place in networks for its Fast Reroute capability. I recorded a video with Jeff Tantsura in which we discussed the pros and cons of the different transport mechanisms and compared Segment Routing, LDP, and RSVP from many different perspectives. If you are choosing one of these protocols, or considering a migration, watch this video first. Also, if you have any question or comment, you can share it in the comment section of this post.

https://www.youtube.com/watch?v=db74MCBizvU&t=5s

Published - Thu, 19 Mar 2020

Created by - Orhan Ergun

Is Protocol Independent Multicast (PIM) really Protocol Independent?

Is Protocol Independent Multicast (PIM) really protocol independent? What is that dependency? Does PIM require IP, or can it work with non-IP protocols? If you don't know about PIM, please have a look here. One of my students asked whether PIM requires IP (Internet Protocol), which triggered me to share the answer with you. PIM uses unicast routing for loop prevention, for creating the tree (shared or source-based tree), and for sending Join and Prune messages towards the source of the tree (either the sender or the RP), as you will understand when you read the blog posts above on the website. In terms of the routing protocol, any unicast routing protocol can be used by PIM. In contrast to PIM's early predecessor, DVMRP (Distance Vector Multicast Routing Protocol), PIM can use a link-state, path-vector, or distance-vector routing protocol for all of the above protocol functions. Thus, to be precise, PIM is "unicast routing protocol independent," as compared to DVMRP. But on the other side, PIM is very much bound up with IP (Internet Protocol); it is not protocol independent in terms of network-layer protocols. In summary, PIM (Protocol Independent Multicast) requires IP at Layer 3, but it can use any unicast routing protocol (unlike Multicast OSPF or DVMRP) for tree formation, loop prevention, and the Join and Prune messages used to receive, or stop receiving, multicast packets. Thanks to Naresh Kumar Pendem for sending a message about the typo correction 'Loop preservation > Loop Prevention'. If you liked this post and would like to see more, please let me know in the comment section below. Share your thoughts so I can continue to write similar ones.
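The "unicast routing protocol independent" point comes down to the RPF (Reverse Path Forwarding) check: PIM simply consults whatever unicast routing table exists, regardless of which protocol populated it. Below is a minimal Python sketch of that idea with a made-up routing table; it is illustrative only, not how a real PIM implementation is structured.

```python
# Minimal RPF-check sketch: PIM only asks the unicast RIB "which interface do I use
# to reach this source?", no matter whether OSPF, IS-IS, BGP, or statics built that RIB.
unicast_rib = {          # hypothetical entries: source prefix -> upstream interface
    "10.1.1.0/24": "Gi0/0",
    "10.2.2.0/24": "Gi0/1",
}

def rpf_check(source_prefix: str, arrival_interface: str) -> bool:
    """Accept a multicast packet only if it arrived on the interface
    the unicast RIB would use to reach its source (loop prevention)."""
    return unicast_rib.get(source_prefix) == arrival_interface

print(rpf_check("10.1.1.0/24", "Gi0/0"))  # True  -> forward down the tree
print(rpf_check("10.1.1.0/24", "Gi0/1"))  # False -> fails RPF, drop the packet
```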

Published - Wed, 27 Nov 2019

Created by - Orhan Ergun

IPv6 in Enterprise, Should We Still Talk About It?

Five years ago, Jeff Doyle and I recorded a podcast on IPv6 in the Enterprise. We talked about IPv6 addressing plans and the adoption and growth rate of IPv6 in the Enterprise. In this post I would like to talk about the IPv6 deployment status, the challenges, and the possible business drivers for IPv6, and I will share my thoughts on IPv6 deployments in 2019. Comments are always welcome; please don't hesitate to share them in the comment box below.

Despite what IPv6 evangelists say, unfortunately I see IPv6 as a big failure, both for many Service Providers and for small and medium-scale Enterprise networks. Based on Google statistics, IPv6 traffic is around 15-20%, and this is mostly because of the IPv6 deployments of large content providers and CDN companies. When we consider individuals, such as home users, and small and medium-size businesses, more than 90% of the networks are still not IPv6 enabled as of 2019.

The biggest business drivers of IPv6 in Enterprise networks are business continuity and incoming traffic performance. Let me explain these two points a bit, and let's analyze whether they really are drivers that can scare the companies, whether companies see value in them, or whether they have easier and cheaper alternatives.

Business continuity as a business driver of IPv6 deployment in Enterprise networks: there are IPv6-only networks. When these networks need to reach the IPv4 Internet, NAT is required. When the content is on IPv6, an IPv6-only network can reach the content without NAT. If an Enterprise wants to continue doing business with the IPv6-only networks, the Enterprise needs to enable IPv6 on its network. That was the idea behind this business driver, but unfortunately the theory doesn't match the practice. Enterprises still don't deploy IPv6 and instead wait for the source traffic to be NATed, maybe through NAT64 + DNS64 or other translation mechanisms. The second business driver is the performance of having the content on both IPv4 and IPv6, so basically Dual Stack. Before continuing to read about this business driver, please read my post 'Is Dual Stack really the best IPv6 transition mechanism?'

Incoming network traffic performance as an IPv6 business driver for Enterprise networks: when Service Providers need to do NAT on their network to connect IPv4-only and IPv6-only networks, NAT adds extra latency and possibly breaks some applications. An application which may work through one level of NAT may not perform well, or at all, with multiple levels of NAT. If an Enterprise company's content is available on IPv6 and the Service Providers don't have to perform NAT but can pass the IPv6 traffic natively, performance can be improved by removing NAT from the path. Although this could be another valid business case, probably because of the lack of analysis to justify the performance improvement, other business priorities, or many other reasons, this business driver unfortunately doesn't push Enterprise networks to deploy IPv6. Other than competition or regulatory mandates, IPv6 in the Enterprise doesn't have realistic business drivers. If your competitor deploys IPv6, this pushes you to deploy it too, whether there is a true benefit for your network or not.

Published - Wed, 27 Nov 2019

Created by - Orhan Ergun

OSPF Design Discussion

OSPF Design Discussion: in the picture below, where should you place an OSPF ABR (Area Border Router) to scale the OSPF design, and why? Please share your thoughts in the comment box below. The first 5 correct answers will get my CCDE Preparation Workbook for free. Please subscribe to the email list so I can see your email address for communication.

Published - Tue, 26 Nov 2019

Created by - Orhan Ergun

MPLS VPN and DMVPN Design Challenge

MPLS VPN and DMVPN Design - in small and medium businesses, MPLS VPN is mostly used as the primary connectivity and DMVPN as a backup. In some cases you might see DMVPN as the only circuit between the remote offices and the datacenter/HQ, or MPLS VPN might be the primary for some applications and DMVPN for the others. As an example, a high-throughput, high-latency DMVPN link might be used for data traffic, and a low-throughput, low-latency MPLS VPN link for voice and video. In this post I will give you a mini network design scenario and ask some questions; we will discuss the answers in the comment box below. When you attend my CCDE class, we work on tens of scenarios similar to this one. I will update the scenarios every week with my answers. Update: I updated the post with my answers. I also published a new scenario, which you can reach from here.

Background info: in the topology above, the customer wants to use MPLS L3 VPN (on the right) as its primary path between the remote office and the datacenter. The customer uses EIGRP 100 for the Local Area Network inside the office and runs EIGRP AS 200 over DMVPN. The Service Provider doesn't support EIGRP as a PE-CE protocol, only static routing and BGP. The customer selected BGP instead of static routing, since the cost community attribute can be used to carry the EIGRP metric over the service provider's MP-BGP session. Redistribution is needed on R2 between EIGRP and BGP (two-way). Since the customer uses different EIGRP AS numbers for the LAN and the DMVPN network, redistribution is needed on R1 too.

Question 1: Should the customer use the same EIGRP AS on the DMVPN and the LAN?

Update: No, it shouldn't. The customer requirement is to use MPLS VPN as the primary path, and nothing is specified about only specific applications using MPLS VPN while the others use DMVPN. If the customer runs the same EIGRP AS on the Local Area Network and over DMVPN, the EIGRP routes are seen as internal from DMVPN but external from MPLS VPN. Internal EIGRP is preferred over external because of the administrative distance, so the customer should use different AS numbers.

Question 2: What is the path between the remote office and the datacenter?

Update: Since redistribution is done on R1 and R2, the remote switch and the datacenter devices see the routes from both DMVPN and BGP as EIGRP external. Then the metric is compared. If the metric (bandwidth and delay in EIGRP) is the same, both paths can be used (Equal Cost Multipath, ECMP).

Question 3: Does the result fit the customer's traffic requirement?

Update: Yes. Because if the customer uses different EIGRP AS numbers on the LAN and the DMVPN, with just a metric adjustment the MPLS VPN path is used as primary.

Question 4: What happens when the primary MPLS VPN link goes down?

Update: It depends. If you redistribute the datacenter prefixes which are received by R1 on R2, R2 sends the traffic towards the switch and the switch uses only R1. Traffic from the remote office to the datacenter goes through the Switch-R1-DMVPN path. From the datacenter side, since those prefixes will not be known through MPLS VPN, only the DMVPN link is used. So the DMVPN link is used as primary when the failure happens.

Question 5: What happens when the failed MPLS VPN link comes back?

Update: This is the tricky part. R2 receives the datacenter prefixes over the MPLS VPN path via EBGP and also from R1 via EIGRP. When R2 receives the prefixes from R1 as EIGRP routes, those prefixes shouldn't be redistributed on R2 to be sent over the MPLS VPN path. If you don't redistribute them, once the link comes back, the datacenter prefixes will still be received via DMVPN and MPLS VPN and will appear on the office switch as EIGRP external. If you redistribute them on R2, when the link comes back, R2 continues to use the MPLS VPN path, so the switch can do load sharing, or with a metric adjustment you can force MPLS VPN to be used as primary. If R2 is from Cisco, or from another vendor which takes the BGP weight attribute into consideration for best-path selection, then the weight of the redistributed prefixes would be higher than that of the prefixes received through MPLS VPN, so R2 uses the Switch-R1-DMVPN path.

To get a great understanding of SP networks, you can check my newly published "Service Provider Networks Design and Perspective" book. It covers the SP network technologies and also explains a fictitious SP network in detail.
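The reasoning behind Questions 1 and 2 can be condensed into the usual route-selection order: lowest administrative distance first, then metric. The Python sketch below is a simplified model with hypothetical metrics, only to show why running the same EIGRP AS would make the internal routes over DMVPN (AD 90) always beat the external routes redistributed from MPLS VPN (AD 170), while different AS numbers leave the decision to the metric.

```python
# Simplified route selection: lower administrative distance wins, then lower metric.
AD = {"eigrp_internal": 90, "eigrp_external": 170}

def best_route(candidates):
    """candidates: list of (description, route_type, metric) tuples."""
    return min(candidates, key=lambda r: (AD[r[1]], r[2]))

# Same EIGRP AS on LAN and DMVPN: the DMVPN route is internal and wins on AD alone,
# no matter how the metrics are tuned -- the opposite of the 'MPLS VPN primary' goal.
same_as = [
    ("via DMVPN",    "eigrp_internal", 5000),
    ("via MPLS VPN", "eigrp_external", 1000),
]
print(best_route(same_as))      # ('via DMVPN', ...)

# Different AS numbers: both paths show up as external, so the metric decides, and a
# simple metric adjustment makes the MPLS VPN path primary (metrics here are made up).
diff_as = [
    ("via DMVPN",    "eigrp_external", 5000),
    ("via MPLS VPN", "eigrp_external", 1000),
]
print(best_route(diff_as))      # ('via MPLS VPN', ...)
```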

Published - Tue, 26 Nov 2019