Total 254 Blogs

Created by - Orhan Ergun

Multicast PIM Dense Mode vs PIM Sparse Mode

Multicast PIM Dense Mode vs. PIM Sparse Mode is one of the most important topics for every Network Engineer who deploys IP Multicast. These two design options are completely different, and the resulting impact can be very high. In this post, we will look at which one should be used in which situation, and why. Although we will not explain PIM Dense or PIM Sparse Mode in detail in this post, we will look at them very briefly and then compare them for clarity. First of all, you should know that both PIM Dense and PIM Sparse are PIM deployment models.

PIM Dense Mode

PIM Dense Mode works based on push and prune. Multicast traffic is sent everywhere in the network where you enable PIM Dense Mode. This is not necessarily bad. In fact, as network designers, we don't think there is bad technology; every technology has its use cases. If Multicast receivers are everywhere, or in most places in the network, then pushing the traffic everywhere is not a bad thing. When you push, you don't build a shared tree and you don't need to deal with the RP - Rendezvous Point, because the Multicast source is learned automatically. Thus, PIM Dense Mode is considered a push-based control plane, and it is suitable if Multicast receivers are distributed in most places, if not all, in the network. Otherwise, it can be bad from a resource consumption point of view: bandwidth is wasted, and senders and receivers process packets unnecessarily.

PIM Sparse Mode

PIM Sparse Mode doesn't work based on the push model. Receivers signal to the network whichever Multicast Group or Source/Group they are interested in. That's why, if there is no Multicast receiver in some part of the network, Multicast traffic is not sent to that location. There are 3 different deployment models of PIM Sparse Mode.
PIM Sparse Mode Deployment Models

PIM SSM - Source-Specific Multicast
PIM ASM - Any-Source Multicast
PIM Bidir - Bidirectional Multicast

All of these PIM Sparse Mode deployment models work the same way in that Multicast receivers send a join message toward the Multicast Group or the Multicast Source and Group.

Difference between Multicast PIM Sparse Mode and PIM Dense Mode

Although technically there are many differences, from a high-level standpoint the biggest one is that PIM Dense Mode works on a push-based model and PIM Sparse Mode works on a pull-based model. Multicast traffic is sent by the Multicast source everywhere in PIM Dense Mode, but it is sent only to the locations with interested receivers in PIM Sparse Mode. We can therefore say that if there are few receivers, PIM Sparse Mode is more efficient from a resource usage point of view, but if there are receivers everywhere in the network, there is no problem using PIM Dense Mode from a resource usage point of view.
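The push vs. pull difference can be made concrete with a minimal Python sketch. The 6-router topology, the router names, and the assumption that the sparse-mode tree follows shortest paths toward the source are all hypothetical simplifications for illustration, not a protocol implementation:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path between two nodes of an unweighted topology."""
    prev = {src: None}
    q = deque([src])
    while q:
        n = q.popleft()
        if n == dst:
            break
        for nb in adj[n]:
            if nb not in prev:
                prev[nb] = n
                q.append(nb)
    path, n = [], dst
    while n is not None:
        path.append(n)
        n = prev[n]
    return list(reversed(path))

def links_carrying_traffic(adj, source, receivers, dense):
    """Dense mode initially pushes traffic onto every link; sparse mode
    only uses links on the receiver-built trees toward the source."""
    if dense:
        return {frozenset((a, b)) for a in adj for b in adj[a]}
    links = set()
    for r in receivers:
        p = shortest_path(adj, r, source)
        links |= {frozenset(e) for e in zip(p, p[1:])}
    return links

# Hypothetical topology: R1 is the source's first-hop router.
topo = {
    "R1": ["R2", "R3"], "R2": ["R1", "R4"], "R3": ["R1", "R5"],
    "R4": ["R2", "R6"], "R5": ["R3"], "R6": ["R4"],
}
dense = links_carrying_traffic(topo, "R1", ["R6"], dense=True)
sparse = links_carrying_traffic(topo, "R1", ["R6"], dense=False)
print(len(dense), len(sparse))  # 5 3 -> all 5 links flooded vs 3 joined links
```

With a single receiver, sparse mode touches only the 3 links on the join path; dense mode initially floods all 5 links and then has to prune the unused branches.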

Published - Tue, 14 Jun 2022

Created by - Orhan Ergun

How Does Satellite Internet Work?

The orbiting satellite transmits and receives its information to and from a location on Earth called the Network Operations Center (NOC). The NOC is connected to the Internet, so all communications made from the customer location (satellite dish) to the orbiting satellite flow through the NOC before they reach the Internet, and the return traffic from the Internet to the user follows the same path.

How does Satellite Internet work?

Data over satellite travels at the speed of light, which is 186,300 miles per second. The orbiting satellite is 22,300 miles above Earth (this is true for GEO-based satellites). The data must travel this distance 4 times:

1. Computer to satellite
2. Satellite to NOC/Internet
3. NOC/Internet to satellite
4. Satellite to computer

Satellite Adds Latency

This adds a lot of time to the communication. This time is called "latency" or "delay", and it is almost 500 milliseconds. This may not seem like much, but some applications, like financial trading and real-time gaming, don't tolerate latency well. Who wants to pull a trigger and wait half a second for the gun to go off? But latency depends on which orbit the satellite is positioned in. Let's have a look at the different satellite orbits to understand satellite latency and its effect on communication.

Geostationary (GEO) Satellites

Geostationary satellites orbit about 22,300 miles (35,800 kilometers) directly above the equator.

Picture - GEO-Based Satellite Distance

They travel in the same direction as the rotation of the Earth. This gives the satellites the ability to stay in one stationary position relative to the Earth. Communication satellites and weather satellites are often given geostationary orbits so that the satellite antennas that communicate with them do not have to move to track them; they can be pointed permanently at the position in the sky where the satellites stay. The latency of GEO satellites is very high compared to MEO and LEO satellites.
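The ~500 ms figure follows directly from the numbers above. A small Python sketch of the four-leg propagation delay (using the article's approximate speed-of-light and altitude values):

```python
SPEED_OF_LIGHT_MPS = 186_300   # miles per second, as quoted in the text
GEO_ALTITUDE_MILES = 22_300    # GEO orbit altitude above the equator

def round_trip_latency_ms(altitude_miles, legs=4):
    """Propagation delay for the 4 legs: computer -> satellite -> NOC,
    then NOC -> satellite -> computer on the return path."""
    return legs * altitude_miles / SPEED_OF_LIGHT_MPS * 1000

print(f"GEO: {round_trip_latency_ms(GEO_ALTITUDE_MILES):.0f} ms")  # ~479 ms
# A hypothetical 340-mile LEO orbit for comparison:
print(f"LEO: {round_trip_latency_ms(340):.1f} ms")
```

This counts propagation delay only; processing and queuing at the NOC and satellite add more on top, which is why real-world GEO round trips land near or above 500 ms.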
The geostationary orbit is useful for communication applications because ground-based antennas, which must be directed toward the satellite, can operate effectively without the need for expensive equipment to track the satellite's motion. There are hundreds of GEO satellites in orbit today, delivering services ranging from weather and mapping data to distribution of digital video-on-demand, streaming, and satellite TV channels globally. The higher orbit of a GEO satellite means greater signal power loss during transmission compared to a lower orbit.

Medium Earth Orbit (MEO) Satellites

MEO is the region of space around the Earth above low Earth orbit and below geostationary orbit. Historically, MEO constellations have been used for GPS and navigation applications, but in the past five years MEO satellites have been deployed to provide broadband connectivity to service providers, government agencies, and enterprises. Current applications include delivering 4G LTE and broadband to rural, remote, and underserved areas where laying fiber is either impossible or not cost-effective, such as cruise and commercial ships, offshore drilling platforms, backhaul for cell towers, and military sites, among others. In addition, Service Providers are using managed data services from these MEO satellites to quickly restore connectivity in regions where service has been lost due to undersea cable cuts or major storms. MEO satellite constellations can cover the majority of Earth with about eight satellites. Because MEO satellites are not stationary, a constellation of satellites is required to provide continuous service. This means that antennas on the ground need to track the satellites across the sky, which requires ground infrastructure that is more complex than for GEO-based satellites.

Low Earth Orbit (LEO) Satellites

Unlike geostationary satellites, low and medium Earth orbit satellites do not stay in a fixed position in the sky.
Consequently, ground-based antennas cannot be easily locked into communication with any one specific satellite. Low Earth orbit satellites, as their name implies, orbit much closer to Earth. LEO satellites tend to be smaller than GEO satellites, but more of them must orbit together at one time to be effective. Lower orbits tend to have lower latency for time-critical services because of the shorter distance to Earth. It's important to reiterate that many LEO satellites must work together to offer sufficient coverage to a given location. Although many LEO satellites are required, they need less power to operate because they are closer to Earth.

Picture - Low Earth Orbit - LEO Satellite

Choosing more satellites in LEO orbit on less power, or fewer, larger satellites in GEO, is the biggest decision to make here. Due to the high number of satellites required in LEO constellations, LEO satellite systems are expected to have high initial manufacturing and launch costs and more expensive ground hardware compared to GEO.

Published - Tue, 14 Jun 2022

Created by - Orhan Ergun

BGP RTBH - Remotely Triggered Blackholing

BGP RTBH - Remotely Triggered Blackholing has been used for DDoS prevention for a long time by many companies. DDoS - Distributed Denial of Service - attacks have an economic impact. According to an NBC News article, more than 40% of DDoS attacks cost $1 million per day. Remotely Triggered Blackholing is a technique used to mitigate DDoS attacks dynamically. Before RTBH, customers used to call the Operator when there was an attack; Operator NOC engineers would connect to the attacked network, trace the source of the attack, and place filters accordingly, and the attack would go away. This manual operation is open to configuration mistakes, cannot scale in large networks, and services stay down between the attack and the required action.

There are two types of RTBH:

Destination-based RTBH
Source-based RTBH

Let's have a look at both of them in this blog post.

Destination-Based BGP RTBH - Remotely Triggered Blackholing

The first RTBH idea was destination-based RTBH. With this technique, the SP and the customer agree on a discard community. When there is an attack on a server, the victim (customer) advertises the server prefix with the previously agreed community value. When the SP receives the update with that community, the next hop is set to null, so the packets are dropped before reaching the customer link.

Picture - Destination-based RTBH - Remotely Triggered Blackholing

The problem with this approach is that the server will not be reachable from legitimate sources either. The attacker's goal is effectively achieved for that server, but at least the other services might stay up. Also, a customer might change the IP address of the attacked server in DNS, although this might take time to propagate. RFC 3882 covers destination-based RTBH. It is better than manual processing, but it requires pre-configuration of the null route on all edge routers in the SP network.

Source-Based BGP RTBH - Remotely Triggered Blackholing

RFC 5635 brings the idea of source-based RTBH.
Instead of specifying the attacked system's IP address to the SP, the customer informs the SP that they are under attack. By combining uRPF and a discard route (null route) configuration based on the attack source, the DDoS is mitigated (in theory).
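The destination-based mechanism boils down to a route-policy match on the agreed community. Here is a minimal Python sketch of that edge-router logic; the well-known BLACKHOLE community 65535:666 comes from RFC 7999, while the prefixes, the discard next-hop address, and the dictionary route format are hypothetical illustration choices:

```python
BLACKHOLE_COMMUNITY = "65535:666"   # well-known BLACKHOLE community (RFC 7999)
DISCARD_NEXT_HOP = "192.0.2.1"      # address pre-routed to Null0 on every SP edge router

def apply_rtbh_policy(route):
    """Mimic the SP edge route-map: if the customer tagged the prefix with
    the agreed discard community, rewrite the next hop to the null route."""
    if BLACKHOLE_COMMUNITY in route.get("communities", []):
        route = dict(route, next_hop=DISCARD_NEXT_HOP)
    return route

# Victim advertises the attacked server's /32 with the discard community.
victim = {"prefix": "203.0.113.10/32",
          "communities": [BLACKHOLE_COMMUNITY],
          "next_hop": "198.51.100.1"}
print(apply_rtbh_policy(victim)["next_hop"])  # 192.0.2.1 -> dropped at the edge
```

Because the policy only rewrites the next hop, every edge router that imports the tagged prefix starts discarding traffic toward it immediately, which is exactly why the null route must be pre-provisioned network-wide.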

Published - Mon, 13 Jun 2022

Created by - Orhan Ergun

CCIE Service Provider v5.0 What, Why, When?

In this post, we will look at what the CCIE Service Provider v5.0 is, what comes with it, which technologies we need to learn, what the difference between CCIE SP v4 and CCIE SP v5 is, why you should study for CCIE Service Provider v5, when you should study for the CCIE SP exam, and after which certificate you should aim for it.

What is the Cisco CCIE Service Provider v5 Exam?

The CCIE Service Provider v5 lab exam tests skill sets related to Service Provider solutions integration, interoperation, configuration, and troubleshooting in complex networks. CCIE SP v5 is the latest version of the CCIE Service Provider lab exam. When candidates pass this exam, they get their CCIE number. This certification syllabus covers most, if not all, real-life Service Provider network technologies.

What is the difference between CCIE SP v4 and CCIE SP v5?

From the technology standpoint, the biggest difference between the CCIE SP v4.1 and CCIE SP v5.0 exams is the Network Programmability and Automation module. It is 20% of the entire exam, thus very important in the CCIE Service Provider exam. You can access Orhan Ergun's CCIE SP Network Automation and Programmability Course over here.

CCIE SP Network Automation Topics

Design, deploy, and optimize NSO service packages (Yang model, template-based, Python-based, FastMap, reactive FastMap, CLI NEDs, NETCONF NEDs, NSO northbound integration using REST and RESTCONF).
Design NFV orchestration (NFVO) using NSO and ESC in an ETSI NFV architecture.
Design and deploy model-driven telemetry on XR devices (Yang models, gRPC, GPB, device configuration, collection architecture).
Deploy and optimize Ansible playbook scripts that interact with NSO, IOS-XE, and IOS-XR devices.

The Design module is the new section in the CCIE SP v5 Exam

Another big difference between CCIE SP v4 and CCIE SP v5 is the Design module. In the previous exam, there was no Design section.
That was the biggest problem with the CCIE certifications in general: candidates were not tested on whether they know why they do the things they do. Why was not asked; only what and how were covered in the CCIE exams. But similar to the CCIE Enterprise Infrastructure exam, the CCIE SP v5 exam now comes with a 3-hour Design section. Candidates will encounter the business and technical requirements and constraints of a company, and they will translate those requirements and constraints into technical solutions.

How long does it take to study for the CCIE SP v5 Exam?

It depends on the candidate's current level of knowledge, how many hours he/she can spend on study daily, how quickly they absorb the material, and whether they receive professional help in terms of online courses, community, and so on. Basically, if you are a CCNP-level engineer and can spend an average of 2 hours daily, it would take 8-12 months to be ready for the CCIE SP v5 exam. If you spend more time and you have a CCNP-level background in both the traditional technologies and the evolving ones, such as Assurance, Network Programmability, and Automation, then the time could be reduced to only a couple of months, especially if you also receive good CCIE SP v5 training. But make sure the training doesn't just cover the traditional technologies from an operational point of view, but covers both the traditional and the evolving technologies from operational and design points of view.

How much money can a CCIE SP v5 certified Engineer make?

Again, it depends on criteria such as which country they are in, years of experience, which company they work for, and so on.
It can start from 2k USD and can go up to 20k USD based on the above criteria. Those who have the CCIE SP v5 certification will also know most Enterprise network technologies, so they will be able to work not only for Service Providers and Mobile Operators but also for Enterprise companies.

Why should you study for the CCIE SP v5 Exam?

Because the CCIE SP v5 blueprint covers most of the technologies that Service Provider companies use, studying for the CCIE SP exam will also prepare students for real-world SP network environments. And because the Design module of the CCIE Service Provider requires design knowledge, CCIE SP candidates won't only learn practical hands-on Cisco configuration; they will also develop a good design mindset, though this might require professional help. This quote explains it well, I think: Design and Architecture is a thing that you can't find on Google.

What is unique about the CCIE SP v5 Exam?

Network Programmability and Automation is definitely unique as a technology area, and you have to know it very well because a big percentage - 20% - of the CCIE SP exam is based on it. Also, as mentioned above in this post, Design is unique to the CCIE SP v5 exam; no previous version of the CCIE SP exam came with Design questions.

When should you study for the CCIE SP Exam?

You will be able to show that you have not only operational experience but also good design knowledge of SP network technologies. Also, CCIE SP-certified engineers will have a bigger chance of finding a job in the market. Of course, certificate and knowledge should go hand in hand. Last but not least, CCIE SP study is a good starting point for the CCDE exam as well, and the natural path for CCIE SP-certified engineers is the CCDE certificate.

Which Online Training Course for the CCIE SP v5 Exam do you recommend?
Orhan Ergun's CCIE SP v5.0 certification preparation program covers all the CCIE SP v5.0 exam topics from theory, hands-on practice, and DESIGN points of view. Most, if not all, other CCIE SP training courses cover only the practical aspect without the DESIGN part, and they either don't cover, or cover different things than, what the CCIE SP Automation and Programmability part requires. Also check the traditional parts: MVPNs, Segment Routing, and many other technologies should be covered in great detail, with real-life information, for a complete understanding.

Published - Mon, 13 Jun 2022

Created by - Orhan Ergun

BGP-LS BGP Link State - What is it? Why BGP LS is used?

BGP-LS, BGP Link-State, is used to distribute link-state information and traffic engineering attributes from the network nodes to a centralized TE controller. RSVP-TE has been providing resource allocation and LSP setup with the distributed path computation algorithm (CSPF) for decades. It requires topology information from the network, and only link-state IGP protocols such as OSPF and IS-IS can carry the topology information required to set up a shortest path from each node to each destination prefix.

In order to overcome bin packing and deadlock, and to achieve network-wide optimal traffic engineering, centralized controllers have been used for a long time, because with distributed computation for Traffic Engineering, the above issues might arise. RFC 7752 specifies the details of North-Bound Distribution of Link-State and Traffic Engineering (TE) Information Using BGP.

PCE (Path Computation Element) is an SDN controller which provides optimal path computation in multi-area and multi-AS (Autonomous System) deployments. It requires link-state and traffic engineering attributes such as link coloring, SRLG, reserved bandwidth, etc., from the network. Link-state IGP protocols (OSPF, IS-IS) can be used for this purpose, but they are considered chatty and non-scalable; thus BGP, with a new NLRI for link state, was defined to carry IGP link-state information to the controller.

RFC 7752 contains two parts:

A new BGP link-state Network Layer Reachability Information - the BGP-LS NLRI - which defines three objects: links, nodes, and prefixes. We can reconstruct the IGP topology from the combination of node and link objects, while IP prefix objects provide network reachability information.
A new BGP path attribute (the BGP-LS attribute) that encodes properties of link, node, and prefix objects, such as IGP metric information.

We recommend you take a look at this video, which explains the history of BGP-LS, its use case, and its usage in real networks.
Ethan Banks and the inventor of the technology, Hannes Gredler, discuss it in the video: Why does BGP Need Link State? https://www.youtube.com/watch?v=T8okh6pE6lk&t=1737s
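To make the two-part structure of RFC 7752 concrete, here is a minimal Python sketch that reconstructs a topology from node, link, and prefix objects. The feed contents, router names, and attribute keys are hypothetical simplifications of the actual NLRI encodings:

```python
# Toy BGP-LS feed: node, link, and prefix objects, with the link's
# BGP-LS attribute carrying TE properties such as the IGP metric.
nlri_feed = [
    {"type": "node", "node_id": "R1"},
    {"type": "node", "node_id": "R2"},
    {"type": "link", "local": "R1", "remote": "R2",
     "attrs": {"igp-metric": 10, "max-bandwidth": 10e9}},
    {"type": "prefix", "node": "R2", "prefix": "10.0.0.0/24"},
]

def build_topology(feed):
    """Combine node and link objects into a graph, as a controller would,
    and attach prefix objects for reachability information."""
    nodes, links, prefixes = set(), [], {}
    for obj in feed:
        if obj["type"] == "node":
            nodes.add(obj["node_id"])
        elif obj["type"] == "link":
            links.append((obj["local"], obj["remote"], obj["attrs"]))
        elif obj["type"] == "prefix":
            prefixes.setdefault(obj["node"], []).append(obj["prefix"])
    return nodes, links, prefixes

nodes, links, prefixes = build_topology(nlri_feed)
print(sorted(nodes), links[0][2]["igp-metric"], prefixes["R2"])
```

This is the controller's view: nodes plus links give the IGP graph for path computation, and the per-link attributes are what CSPF-style algorithms constrain on.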

Published - Mon, 13 Jun 2022

Created by - Orhan Ergun

Orhan Ergun CCIE Enterprise Infrastructure Course Review 1

I see some people asking what others think about Orhan Ergun's CCIE Enterprise course, so starting today I will share what other people say about us in their blog posts as well, not just on social media. Because blog posts allow them to share more of their thoughts about us, I think this is very valuable feedback for everyone. I would like to start with the website 'samovergre.com'. He is our CCIE Enterprise student, and you can find his CCIE study plan on this page. He shares feedback about our CCIE Enterprise training and the other study materials he uses for his CCIE Enterprise study.

Why Orhan Ergun CCIE Enterprise Infrastructure Course?

One thing that was very important there is that he understood the uniqueness of our CCIE Enterprise training: the design part. Everyone can teach you how to configure routers or routing protocols, but a design mindset is a completely unique thing, and if you have been a Network Engineer for years, you have probably heard about our CCDE training and its success too. Now we continue delivering our design knowledge and experience to our CCIE students as well, and it will help them in their CCIE Enterprise exam and in real life. I would like you to do good research before you decide on CCIE Enterprise, CCIE SP, or CCDE courses, as time is money and you should learn design from a designer. Please don't forget that most of the people who claim to teach Network Design are already Orhan's students.

Note: If you share your study plan or resources that can help people who are studying for their CCIE, reach us by sending an email to [email protected] and let's share them as well.

Published - Mon, 13 Jun 2022

Created by - Orhan Ergun

Why Core or Backbone is used in Networking?

Why is a Core or Backbone used in Networking? Before we start answering this question, let's note that these two terms are used interchangeably. Usually, Service Providers use "Backbone" and Enterprise networks use "Core" terminology, but they are the same thing.

Why is a Network Core Necessary?

The key characteristics of the Core, the backbone part of the network, are:

High-speed connectivity. Today this means hundreds of gigabits, with links usually bundled to increase capacity.
Bringing Internet Gateway, Access, Aggregation, and Datacenter networks together. It connects many different parts of the network and glues them together.
Redundancy and high availability are very important. Redundant physical circuits and devices are very common, because the failure impact is higher in this module than in any other.
Full-mesh or partial-mesh deployment is seen mostly, as these types of topologies provide the most redundancy and the most direct path between different locations.
Commonly known in the Operator community as the Backbone or 'P' layer.

Most Core network deployments in ISP networks are based on a full mesh or partial mesh. The reason for having full-mesh physical connectivity in the Core network is that it provides the most optimal traffic flow and the shortest path between any two locations. But not every network can have a full-mesh architecture, because it is the most expensive design option. Instead, many operators connect their Core/Backbone locations in a partial-mesh model. In the partial-mesh physical connectivity model, not all core locations are connected to each other; only the Core POP locations which have high network traffic demand are connected together. The Core/Backbone provides scalability to Service Provider networks. Without this layer, many Aggregation layers would need to be connected to each other to provide end-to-end connectivity.
This would be too costly, and too many physical links would need to be provisioned. The Core layer reduces the number of circuits required between different Aggregation networks. If cost is a concern, the size is small, and scalability is not a critical consideration, then the network can be designed by collapsing the Aggregation and Core networks into a single layer.
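The cost argument against a full mesh is just combinatorics: connecting n locations to each other needs n*(n-1)/2 circuits. A quick Python sketch (the POP counts are arbitrary examples):

```python
def full_mesh_links(n):
    """A full mesh between n core POPs needs n*(n-1)/2 circuits."""
    return n * (n - 1) // 2

# Circuit count grows quadratically with POP count, which is why larger
# operators usually settle for a partial mesh between high-demand POPs.
for pops in (4, 8, 16):
    print(pops, "POPs ->", full_mesh_links(pops), "circuits")
# 4 -> 6, 8 -> 28, 16 -> 120
```

Doubling the number of POPs roughly quadruples the circuit count, so a partial mesh that keeps only the high-traffic pairs directly connected is the usual compromise.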

Published - Wed, 25 May 2022

Created by - Orhan Ergun

Multicast BIER - Bit Indexed Explicit Replication

Multicast BIER - RFC 8279 Bit Index Explicit Replication - is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast-related per-flow state. BIER also does not require any explicit tree-building protocol for its operation, so it removes the need for PIM, mLDP, RSVP-TE P2MP LSPs, etc.

A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR) and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR adds a BIER header to the packet. The BIER header contains a bit string in which each bit represents exactly one BFER to forward the packet to. The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

Multicast BIER Advantages

The obvious advantage of BIER is that there is no per-flow multicast state in the core of the network and there is no tree-building protocol that sets up trees on demand based on users joining a multicast flow. In that sense, BIER is potentially applicable to many services where multicast is used. Many Service Providers are currently investigating how BIER would be applicable to their networks, what their migration process would be, and which advantages they could get from a BIER deployment. By using the BIER header, multicast is not sent to the nodes that do not need to receive the multicast traffic. That's why multicast follows an optimal path within the BIER domain. Transit nodes don't maintain per-flow state and, as mentioned above, no other multicast protocol is needed. BIER simplifies multicast operation: no dedicated multicast control protocol is needed, while existing protocols such as the IGPs (IS-IS, OSPF) or BGP can be leveraged. BIER uses a new type of forwarding lookup (the Bit Index Forwarding Table), which can be implemented by software or hardware changes.
Hardware upgrade requirements can be a challenge for BIER, but once that is solved, BIER can become the single de facto protocol for multicast.
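The bit-string mechanism can be sketched in a few lines of Python. This is a deliberately simplified model of the Bit Index Forwarding Table on a single transit BFR; the router names, bit assignments, and per-neighbor forwarding bit masks are hypothetical, and real BIER also involves sets, sub-domains, and SI/BSL handling that are omitted here:

```python
# Hypothetical 4-BFER domain: each egress router owns one bit position.
BFER_BITS = {"E1": 0b0001, "E2": 0b0010, "E3": 0b0100, "E4": 0b1000}

# Simplified Bit Index Forwarding Table on one transit BFR: for each
# neighbor, the forwarding bit mask (F-BM) of BFERs reachable via it.
BIFT = {"nbr-A": 0b0011,   # E1 and E2 sit behind neighbor A
        "nbr-B": 0b1100}   # E3 and E4 sit behind neighbor B

def bier_forward(bitstring, bift):
    """Replicate the packet once per neighbor, clearing the bits that
    neighbor cannot reach so each BFER receives exactly one copy."""
    copies = {}
    for nbr, fbm in bift.items():
        masked = bitstring & fbm
        if masked:
            copies[nbr] = masked
    return copies

# The BFIR wants the packet delivered to E1 and E3 only.
pkt = BFER_BITS["E1"] | BFER_BITS["E3"]
print({n: bin(b) for n, b in bier_forward(pkt, BIFT).items()})
# one copy toward nbr-A (E1's bit) and one toward nbr-B (E3's bit)
```

Note that the transit router needed no per-flow state at all: the packet itself carries the full set of destinations, and a bitwise AND per neighbor decides the replication.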

Published - Wed, 25 May 2022

Created by - Orhan Ergun

What is NFV - Network Function Virtualization

Network Functions Virtualization (NFV) was founded by the European Telecommunications Standards Institute (ETSI) with an Industry Specification Group (ISG) that contained seven of the world's leading telecom network operators. A challenge of large-scale telecom networks is the increasing variety of proprietary hardware, and launching new services may demand the installation of yet more new hardware. This requires additional floor space, power, cooling, and more maintenance. With the virtualization technologies that have evolved in this decade, NFV focuses on addressing these telecom problems by implementing network functions in software that can run on server hardware or hypervisors. Furthermore, by using NFV, installing new dedicated equipment is eliminated; service health depends on the underlying servers instead, and the result is lower CAPEX and OPEX.

There are many benefits when operators use NFV in today's networks. One of them is reducing time-to-market when deploying new services, to support changing business requirements and market opportunities. Decoupling physical network equipment from the functions that run on it helps telecom companies consolidate network equipment onto the servers, storage, and switches that are in data centers. In the NFV architecture, the component responsible for handling a specific network function (e.g. IPsec/SSL VPN) that runs in one or more VMs is the Virtual Network Function (VNF).

NFV Infrastructure

Figure 1 - NFV Infrastructure

As Figure 1 depicts, the whole system of NFV, containing physical and virtual components, is called the NFV Infrastructure (NFVI). The NFVI can differ based on the deployment and the vision of a service provider. For example, the NFVI can be built upon Docker, a hypervisor, or a mix of both.

Service Provider NFV Deployment

Service Providers may use their own OSS/BSS to provision their infrastructure and boost service hosting for their customers and users.
Based on this approach, there should be other protocols and components that help Service Providers build their end-to-end, fully automated services using NFV. To meet this demand, ETSI released a framework that shows the functional blocks and reference points of the NFV architecture. The main reference points and execution reference points are shown by solid lines and are in the scope of NFV; these are potential targets for standardization. The dotted reference points are available in present deployments but might need extensions to handle network function virtualization; however, they are not the main focus of NFV at present. Figure 2 illustrates the ETSI NFV framework architecture, taken from the ETSI document.

Figure 2 - ETSI NFV Framework

A key component in the NFV architectural framework is the virtualization layer. This layer abstracts and logically partitions physical hardware resources and anchors between the VNF and the underlying virtualized infrastructure. The primary tool to realize the virtualization layer is the hypervisor, and the NFV architectural framework should accommodate a diverse range of hypervisors. On top of such a virtualization layer, the primary means of VNF deployment is instantiating it in one or more VMs. Therefore, the virtualization layer shall provide open and standard interfaces towards the hardware resources as well as towards the VNF deployment container (e.g. VMs), in order to ensure independence among the hardware resources, the virtualization layer, and the VNF instances. VNF portability shall be supported over such a heterogeneous virtualization layer. The decoupling of a VNF from the underlying hardware resources presents new management challenges.
Such challenges include mapping end-to-end services onto the NFV network, instantiating VNFs at appropriate locations to realize the intended service, allocating and scaling hardware resources for the VNFs, keeping track of VNF instances' locations, etc. Such decoupling also presents challenges in detecting faults and correlating them for a successful recovery across the network. These challenges need to be addressed when designing NFV Management and Orchestration. In order to perform its task, NFV Management and Orchestration should work with existing management systems such as the OSS/BSS, the hardware resource management system, the CMS used as a Virtualized Infrastructure Manager, etc., and augment their ability to manage virtualization-specific issues. Also, SDN (Software-Defined Networking) can bring agility and lower provisioning times to the network alongside NFV.
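One of the management challenges above, allocating and scaling hardware resources to VNFs, can be illustrated with a toy Python sketch. The first-fit placement policy, the host names, and the vCPU counts are all hypothetical; real NFV MANO stacks use far richer resource models and affinity constraints:

```python
class Server:
    """A compute node in the NFVI with a fixed vCPU capacity."""
    def __init__(self, name, vcpus):
        self.name, self.free = name, vcpus
        self.vnfs = []

def instantiate_vnf(servers, vnf_name, vcpus):
    """First-fit placement: a toy stand-in for the MANO step that
    allocates NFVI hardware resources to a new VNF instance."""
    for s in servers:
        if s.free >= vcpus:
            s.free -= vcpus
            s.vnfs.append(vnf_name)
            return s.name
    raise RuntimeError("no NFVI capacity for " + vnf_name)

nfvi = [Server("host-1", 8), Server("host-2", 8)]
print(instantiate_vnf(nfvi, "vFirewall", 6))  # host-1
print(instantiate_vnf(nfvi, "vRouter", 4))    # host-2 (host-1 has only 2 vCPUs left)
```

Even this trivial policy shows why the orchestrator must track per-instance locations and remaining capacity: once host-1 is nearly full, the next VNF lands elsewhere, and recovery after a host failure means replaying these placement decisions.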

Published - Tue, 24 May 2022