Created by - Stanley Arvey
If you're like most people, you take the convenience of the internet for granted. You probably don't think about how the addresses used to route your packets to their destinations are resolved. In this blog post, we'll take a closer look at the Address Resolution Protocol (ARP), what it is, and how it works. We'll also discuss some of the security implications of ARP and ways to protect yourself against them. Stay tuned!

What Is ARP?

ARP (Address Resolution Protocol) is a communication protocol used to map a 32-bit IPv4 address to a 48-bit MAC address (the reverse mapping is handled by the related RARP protocol). It is used on Ethernet and WiFi networks, and it is also supported by many other network types. It is a crucial part of how IP addressing works, and it is responsible for building and maintaining the address mappings that devices use on a local network. When two devices on a network need to communicate, they first use ARP to resolve each other's addresses. ARP is part of the TCP/IP stack, and it is used by almost all modern networking devices. By understanding how it works, you can troubleshoot many networking problems.

How Does ARP Work?

The Address Resolution Protocol (ARP) is fundamental to Internet communication. It is used to map a network address, such as an IP address, to a physical interface, such as a NIC. This mapping is necessary because on a local network, frames can only be delivered to physical interfaces using their MAC addresses. ARP complements the IP protocol by providing a way to determine the MAC address of a device when all that is known is its IP address. When a device wants to send a packet to another device on the same network, it first looks up the destination IP address in its ARP table, a data structure used by network devices to store mappings of IP addresses to physical MAC addresses. If no entry is found, ARP sends out a broadcast message (an ARP request) that contains the IP address of the target device.
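The lookup-then-broadcast flow described above can be sketched as a toy model. This is a rough illustration, not a real ARP implementation: the Device class, the addresses, and the way the "broadcast" is simulated with a simple loop are all invented for this example.

```python
# Hypothetical sketch of the ARP resolution flow: check the cache first,
# broadcast a request if needed, and cache the reply for next time.

class Device:
    def __init__(self, ip, mac):
        self.ip = ip
        self.mac = mac
        self.arp_cache = {}          # IP -> MAC mappings learned so far

    def resolve(self, target_ip, network):
        """Return the MAC for target_ip, broadcasting an ARP request if needed."""
        if target_ip in self.arp_cache:          # cache hit: no broadcast needed
            return self.arp_cache[target_ip]
        for device in network:                   # broadcast: every device compares
            if device.ip == target_ip:           # only the owner of the IP replies
                self.arp_cache[target_ip] = device.mac
                return device.mac
        return None                              # no reply: host is unreachable

a = Device("192.168.1.10", "aa:aa:aa:aa:aa:aa")
b = Device("192.168.1.20", "bb:bb:bb:bb:bb:bb")
lan = [a, b]

print(a.resolve("192.168.1.20", lan))   # triggers the request/reply exchange
print(a.arp_cache)                      # the reply is now cached
```

A second call to `resolve` for the same IP returns immediately from the cache, which is exactly why repeated traffic to the same neighbor doesn't flood the LAN with broadcasts.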
An ARP request is a packet used to discover the Media Access Control (MAC) address of a specific computer on a local area network (LAN). The request is sent to all computers on the LAN; each one compares the requested IP address to its own, and the device with the matching IP address responds with an ARP reply carrying its MAC address. That MAC address is then added to the sender's ARP cache. From then on, whenever the sender needs to communicate with the target device, it can look up the MAC address in its cache and send data directly. This process happens automatically and is transparent to users. Thanks to this protocol, we are able to communicate seamlessly with devices on our local network without having to worry about their MAC addresses. For further information, you can read this ARP configuration guide by Cisco.

Security Implications of ARP and Protection Methods

Address Resolution Protocol (ARP) is a powerful tool that can be used for both legitimate and malicious purposes. When used correctly, it helps the network function smoothly. However, it can also be exploited to execute denial-of-service attacks, insert false entries into the ARP cache, and sniff network traffic. As a result, it is essential to be aware of the potential security implications of the protocol. By understanding the risks associated with ARP, administrators can take steps to mitigate them and protect their networks.

Denial-of-Service Attacks: A denial-of-service attack occurs when an attacker sends a large number of false ARP messages to a target device. This flooding of the ARP cache causes the target to become overloaded and unable to process legitimate traffic. As a result, the target is effectively cut off from the network.
While denial-of-service attacks are generally challenging to carry out, they can be devastating in terms of their impact. The good news is that there are a number of steps you can take to protect yourself from this type of attack. First, make sure that your devices are running the latest software and security patches. Second, consider using anti-spoofing measures such as static ARP entries or port security. Finally, make sure that your network is segmented correctly and that critical devices are placed on separate subnets. Taking these precautions can help ensure that your network is better protected against denial-of-service attacks.

Spoofing: Spoofing allows the attacker to redirect traffic or perform other man-in-the-middle attacks. It is relatively easy to carry out and can be difficult to detect. As a result, it is a serious threat to network security. To protect against spoofing attacks, organizations should implement security measures such as port security and MAC (Media Access Control) filtering. In addition, users should be aware of the risks posed by spoofing and take steps to protect their own devices. By understanding and mitigating these risks, organizations can help to ensure the security of their networks.

Man-in-the-Middle Attacks: A man-in-the-middle attack occurs when an attacker intercepts traffic between two victims and impersonates each victim to the other. The attacker can then read, alter, or even inject data into the communication. One way to protect yourself from man-in-the-middle attacks is to use a VPN, which encrypts your traffic and makes it more difficult for attackers to sniff or tamper with your data. You can also use a firewall to filter traffic and prevent unwanted ARP requests from reaching your computer. Finally, make sure that you have the latest security patches installed on your system to close any vulnerabilities that attackers could exploit.
By taking these precautions, you can help to keep your data safe from man-in-the-middle attacks.

Final Words

ARP is an important aspect of networking that you should be aware of. By understanding how it works, you can troubleshoot networking issues, optimize your network performance, and protect yourself from several kinds of attacks. Start today and check out our IT courses about ARP and more.
Published - Sun, 09 Oct 2022
Created by - Stanley Arvey
BGP (Border Gateway Protocol) is a widely used routing protocol that plays an important role in the internet's infrastructure. One of the key functions of BGP is to select the best path to route packets to their destination. In this article, we'll provide a quick guide on BGP path selection and some of the factors that it takes into account.

What Is BGP Path Selection?

The path a packet takes through the network is determined by the routing protocol in use. The Border Gateway Protocol (BGP) is a routing protocol used to exchange routing information between different autonomous systems (AS). BGP path selection is the process of determining which route to take when there are multiple routes to the same destination. The route that is selected must meet certain criteria, such as having the shortest AS path or the most preferred policy attributes. BGP path selection can be difficult to configure, but it is essential for ensuring that packets are routed efficiently through the network.

What Is the Importance of BGP Path Selection?

The Border Gateway Protocol (BGP) is a critical part of the Internet's infrastructure. It helps to route traffic between different networks and ensures that data packets are delivered to their intended destination. BGP path selection is a key part of this process, and it is essential for ensuring that traffic is routed efficiently and effectively. Several attributes contribute to BGP path selection, and by setting them carefully, operators can influence which path traffic takes. As the Internet continues to grow and evolve, BGP path selection will become even more important. With billions of devices now connected to the Internet, it is essential for routers to be able to quickly and reliably find the best path for each data packet. By understanding the importance of BGP path selection, we can ensure that the Internet continues to function effectively.
How Does BGP Path Selection Work?

The Border Gateway Protocol (BGP) is the standard exterior gateway protocol used to route traffic on the Internet. BGP path selection is the process of choosing the best route for traffic between two BGP-speaking routers. When multiple routes to the same destination exist, BGP compares their attributes in a fixed order, including:

Weight: BGP prefers the route with the highest weight, a Cisco-specific value that is local to the router.
Local preference: BGP prefers the route with the highest local preference, which administrators set to express routing policy within an AS.
AS path length: BGP prefers routes with shorter AS paths, as fewer AS hops typically mean a more direct path.
Origin code: BGP prefers routes with a lower origin code (IGP is preferred over EGP, and EGP over Incomplete).
MED: Among routes received from the same neighboring AS, BGP prefers the lowest Multi-Exit Discriminator.
eBGP over iBGP: BGP prefers externally learned routes over internally learned ones.
IGP metric: BGP prefers the route with the lowest IGP cost to the BGP next hop.

BGP path selection is a complex process, and it is constantly evolving to adapt to changing conditions on the Internet. However, by understanding the basics of how BGP works, you can ensure that your traffic is always routed along the best possible path.

Final Words

BGP path selection is a crucial technology for the internet. It allows routers to select the best path to send traffic along, and it's essential for ensuring that packets reach their destination quickly and efficiently. We hope this guide has helped you understand how BGP works and why it's so important. If you have any questions or would like more information, please don't hesitate to contact us. You can also check out our courses that cover BGP topics: CCIE Enterprise Infrastructure and CCDE v3 Certification Training.
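As a recap, the ordered comparison of the first few Cisco-style best-path attributes (weight, local preference, AS path length, origin code, MED) can be sketched as a toy comparator. The dictionary field names here are illustrative, not any vendor's API, and real implementations evaluate additional tie-breaking steps (eBGP vs iBGP, IGP metric to the next hop, router ID).

```python
# Simplified sketch of an ordered BGP best-path comparison.

ORIGIN_RANK = {"igp": 0, "egp": 1, "incomplete": 2}   # lower is better

def best_path(routes):
    """Pick the best route from a list of dicts describing candidate paths."""
    return min(
        routes,
        key=lambda r: (
            -r.get("weight", 0),            # 1. highest weight wins
            -r.get("local_pref", 100),      # 2. highest local preference wins
            len(r["as_path"]),              # 3. shortest AS path wins
            ORIGIN_RANK[r.get("origin", "incomplete")],  # 4. lowest origin code
            r.get("med", 0),                # 5. lowest MED wins
        ),
    )

candidates = [
    {"as_path": [65001, 65002, 65003], "local_pref": 100, "origin": "igp"},
    {"as_path": [65010, 65003], "local_pref": 100, "origin": "igp"},
    {"as_path": [65020, 65021, 65003], "local_pref": 200, "origin": "igp"},
]
print(best_path(candidates)["as_path"])   # [65020, 65021, 65003]
```

Note how the route with local preference 200 wins even though its AS path is longer than the second candidate's: each step in the ordered comparison is only consulted when all earlier steps tie.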
Published - Sun, 09 Oct 2022
Created by - Orhan Ergun
Multicast PIM Dense mode vs PIM Sparse mode is one of the most important topics for every Network Engineer who deploys IP Multicast on their network, because these two design options are completely different and the resulting impact can be very high. In this post, we will look at which situation calls for which one, and why. Although we will not explain PIM Dense or PIM Sparse mode in detail in this post, we will look at them very briefly and then compare them for clarity. First of all, you should know that both PIM Dense and PIM Sparse are PIM deployment models.

PIM Dense Mode

PIM Dense mode works based on push and prune. Multicast traffic is sent everywhere in the network where you enable PIM Dense mode. This is not necessarily bad. In fact, as network designers, we don't think there is such a thing as bad technology; every technology has its use cases. If multicast receivers are everywhere, or in most places in the network, then pushing the traffic everywhere is not a bad thing. When you push, you don't build a shared tree and you don't need to deal with the RP (Rendezvous Point), because the multicast source is learned automatically. Thus, PIM Dense mode is considered a push-based control plane, and it is suitable if multicast receivers are distributed across most, if not all, of the network. Otherwise, it can be bad from a resource consumption point of view: bandwidth is wasted, and senders and receivers process packets unnecessarily.

PIM Sparse Mode

PIM Sparse mode doesn't work based on the push model. Receivers signal to the network whichever Multicast Group, or Source/Group, they are interested in. That's why, if there is no multicast receiver in some part of the network, multicast traffic is not sent to that location. There are 3 different deployment models of PIM Sparse Mode.
PIM Sparse Mode Deployment Models

PIM SSM - Source-Specific Multicast
PIM ASM - Any Source Multicast
PIM Bidir - Bidirectional Multicast

All of these PIM Sparse mode deployment models work in the same way in that multicast receivers send a join message toward the Multicast Group, or the Multicast Source and Group.

Difference between Multicast PIM Sparse Mode vs PIM Dense Mode

Although technically there are many differences, from a high-level standpoint the biggest one is that PIM Dense mode works on a push-based model and PIM Sparse mode works on a pull-based model. In PIM Dense mode, multicast traffic is sent by the multicast source everywhere in the network; in PIM Sparse mode, multicast traffic is sent only to the locations where there are interested receivers. We can therefore say that if there are few receivers, PIM Sparse mode is more efficient from a resource usage point of view, but if there are receivers everywhere in the network, there is no problem using PIM Dense mode from a resource usage point of view.
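The push vs pull trade-off can be illustrated with a toy link-count model. The numbers and the function are invented for this sketch; they are not from any real deployment, but they show why receiver distribution drives the mode choice.

```python
# Toy model of the PIM Dense (push) vs PIM Sparse (pull) trade-off.

def links_carrying_traffic(total_links, links_toward_receivers, mode):
    """Estimate how many links carry multicast traffic in each PIM mode."""
    if mode == "dense":
        return total_links                 # push: flooded everywhere, then pruned
    if mode == "sparse":
        return links_toward_receivers      # pull: only joined branches carry traffic
    raise ValueError(mode)

# Few receivers: sparse mode touches far fewer links.
print(links_carrying_traffic(100, 5, "sparse"))   # 5
print(links_carrying_traffic(100, 5, "dense"))    # 100

# Receivers everywhere: both modes use roughly the whole topology,
# so dense mode's flooding is no longer a real disadvantage.
print(links_carrying_traffic(100, 98, "sparse"))  # 98
```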
Published - Tue, 14 Jun 2022
Created by - Orhan Ergun
The orbiting satellite transmits and receives its information to and from a location on Earth called the Network Operations Center (NOC). The NOC is connected to the Internet, so all communications made from the customer location (satellite dish) to the orbiting satellite flow through the NOC before they reach the Internet, and the return traffic from the Internet to the user follows the same path.

How does Satellite Internet work?

Data over satellite travels at the speed of light, which is 186,300 miles per second. The orbiting satellite is 22,300 miles above Earth (this is true for GEO-based satellites). The data must travel this distance 4 times: 1. Computer to satellite 2. Satellite to NOC/Internet 3. NOC/Internet to satellite 4. Satellite to computer

Satellite Adds Latency

This adds a lot of time to the communication. This time is called latency, or delay, and it is almost 500 milliseconds. That may not seem like much, but some applications, like financial trading and real-time gaming, don't tolerate latency well. Who wants to pull a trigger and wait half a second for the gun to go off? But latency depends on which orbit the satellite is positioned in. Let's have a look at the different satellite orbits to understand satellite latency and its effect on communication.

Geostationary (GEO) Satellites

Geostationary satellites orbit Earth about 22,300 miles (35,800 kilometers) directly above the equator.

Picture - GEO-Based Satellite Distance

They travel in the same direction as the rotation of the Earth. This gives the satellites the ability to stay in one stationary position relative to the Earth. Communication satellites and weather satellites are often given geostationary orbits so that the satellite antennas that communicate with them do not have to move to track them; they can be pointed permanently at the position in the sky where the satellites stay. The latency of GEO satellites is very high compared to MEO and LEO satellites.
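The "almost 500 milliseconds" figure can be reproduced directly from the numbers given above (satellite altitude, speed of light, and four legs of travel); this only accounts for propagation delay, not processing or queuing time.

```python
# Reproducing the GEO propagation latency from the figures in the text.

SPEED_OF_LIGHT_MPS = 186_300        # miles per second (from the text)
GEO_ALTITUDE_MILES = 22_300         # GEO altitude above Earth (from the text)
LEGS = 4                            # computer -> sat -> NOC, then NOC -> sat -> computer

latency_seconds = LEGS * GEO_ALTITUDE_MILES / SPEED_OF_LIGHT_MPS
print(f"{latency_seconds * 1000:.0f} ms")   # 479 ms, close to the quoted 500 ms
```

The real-world number is slightly higher because the signal rarely travels straight up and down, and ground equipment adds its own processing delay.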
The geostationary orbit is useful for communication applications because ground-based antennas, which must be directed toward the satellite, can operate effectively without the need for expensive equipment to track the satellite's motion. There are hundreds of GEO satellites in orbit today, delivering services ranging from weather and mapping data to distribution of digital video-on-demand, streaming, and satellite TV channels globally. The higher orbit of a GEO-based satellite also means greater signal power loss during transmission when compared to a lower orbit.

Medium Earth Orbit (MEO) Satellites

MEO is the region of space around the Earth above low Earth orbit and below geostationary orbit. Historically, MEO constellations have been used for GPS and navigation applications, but in the past five years, MEO satellites have been deployed to provide broadband connectivity to service providers, government agencies, and enterprises. Current applications include delivering 4G LTE and broadband to rural, remote, and underserved areas where laying fiber is either impossible or not cost-effective, such as cruise and commercial ships, offshore drilling platforms, backhaul for cell towers, and military sites, among others. In addition, service providers are using managed data services from these MEO satellites to quickly restore connectivity in regions where service has been lost due to undersea cable cuts or major storms.

MEO satellite constellations can cover the majority of Earth with about eight satellites. Because MEO satellites are not stationary, a constellation of satellites is required to provide continuous service. This also means that antennas on the ground need to track the satellites across the sky, which requires ground infrastructure that is more complex than for GEO-based satellites.

Low Earth Orbit (LEO) Satellites

Unlike geostationary satellites, low and medium Earth orbit satellites do not stay in a fixed position in the sky.
Consequently, ground-based antennas cannot be easily locked into communication with any one specific satellite. Low Earth orbit satellites, as their name implies, orbit much closer to Earth. LEOs tend to be smaller in size compared to GEO satellites, but more LEO satellites must orbit together at one time to be effective. Lower orbits tend to have lower latency for time-critical services because of the closer distance to Earth. It's important to reiterate that many LEO satellites must work together to offer sufficient coverage to a given location. Although many LEOs are required, each requires less power to operate because it is closer to Earth.

Picture - Low Earth Orbit - LEO Satellite

Choosing to go with more satellites in LEO orbit on less power, or fewer, larger satellites in GEO, is the biggest decision to make here. Due to the high number of satellites required in LEO constellations, LEO satellite systems are expected to have high initial manufacturing and launch costs, and more expensive ground hardware compared to GEO.
Published - Tue, 14 Jun 2022
Created by - Orhan Ergun
BGP RTBH (Remotely Triggered Blackholing) has been used for DDoS prevention for a long time by many companies. DDoS (Distributed Denial of Service) attacks have an economic impact: according to an NBC News article, more than 40% of DDoS attacks cost $1 million per day. Remotely Triggered Blackholing is a technique used to mitigate DDoS attacks dynamically. Before RTBH, customers used to call the operator when there was an attack; operator NOC engineers would connect to the attacked network, trace the source of the attack, and place filters accordingly, and the attack would go away. Manual operation is open to configuration mistakes, cannot scale in large networks, and between the attack and the required action, services stay down.

There are two types of RTBH:

Destination-based RTBH
Source-based RTBH

Let's have a look at both of them in this blog post.

Destination-Based BGP RTBH - Remotely Triggered Blackholing

The first RTBH idea was destination-based RTBH. With this technique, the SP and the customer agree on a discard community. When there is an attack on a server, the victim (customer) announces the server prefix tagged with the previously agreed community value. When the SP receives the update with that community, a policy sets the next hop to a null route, so the packets are dropped before reaching the customer link.

Picture - Destination-based RTBH - Remotely Triggered Blackholing

The problem with this approach is that the server becomes unreachable from legitimate sources too. The attacker's goal is effectively achieved, but at least the other services stay up. A customer might also change the IP address of the attacked server in DNS, though this change takes time to propagate. RFC 3882 covers destination-based RTBH. It is better than manual processing, but it requires pre-configuration of the null route on all edge routers in the SP network.

Source-Based BGP RTBH - Remotely Triggered Blackholing

RFC 5635 brings the idea of source-based RTBH.
Instead of announcing the attacked system's IP address to the SP, the customer signals the source addresses of the attack. By combining uRPF with a discard (null) route configuration, traffic from those attack sources is dropped and the DDoS is mitigated (in theory).
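The destination-based mechanism described above can be sketched as a small policy function: any announcement carrying the agreed blackhole community gets its next hop rewritten to a discard address that edge routers statically route to null. The community value, prefixes, and addresses below are hypothetical examples, not real configuration.

```python
# Illustrative sketch of destination-based RTBH policy logic.

BLACKHOLE_COMMUNITY = "65000:666"   # value agreed between SP and customer (example)
DISCARD_NEXT_HOP = "192.0.2.1"      # statically routed to null on all edge routers

def apply_rtbh_policy(bgp_update):
    """Rewrite the next hop of updates carrying the blackhole community."""
    if BLACKHOLE_COMMUNITY in bgp_update.get("communities", []):
        bgp_update["next_hop"] = DISCARD_NEXT_HOP   # traffic is dropped at the edge
    return bgp_update

victim_announcement = {
    "prefix": "203.0.113.10/32",                # the attacked server
    "communities": [BLACKHOLE_COMMUNITY],
    "next_hop": "203.0.113.1",
}
print(apply_rtbh_policy(victim_announcement)["next_hop"])   # 192.0.2.1
```

Updates without the community pass through unchanged, which is why only the victim prefix, and nothing else, is blackholed.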
Published - Tue, 14 Jun 2022
Created by - Orhan Ergun
In this post, we will look at what CCIE Service Provider v5.0 is, what comes with it, which technologies we need to learn, what the difference is between CCIE SP v4 and CCIE SP v5, why you should study for CCIE Service Provider v5, when you should study for the CCIE SP exam, and after which certificate you should aim for it.

What is the Cisco CCIE Service Provider v5 Exam?

The CCIE Service Provider v5 lab exam tests skill sets related to Service Provider solution integration, interoperation, configuration, and troubleshooting in complex networks. CCIE SP v5 is the latest version of the CCIE Service Provider lab exam. When candidates pass this exam, they get their CCIE number. This certification syllabus covers most, if not all, real-life Service Provider network technologies.

What is the difference between CCIE SP v4 and CCIE SP v5?

From the technology standpoint, the biggest difference between the CCIE SP v4.1 and CCIE SP v5.0 exams is the Network Programmability and Automation module. It is 20% of the entire exam and thus very important. The CCIE SP Network Automation topics are as below:

CCIE SP Network Automation Topics

Design, deploy, and optimize NSO service packages (YANG models, template-based, Python-based, FASTMAP, reactive FASTMAP, CLI NEDs, NETCONF NEDs, NSO northbound integration using REST and RESTCONF).
Design NFV orchestration (NFVO) using NSO and ESC in an ETSI NFV architecture.
Design and deploy model-driven telemetry on XR devices (YANG models, gRPC, GPB, device configuration, collection architecture).
Deploy and optimize Ansible playbook scripts that interact with NSO, IOS-XE, and IOS-XR devices.

The Design module is the other new section in the CCIE SP v5 Exam

Another big difference between CCIE SP v4 and CCIE SP v5 is the Design module. In the previous exam, there was no Design section, which was the biggest problem with the CCIE certifications in general.
Candidates were not tested on whether they know why they do the things they do. "Why" was not asked; only "what" and "how" were covered in the CCIE exams. But similar to the CCIE Enterprise Infrastructure exam, the CCIE SP v5 exam now comes with a 3-hour Design section. Candidates will encounter the business and technical requirements and constraints of a company, and they will translate those requirements and constraints into technical solutions.

How long does it take to study for the CCIE SP v5 Exam?

It depends on the candidate's current level of knowledge, how many hours he or she can spend studying on a daily basis, how quickly they absorb material, and whether they receive professional help in the form of online courses, a community, and so on. Basically, if you are a CCNP-level engineer and can spend an average of 2 hours daily, it would take around 8-12 months to be ready for the CCIE SP v5 exam. If you spend more time, and you have a CCNP-level background in both the traditional technologies and the evolving ones such as Assurance, Network Programmability, and Automation, then the time could be reduced to only a couple of months, especially if you receive good CCIE SP v5 training too. But make sure the training doesn't just cover the traditional technologies from an operational point of view; it should cover both the traditional and the evolving technologies from an operational and a design point of view.

How much money can a CCIE SP v5 certified engineer make?

Again, it depends on criteria such as which country, years of experience, which company they work for, and so on. It can start from 2k USD and go up to 20k USD based on the above criteria. But those who have the CCIE SP v5 certification will know most Enterprise network technologies as well, so they will be able to work not only for Service Providers and Mobile Operators but also for Enterprise companies.

Why should you study for the CCIE SP v5 Exam?
Because the CCIE SP v5 blueprint covers most of the technologies that Service Provider companies use, studying for the CCIE SP exam prepares students for real-world SP network environments as well. And because the Design module of the CCIE Service Provider exam requires design knowledge, CCIE SP candidates won't only learn practical hands-on Cisco configuration; they will also develop a good design mindset, though this may require professional help. This quote explains it well, I think: design and architecture are things that you can't find on Google.

What is unique about the CCIE SP v5 Exam?

Network Programmability and Automation is definitely unique as a technology area, and you have to know it very well, because a big percentage of the CCIE SP exam, 20%, is based on it. Also, as mentioned above in this post, the Design section is unique to the CCIE SP v5 exam; no previous version of the CCIE SP exam came with Design questions.

When should you study for the CCIE SP Exam?

You will be able to show that you have not only operational experience but also good design knowledge of SP network technologies. CCIE SP-certified engineers will also have a better chance of finding a job in the market; of course, certificate and knowledge should go hand in hand. Last but not least, CCIE SP study is a good starting point for the CCDE exam as well, and the natural path for CCIE SP certified engineers is the CCDE certificate.

Which Online Training Course for the CCIE SP v5 Exam do you recommend?

Orhan Ergun's CCIE SP v5.0 certification preparation program covers all the CCIE SP v5.0 exam topics from theory, hands-on practice, and DESIGN points of view. Most, if not all, other CCIE SP training courses only cover the practical aspect without the DESIGN part, and they either don't cover, or cover different things than, what the CCIE SP Automation and Programmability part requires.
Also check the traditional parts: MVPNs, Segment Routing, and many other technologies should be covered in great detail, and from a real-life point of view too, for a complete understanding.
Published - Mon, 13 Jun 2022
Created by - Orhan Ergun
BGP-LS (BGP Link-State) is used to distribute link-state information and traffic engineering attributes from the network nodes to a centralized TE controller. RSVP-TE has been providing resource allocation and LSP setup with a distributed path computation algorithm (CSPF) for decades. It requires topology information from the network, and only link-state IGP protocols such as OSPF and IS-IS can carry the topology information required to compute a shortest path from each node to each destination prefix. To overcome bin packing, deadlock, and network-wide optimal traffic engineering problems, centralized controllers have been used for a long time, because with distributed computation for traffic engineering, these issues can arise. RFC 7752 specifies the details of North-Bound Distribution of Link-State and Traffic Engineering (TE) Information Using BGP.

A PCE (Path Computation Element) is an SDN controller that provides optimal path computation in multi-area and multi-AS (Autonomous System) deployments. It requires link-state and traffic engineering attributes, such as link coloring, SRLG, and reserved bandwidth, from the network. Link-state IGP protocols (OSPF, IS-IS) could be used for this purpose, but they are considered chatty and non-scalable for this role; thus BGP, with a new NLRI for link state, was defined to carry IGP link-state information to the controller.

RFC 7752 contains two parts:

A new BGP link-state Network Layer Reachability Information format. The BGP-LS NLRI defines three objects: links, nodes, and prefixes. We can reconstruct the IGP topology from the combination of node and link objects, while IP prefix objects provide network reachability information.
A new BGP path attribute (the BGP-LS attribute) that encodes properties of link, node, and prefix objects, such as IGP metric information.

We recommend you take a look at this video, which explains the history of BGP-LS, its use cases, and its usage in real networks.
Ethan Banks and the inventor of the technology, Hannes Gredler, discuss it in the video: Why does BGP Need Link State?
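The three BGP-LS object types from RFC 7752 (nodes, links, prefixes) can be sketched roughly as data structures to show how a controller rebuilds the IGP topology from them. The field names here are heavily simplified inventions for illustration; the real NLRI encoding is binary TLVs, and the attributes ride in the separate BGP-LS path attribute.

```python
# Rough sketch of BGP-LS node/link/prefix objects and topology reconstruction.
from dataclasses import dataclass

@dataclass
class NodeNLRI:
    router_id: str

@dataclass
class LinkNLRI:
    local_node: str
    remote_node: str
    igp_metric: int          # in reality carried in the BGP-LS attribute

@dataclass
class PrefixNLRI:
    node: str                # the node advertising reachability
    prefix: str

# A controller reconstructs the IGP topology from node + link objects,
# while prefix objects tell it which destinations hang off which node.
nodes = [NodeNLRI("R1"), NodeNLRI("R2")]
links = [LinkNLRI("R1", "R2", igp_metric=10)]
prefixes = [PrefixNLRI("R2", "10.0.0.0/24")]

topology = {(l.local_node, l.remote_node): l.igp_metric for l in links}
print(topology)   # {('R1', 'R2'): 10}
```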
Published - Mon, 13 Jun 2022
Created by - Orhan Ergun
I see some people have been asking what others think about Orhan Ergun's CCIE Enterprise course, so starting today I will share what people say about us in their blog posts as well, not just on social media. Because blog posts allow people to share more of their thoughts about us, I think they are very valuable feedback for everyone. I would like to start with the website 'samovergre.com'. He is our CCIE Enterprise student, and you can find his CCIE study plan on this page. He shares feedback about our CCIE Enterprise training and the other study materials he uses for his CCIE Enterprise study.

Why Orhan Ergun CCIE Enterprise Infrastructure Course?

One thing that was very important there was that he understands the uniqueness of our CCIE Enterprise training: the design part. Everyone can teach you how to configure routers or routing protocols, but a design mindset is a completely unique thing, and if you have been a Network Engineer for years, you have probably heard about our CCDE training and its success too. Now we continue delivering our design knowledge and experience to our CCIE students as well, and it will help them in their CCIE Enterprise exam as well as in real life. I would like you to do good research before you decide on CCIE Enterprise, CCIE SP, or CCDE courses, as time is money, and you should learn design from a designer. Please don't forget that most of the people who teach Network Design are already Orhan's students.

Note: If you have a study plan or resources that can help people who are studying for their CCIE, reach us by sending an email to [email protected] and let's share them as well.
Published - Mon, 13 Jun 2022
Created by - Orhan Ergun
Before we start explaining this question, let's note that these two terms are used interchangeably. Usually, Service Providers use the term Backbone and Enterprise networks use Core, but they are the same thing.

Why Is the Network Core Necessary?

The key characteristics of the Core, or Backbone, part of a network are:

High-speed connectivity. Today these are networks of hundreds of gigabits, and links are usually bundled to increase capacity.
Bringing the Internet Gateway, Access, Aggregation, and Datacenter networks together. The core connects many different parts of the network and glues them together.
Redundancy and high availability. Redundant physical circuits and devices are very common, because the impact of a failure is higher in this module than in other modules.
Full mesh or partial mesh deployment, as these types of topologies provide the greatest amount of redundancy and direct paths between different locations.
Commonly known in the operator community as the Backbone or 'P' layer.

Redundancy in this module is very important, and most Core network deployments in ISP networks are based on a full mesh or partial mesh. The reason for having full mesh physical connectivity in the Core network is that it provides the most optimal traffic flow and the shortest path between any two locations. But not every network can have a full mesh architecture, because it is the most expensive design option. Instead, many operators connect their Core/Backbone locations in a partial mesh model. In the partial mesh physical connectivity model, not all of the core locations are connected to each other; instead, only the Core POP locations that have high traffic demand between them are connected together. The Core/Backbone provides scalability to Service Provider networks. Without this layer, many Aggregation layers would need to be connected to each other to provide end-to-end connectivity.
This would be too costly, as many physical links would need to be provisioned. The Core layer reduces the number of circuits required between different Aggregation networks. If cost is a concern, the network is small, and scalability is not a critical consideration, then the network can be designed by collapsing the Aggregation and Core layers into a single layer.
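The cost argument for partial mesh over full mesh follows directly from simple link counting: a full mesh of n sites needs n*(n-1)/2 point-to-point circuits, which grows quadratically. The site counts below are just examples.

```python
# Circuit counts for a full mesh of core sites.

def full_mesh_links(n):
    """A full mesh of n sites needs n*(n-1)/2 point-to-point circuits."""
    return n * (n - 1) // 2

for n in (4, 8, 16):
    print(n, "sites:", full_mesh_links(n), "circuits for a full mesh")

# Output grows quadratically: 4 sites need 6 circuits, 8 need 28,
# and 16 already need 120. A partial mesh connecting only the
# high-traffic POP pairs provisions a fraction of that.
```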
Published - Wed, 25 May 2022