BGP route reflectors, used as an alternative to a full-mesh IBGP design, help IBGP scale. Route reflector clustering provides redundancy in a BGP RR design: a route reflector and its clients form a cluster.
In IBGP topologies, every BGP speaker has to be in a logical full mesh, so every BGP router needs a direct IBGP neighborship with every other BGP router. Route reflection is the exception: with a route reflector in place, an IBGP router sets up BGP neighborships only with the route reflectors.
In this article, I will focus on route reflector clusters and their design.
For those who want to understand BGP Route Reflectors, I highly recommend my ‘BGP Route Reflector in Plain English’ post.
If you want to learn about the Route Reflector Loop Problem, check this post.
I also explain BGP Route Reflectors, Route Reflector Design Options, and many other Service Provider design topics in my Service Provider Design Workshop.
What is the BGP Route Reflector Cluster ID?
Route Reflector Cluster ID is a four-byte BGP attribute, and, by default, it is taken from the Route Reflector’s BGP router ID.
If two routers share the same BGP cluster ID, they belong to the same cluster.
Before reflecting a route, a route reflector prepends its cluster ID to the route’s cluster list. If the route is originated by the route reflector itself, the route reflector does not create a cluster list.
If the route is sent to an EBGP peer, the RR removes the cluster list information.
If the route is received from an EBGP peer, the RR does not create a cluster list attribute.
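The cluster ID is a per-router setting. As a minimal sketch in Cisco IOS-style configuration (the AS number, addresses, and the cluster-ID value are hypothetical), this is how two route reflectors could be given the same cluster ID, overriding the default of using each router’s own BGP router ID:

```
! R1 (route reflector)
router bgp 65000
 bgp cluster-id 1.1.1.1
 neighbor 10.0.0.3 remote-as 65000
 neighbor 10.0.0.3 route-reflector-client
 neighbor 10.0.0.4 remote-as 65000
 neighbor 10.0.0.4 route-reflector-client

! R2 (route reflector) - same cluster-id puts R1 and R2 in one cluster
router bgp 65000
 bgp cluster-id 1.1.1.1
 neighbor 10.0.0.3 remote-as 65000
 neighbor 10.0.0.3 route-reflector-client
 neighbor 10.0.0.4 remote-as 65000
 neighbor 10.0.0.4 route-reflector-client
```

If `bgp cluster-id` is omitted, each RR uses its own router ID, which effectively gives the two RRs different cluster IDs.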
Why is the Cluster List used?
The cluster list is used for loop prevention, and only by the route reflectors. Route reflector clients do not use the cluster list attribute, so they do not even know which cluster they belong to.
If there are two Route Reflectors, are the same or different cluster IDs better?
If an RR receives a route whose cluster list already contains its own cluster ID, the route is discarded.
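This loop check can be sketched in a few lines of Python. This is an illustration of the rule, not a real BGP implementation; the cluster-ID value is hypothetical.

```python
# Sketch of the cluster-list loop-prevention check a route reflector
# performs on reflected IBGP routes. Illustrative only.

MY_CLUSTER_ID = "1.1.1.1"  # hypothetical cluster ID shared by R1 and R2

def reflect(cluster_list, my_cluster_id=MY_CLUSTER_ID):
    """Prepend the RR's own cluster ID before reflecting a route."""
    return [my_cluster_id] + list(cluster_list)

def accept_reflected_route(cluster_list, my_cluster_id=MY_CLUSTER_ID):
    """Discard any route whose cluster list already carries our cluster ID."""
    return my_cluster_id not in cluster_list

# R2 reflects a route learned from client R4 (empty cluster list so far):
reflected = reflect([])
# R1, configured with the same cluster ID, rejects R2's reflected copy...
print(accept_reflected_route(reflected))  # False
# ...but accepts the same prefix learned directly from client R4:
print(accept_reflected_route([]))         # True
```

With different cluster IDs on the two RRs, the first check would pass and the reflected copy would be accepted as well.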
Let’s start with the basic topology.
Figure-1 Route Reflectors use the same cluster ID
In Figure-1 above, R1 and R2 are the route reflectors, and R3 and R4 are the RR clients. Both route reflectors use the same cluster ID.
Green lines depict physical connections. Red lines show IBGP connections.
Assume that both route reflectors use cluster ID 126.96.36.199, which is R1’s router ID.
R1 and R2 receive routes from R4.
R1 and R2 receive routes from R3.
As route reflectors, both R1 and R2 prepend this shared cluster ID to the cluster list of the routes they reflect to each other. However, since they use the same cluster ID, each discards the routes reflected by the other.
That’s why, if the RRs use the same cluster ID, the RR clients have to connect to both RRs.
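On the client side, that dual connectivity is just two IBGP neighbor statements. A minimal IOS-style sketch (AS number and addresses are hypothetical):

```
! R4 (RR client) peers with both route reflectors
router bgp 65000
 neighbor 10.0.0.1 remote-as 65000   ! R1
 neighbor 10.0.0.2 remote-as 65000   ! R2
```

The client needs no cluster-related configuration at all; only the RRs carry the `route-reflector-client` and cluster-ID settings.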
In this topology, R1 learns the routes behind R4 only from the direct R1-R4 IBGP session (R1 rejects the copies reflected by R2). Of course, the IGP path still goes through R1-R2-R4, since there is no physical link between R1 and R4.
If the physical link between R2 and R4 goes down, both IBGP sessions, R1-R4 and R2-R4, go down as well. Thus, the networks behind R4 cannot be learned.
Since the routes cannot be learned from R2 (same cluster ID), the networks behind R4 also become unreachable if the physical link stays up but the R1-R4 IBGP session goes down. However, if the BGP neighborships run between loopbacks and the physical topology is redundant, the chance of an IBGP session going down is very low.
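Peering between loopbacks is what decouples the IBGP session from any single physical link. A hedged IOS-style sketch (addresses and AS number are hypothetical):

```
! R1: peer with R4's loopback, sourcing the session from its own loopback
router bgp 65000
 neighbor 10.255.0.4 remote-as 65000
 neighbor 10.255.0.4 update-source Loopback0
 neighbor 10.255.0.4 route-reflector-client
```

As long as the IGP can still reach the peer’s loopback over any remaining path, the IBGP session stays up through individual link failures.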
Note: Having redundant physical links in a network design is a common best practice. That’s why the topology below is more realistic.
What if we add physical links between R1-R4 and R2-R3?
Figure-2 Route Reflectors use the same cluster ID; physical cross-connections are added between the RRs and the RR clients
In Figure-2 physical cross-connections are added between R1-R4 and R2-R3.
Still, we are using the same BGP cluster ID on the route reflectors.
Thus, when R2 reflects R4’s routes to R1, R1 will discard them; instead, R1 learns R4’s routes through its direct IBGP peering with R4. In this case, the IGP path changes to the direct R1-R4 link rather than R1-R2-R4.
If the R1-R4 physical link fails, the IBGP session will not go down as long as the IGP converges to the R1-R2-R4 path faster than the BGP session times out (with default timers, it does).
Thus, having the same cluster ID on the RRs saves a lot of memory and CPU resources on the route reflectors, and link failures do not cause IBGP session drops as long as there is enough redundancy in the network.
If we used different BGP cluster IDs on R1 and R2, R1 would accept the reflected routes from R2 in addition to the routes from its direct peering with R4.
Orhan Ergun recommends the same BGP cluster ID for route reflector redundancy.
Otherwise, the route reflectors would keep an extra copy of each prefix, which wouldn’t be advertised to the route reflector clients anyway.
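The memory argument above can be made concrete with a small counting sketch. This is purely illustrative (not a BGP implementation), and the cluster-ID values are hypothetical:

```python
# Sketch: copies of one client prefix kept in an RR's BGP table when it
# hears the prefix both directly from the client and reflected by the
# other RR. Illustrates the memory cost of different cluster IDs.

def copies_stored(own_cluster_id, peer_rr_cluster_id):
    """Return how many copies of the prefix this RR keeps."""
    copies = 1  # the copy learned directly from the client
    # the reflected copy carries the other RR's cluster ID
    reflected_cluster_list = [peer_rr_cluster_id]
    if own_cluster_id not in reflected_cluster_list:
        copies += 1  # passes the loop check, so it is stored as well
    return copies

print(copies_stored("1.1.1.1", "1.1.1.1"))  # same cluster ID  -> 1
print(copies_stored("1.1.1.1", "2.2.2.2"))  # different IDs    -> 2
```

Multiply that extra copy by a full Internet table and by every client prefix, and the resource difference on the RRs becomes significant.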