Quality of service (QoS) is the overall performance of a telephony or computer network, particularly the performance seen by the users of the network.
Above is the Quality of Service definition from Wikipedia. Performance metrics can include bandwidth, delay, jitter, packet loss, and so on.
Two Quality of Service approaches have been defined by the standards organizations: Intserv (Integrated Services) and Diffserv (Differentiated Services).
In this post, I will not explain each method or the special tools used in each. Instead, I will discuss which method makes sense in a particular design, and which Quality of Service tools can solve user needs without compromising the network design goals.
Intserv demands that each and every flow request bandwidth from the network, and the network reserves the required bandwidth for that user for the duration of the conversation. Think of it as on-demand circuit switching: every flow of every user would have to be remembered by the network. This clearly creates a resource problem (CPU, memory, bandwidth) on the network, so it was never widely adopted.
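To make the scaling problem concrete, here is a minimal sketch (not any vendor's implementation) of an Intserv-style node: it must keep one state entry per admitted flow, and that state grows linearly with the number of user flows.

```python
class IntservNode:
    """Toy per-flow reservation model; names and numbers are illustrative."""

    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reservations = {}  # one entry per flow 5-tuple

    def reserve(self, flow_id, bw_mbps):
        """Admit the flow only if the requested bandwidth is still free."""
        used = sum(self.reservations.values())
        if used + bw_mbps > self.capacity:
            return False  # admission rejected
        self.reservations[flow_id] = bw_mbps
        return True

node = IntservNode(capacity_mbps=100)
assert node.reserve(("10.0.0.1", "10.0.0.2", 5060, 16384, "udp"), 40)
assert node.reserve(("10.0.0.3", "10.0.0.4", 5060, 16386, "udp"), 40)
# A third 40 Mbps flow exceeds the 100 Mbps capacity and is rejected.
assert not node.reserve(("10.0.0.5", "10.0.0.6", 5060, 16388, "udp"), 40)
# State grows with admitted flows -- the resource problem described above.
assert len(node.reservations) == 2
```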
Although with RSVP-TE (RSVP Traffic Engineering) a particular LSP can ask for bandwidth from the network nodes, which in turn reserve it, the number of LSPs between the edge nodes of the network is orders of magnitude smaller than the number of individual user flows.
The second Quality of Service approach is Diffserv (Differentiated Services). It doesn't require a reservation; instead, flows are aggregated and placed into classes. The network operator can then control each node to treat the aggregated flows differently.
Obviously, this is far more scalable than the Intserv Quality of Service model.
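The aggregation step can be sketched as follows; the DSCP-to-class table here is hypothetical (real deployments define their own marking plan), but it shows how thousands of flows collapse into a handful of classes:

```python
# Hypothetical class map; DSCP 46 is EF, 26 is AF31, 0 is default.
DSCP_TO_CLASS = {
    46: "real-time",
    26: "video",
    0:  "best-effort",
}

def classify(dscp):
    """Aggregate a flow into a class; unknown markings fall to best-effort."""
    return DSCP_TO_CLASS.get(dscp, "best-effort")

# Many flows, but the node only needs per-class state:
packets = [46, 46, 26, 0, 13, 46]
classes = {classify(d) for d in packets}
assert classes == {"real-time", "video", "best-effort"}
```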
When you practice Quality of Service, you learn Classification, Marking, Queueing, Policing, and Shaping tools.
And you are also told that in order to provide the best Quality of Service for the user, you need to deploy it end to end.
But where are those ends?
The names of the nodes differ by business. On an Enterprise campus, your access switch is one end, and the branch routers, data center virtual or physical access switches, and internet gateways might be the other end.
In the Service Provider business, a Provider Edge router is one end, and other Provider Edge routers, data center virtual or physical access switches, internet gateways, and service access devices such as DSLAMs and CMTSes might be the other end.
So an end-to-end approach will often fail, since the end-to-end domain may be too broad, with too many devices to manage.
But definitely, some tools make sense in some places in some networks.
For example, "Policing" in Service Provider networks can be used for billing purposes: the provider can drop the excess usage or charge for a premium service.
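A policer is commonly implemented as a token bucket: conforming packets pass (or are marked in-contract), excess packets are dropped (or re-marked). A minimal single-rate sketch, with illustrative rate and burst values rather than any vendor's defaults:

```python
class TokenBucketPolicer:
    """Single-rate policer sketch: conforming traffic passes, excess is dropped."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.burst = burst_bytes     # bucket depth
        self.tokens = burst_bytes    # bucket starts full
        self.last = 0.0

    def conforms(self, size_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True   # transmit (or mark in-contract)
        return False      # drop (or mark out-of-contract)

p = TokenBucketPolicer(rate_bps=8000, burst_bytes=1500)  # 1 kB/s, 1500 B burst
assert p.conforms(1500, now=0.0)      # initial burst allowed
assert not p.conforms(1500, now=0.1)  # only ~100 B refilled: excess dropped
assert p.conforms(1500, now=1.6)      # bucket refilled after 1.5 s more
```

Note that unlike shaping, policing keeps no queue: a non-conforming packet is acted on immediately, which is why it fits the billing use case above.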
Policing is deployed together with classification and marking, but you don't need to deploy QoS tools on the other nodes: the classification and marking remain locally significant. Policing is also used for Call Admission Control.
Imagine you have a 200 Mbps link and each Telepresence flow requires 45 Mbps. You can place 4 calls onto the link. If a 5th call is set up, all the other 4 calls suffer as well, since 5 × 45 = 225 Mbps exceeds the 200 Mbps link capacity plus buffering, and packets have to be dropped.
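The arithmetic above can be checked directly; the `admit_call` helper here is a hypothetical CAC check for illustration, not a real product API:

```python
# Worked numbers from the example: 200 Mbps link, 45 Mbps per Telepresence call.
LINK_MBPS = 200
CALL_MBPS = 45

max_calls = LINK_MBPS // CALL_MBPS
assert max_calls == 4             # 4 * 45 = 180 Mbps fits on the link
assert 5 * CALL_MBPS > LINK_MBPS  # a 5th call would need 225 Mbps

def admit_call(active_calls):
    """CAC sketch: reject a new call instead of degrading the admitted ones."""
    return (active_calls + 1) * CALL_MBPS <= LINK_MBPS

assert admit_call(3)      # 4th call admitted
assert not admit_call(4)  # 5th call rejected up front
```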
Another Quality of Service tool is queueing; in particular, it is used wherever there is oversubscription.
Oversubscription can be between the nodes (on the links) or within a node.
If the congestion is within the node, queueing is applied in the ingress direction to protect some traffic (perhaps real-time) from head-of-line blocking in the node's switching fabric, or in the egress direction, between the nodes, to protect selected traffic.
The problem is that if there is enough traffic, the buffers (queues) will fill up, and eventually all the traffic will be dropped no matter which queueing method (LLQ, WFQ, CBWFQ) is used.
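This point can be illustrated with a toy simulation (illustrative numbers, not a model of any real scheduler): once sustained arrivals exceed the service rate, a finite buffer fills and every later packet is tail-dropped, regardless of the scheduling discipline in front of it.

```python
from collections import deque

QUEUE_DEPTH = 64       # packets the buffer can hold
SERVICE_PER_TICK = 1   # packets dequeued per tick
ARRIVALS_PER_TICK = 2  # sustained offered load: twice the service rate

queue, drops = deque(), 0
for tick in range(200):
    for _ in range(ARRIVALS_PER_TICK):
        if len(queue) < QUEUE_DEPTH:
            queue.append(tick)
        else:
            drops += 1           # buffer full: the packet is lost
    for _ in range(SERVICE_PER_TICK):
        if queue:
            queue.popleft()      # serve a packet

assert drops > 100                     # sustained congestion means drops
assert len(queue) == QUEUE_DEPTH - 1   # buffer pinned at capacity
```

No queueing method changes this outcome; scheduling only decides which class suffers first, not whether the buffer eventually overflows.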
So if you try to design end-to-end Quality of Service by enabling queueing to cover every possible oversubscription point in the network, you will fail.
When congestion happens, flows will just die, one a couple of milliseconds after another.
The design tradeoff here is adding more bandwidth versus engineering every possible congestion point. I am not talking only about the initial QoS design phase, but also about the complexity that QoS brings to the design. The network operator needs to manage, understand, and troubleshoot QoS both in steady state and during failures.
Bandwidth is getting cheaper every day, but the complexity of Quality of Service will stay there forever.