
QoS (Quality of Service)

A network with quality of service has the ability to deliver data traffic with a minimum amount of delay in an environment in which many users share the same network. QoS should not be confused with CoS (class of service). CoS classifies traffic into categories such as high, medium, and low (gold, silver, and bronze). Low-priority traffic is "drop eligible," while high-priority traffic gets the best service. However, if the network does not have enough bandwidth, even high-priority traffic may not get through. Traffic engineering, which enables QoS, is about making sure that the network can deliver the expected traffic loads.

A package-delivery service provides an analogy. You can request priority delivery for a package. The delivery service has different levels of priority (next day, two-day, and so on). However, prioritization does not guarantee the package will get there on time. It may only mean that the delivery service handles that package before handling others. To provide guaranteed delivery, various procedures, schedules, and delivery mechanisms must be in place. For example, Federal Express has its own fleet of planes and trucks, as well as a computerized package tracking system. Traffic engineers work out flight plans and schedule delivery trucks to make sure that packages are delivered as promised.

The highest quality of service is on a nonshared communication link such as a cable that directly connects two computers. No other users contend for access to the network. A switched Ethernet network in which one computer is attached to each switch port can deliver a high level of QoS. The only contention for the cable is between the computers that are exchanging data with one another. If the link is full duplex, there is no contention. Situations that cause QoS to degrade are listed here:

  • Shared network links, in which two or more users or devices must contend for the same communication channel.

  • Delays caused by networking equipment (e.g., inability to process large loads).

  • Delays caused by distance (satellite links) or excessive hops (cross-country or global routed networks).

  • Network congestion, caused by overflowing queues and retransmission of dropped packets.

  • Poorly managed network capacity or insufficient capacity. If a link has fixed bandwidth, the only option to improve performance is to manage QoS.

The starting point for providing QoS in any network is to control and avoid congestion. See "Congestion Control Mechanisms" for more information.

What can be done to improve QoS? The obvious solution is to overprovision network capacity and upgrade to the most efficient networking equipment. This is often practical on private LANs, but rarely across WAN links, where bandwidth is expensive. Another solution is to classify traffic into various priorities and place the highest-priority traffic in queues that get better service. This is how bandwidth is divided up in packet-switched networks. Higher-level queues get to send more packets, and so get a higher percentage of the bandwidth. New optical networks in the Internet core provide QoS with excess bandwidth. A single fiber strand can support hundreds or even thousands of wavelength circuits (lambdas). Lambdas can provide single-hop optical pathways between two points with gigabit bandwidth. A single circuit can be dedicated to traffic that needs a specific service level. See "Optical Networks."
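The queue-servicing scheme just described can be sketched as a weighted round-robin scheduler: each queue may send up to its weight in packets per round. This is a minimal illustration, not a production algorithm; the queue names and weights are hypothetical.

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Drain packets from per-priority queues; a queue with weight w
    may send up to w packets per scheduling round."""
    sent = []
    while any(queues):                      # until every queue is empty
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    sent.append(q.popleft())
    return sent

# Hypothetical example: "gold" traffic gets 3 slots per round, "bronze" 1,
# so gold receives roughly 75 percent of the bandwidth while it has traffic.
gold = deque(f"g{i}" for i in range(6))
bronze = deque(f"b{i}" for i in range(6))
order = weighted_round_robin([gold, bronze], [3, 1])
```

Once the gold queue empties, bronze inherits the full link, which is the desired work-conserving behavior.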

Service providers have been reluctant to implement QoS across their networks because of the management and logistics problems. If subscribers don't classify traffic in advance, then the provider will need edge devices that can classify traffic going into their networks. QoS features must also be set up from one end of a network to another, and that is often difficult to accomplish. QoS levels must be negotiated with every switch and router along a path. Still, QoS is getting easier to manage, and, in some cases, it is the only way to optimize network bandwidth.

Leading-edge service providers now offer a range of QoS service levels for Internet traffic. Subscribers specify QoS requirements in SLAs (service-level agreements). Some of the SLA specifications required for QoS are described here:

  • Throughput    An SLA can specify a guaranteed data transfer rate. This is easy on virtual circuit networks such as ATM. It is more difficult on IP networks.

  • Packet loss    When a shared network gets busy, queues in routers and other network devices can fill and start dropping packets. A provider may guarantee that packet loss will not exceed a specified maximum.

  • Latency    This is the delay in the time it takes a packet to cross a network. Packets may be held up in queues, on slow links, or because of congestion. The more networking devices a packet crosses, the bigger the delay. Delays of over 100 ms are disruptive to voice.

  • Jitter    Variation in delay from packet to packet. Jitter is especially disruptive to real-time voice and video, which depend on a steady stream of packets for smooth playback.
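Latency and jitter can be estimated from per-packet transit times (receive time minus send time). The sketch below uses the exponential smoothing approach that RTP (RFC 1889) specifies for its interarrival jitter estimate; the transit-time samples are hypothetical.

```python
def interarrival_jitter(transit_times):
    """RFC 1889-style smoothed jitter estimate from per-packet transit
    times, here expressed in milliseconds."""
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)                 # change in transit time
        jitter += (d - jitter) / 16.0       # exponential smoothing
    return jitter

# Hypothetical samples (ms): a steady link versus a congested, bursty one.
steady = interarrival_jitter([40, 40, 41, 40, 40])
bursty = interarrival_jitter([40, 90, 35, 120, 45])
```

Both paths have similar average latency, but the second produces a far higher jitter figure, which is what an SLA for voice traffic would bound.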

Of course, the range, location, and ownership of the network will make a big difference in how QoS is applied. An enterprise may wish to install QoS on its own intranet to support voice and video. QoS may also be applied to the LAN/WAN gateway to ensure that private WAN links or VPNs are appropriately loaded and provide quality service for intercompany voice calls, videoconferences, and so on. Most of the focus for QoS technologies is centered on the Internet because it lacks features that can provide QoS.

Service Levels: IP Versus ATM

The Internet is a connectionless packet-switching network, meaning that without any special QoS provisions, all services are best effort. In contrast, leased lines and ATM naturally support QoS because they deliver data in a predictable way. Leased lines such as T1 circuits use TDM (time division multiplexing), which provides fixed-size repeating slots for data. ATM uses fixed-size cells and has built-in traffic engineering parameters to ensure QoS.

Obtaining QoS in IP networks is not so easy, primarily for the following reasons:

  • The architecture is routed, meaning that packets may take different paths, which produces unpredictable delays.

  • IP is connectionless; that is, it does not have virtual circuit capabilities that could be used to allocate and guarantee bandwidth.

  • IP uses variable-size packets, which makes traffic patterns unpredictable.

  • Packets from many sources traverse shared links and may burst into routers, causing congestion; packet drops; retransmission; more congestion; and, ultimately, excessive delay that is unsuitable for real-time traffic.

Consider a typical LAN/WAN interface. It is an aggregation point where traffic from many sources inside the network comes together for transmission over the WAN link. If the WAN link has insufficient bandwidth, congestion will occur.

In the preceding scenario, all packets are equal. Packets for mission-critical applications may be dropped, while packets carrying the latest Dilbert cartoon get through. Classification is essential. Fortunately, packet classification is now easy with multilayer routing solutions from vendors such as Extreme Networks. See "Multilayer Switching." Still, the service these devices offer is more CoS oriented. Keep in mind that true QoS requires bandwidth management and traffic engineering across the networks that packets will travel.

ATM networks provide a number of native features to support QoS:

  • Fixed-size cells (as opposed to IP's variable-length packets) provide predictable throughput. As an analogy, if all boxcars on a train are the same size, you can predict how many will pass a certain point if you know the speed of the train.

  • Predictable behavior allows for bandwidth management and the creation of guaranteed service-level agreements.

  • ATM is also connection oriented, delivering data over virtual circuits that keep cells in order, an important requirement for real-time audio and video.

  • ATM supports admission control and policing, which monitor traffic and only allow a new flow if the network can support it without affecting the bandwidth requirements of other users.

  • ATM networks "police" traffic to prevent senders from exceeding their bandwidth allocations. If traffic exceeds a certain level, the network may drop cells in that circuit. Cells are classified, with some being more "drop eligible" than others.
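The boxcar analogy can be quantified: because every cell is exactly 53 bytes, cell and payload rates follow directly from the line rate. The arithmetic below uses an OC-3 line and ignores SONET framing overhead, so the figures are upper bounds.

```python
# Every ATM cell is 53 bytes: 48 bytes of payload plus a 5-byte header.
LINE_RATE_BPS = 155.52e6          # OC-3 line rate, bits per second
CELL_BITS = 53 * 8                # 424 bits per cell

cells_per_second = LINE_RATE_BPS / CELL_BITS     # roughly 366,800 cells/s
payload_bps = cells_per_second * 48 * 8          # roughly 140.8 Mbit/s usable
```

This predictability is exactly what makes ATM bandwidth management tractable: a switch knows precisely how many cells per second a circuit of a given rate will present.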

As a point of comparison, the Internet has no admission controls, which is probably good, but it also means that long file transfers can consume bandwidth and prevent other packets from getting through. This is especially disruptive to real-time traffic.

See "ATM (Asynchronous Transfer Mode)," "Admission Control," and "Traffic Management, Shaping, and Engineering."

The following sections describe the various techniques that may be used to provide QoS on the Internet and in enterprise networks. Note that some of these solutions provide only partial QoS, but are required to provide higher levels of service. The various solutions may be categorized as follows:

  • Congestion management    Schemes that help reduce congestion when it occurs or that actively work to prevent congestion from occurring.

  • Classification and queuing techniques    Traffic is classified according to service levels. Queues exist for each service level, and the highest priority queues are serviced first.

  • Bandwidth reservation techniques    Bandwidth is reserved in the network to ensure packet delivery.

  • Packet tagging and label switching    Packets are tagged with identifiers that specify a delivery path across a network of switches. The paths can be engineered to provide QoS.

Congestion Management Techniques

Managing network congestion is a critical part of any QoS scheme. TCP has some rudimentary congestion controls. The technique relies on dropped packets. When a packet is dropped, the receiver fails to acknowledge receipt to the sender. The sender assumes that the receiver or the network must be congested and scales back its transmission rates. This reduces the congestion problem temporarily. The sender will eventually start to scale up its transmissions and the process may repeat.
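The scale-back-and-probe behavior just described is additive increase, multiplicative decrease (AIMD): halve the window when a loss is detected, otherwise grow it by one segment per round trip. A minimal numeric sketch (window sizes in segments; the loss pattern is hypothetical):

```python
def aimd(cwnd, loss):
    """One AIMD step, the behavior TCP congestion control approximates:
    halve the congestion window on loss, otherwise grow it by one."""
    return max(cwnd // 2, 1) if loss else cwnd + 1

# Window evolution over a hypothetical loss pattern (True = packet drop).
window, history = 10, []
for loss in [False, False, True, False, False, False]:
    window = aimd(window, loss)
    history.append(window)
```

The sawtooth this produces, slow linear growth punctuated by sharp halvings, is the repeating cycle described above.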

Packets are dropped because a router queue is full or because a network device is using a congestion avoidance scheme, such as RED (random early detection). RED monitors queues to determine when they are getting full enough that they might overflow. It then drops packets in advance to signal senders that they should slow down. Fewer packets are dropped in this scheme.
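RED's drop decision can be sketched as a function of the averaged queue length: no drops below a low threshold, a linearly rising probability between the thresholds, and certain drop above the high one. The thresholds and weight below are illustrative values, not figures from any standard.

```python
def update_avg(avg, qlen, w=0.002):
    """Exponentially weighted moving average of the instantaneous
    queue length, as RED uses to smooth out bursts."""
    return (1 - w) * avg + w * qlen

def red_drop_probability(avg, min_th=5, max_th=15, max_p=0.1):
    """RED drop probability: zero below min_th, rising linearly to
    max_p at max_th, and 1.0 once the average reaches max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

# Probability grows as the averaged queue fills (thresholds: 5, 15 packets).
probs = [red_drop_probability(a) for a in (2, 10, 20)]
```

Because drops begin while the queue still has headroom, senders are signaled to slow down before the queue overflows and forces bursts of tail drops.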

The problem with RED is that it still relies on dropping packets to signal congestion. ECN (Explicit Congestion Notification) is an end-to-end congestion avoidance mechanism in which a router that is experiencing congestion sets a notification bit in a packet and forwards the packet to the destination. The destination node then sends a "slow down" message back to the sender.

Traffic shaping is a technique that "smoothes out" the flow of packets coming from upstream sources so that downstream nodes are not overwhelmed by bursts of traffic. An upstream node may be a host, or it may be a network device that has a higher data rate than the downstream network. At the same time, some hosts with priority requirements may be allowed to burst traffic under certain conditions, such as when the network is not busy. A traffic shaper is basically a regulated queue that takes uneven and/or bursty flows of packets and outputs them in a steady predictable stream so that the network is not overwhelmed with traffic.
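A traffic shaper of this kind is commonly implemented as a token bucket: tokens accrue at the permitted rate up to a burst allowance, and a packet may pass only if enough tokens are on hand. A minimal sketch, with a hypothetical rate and burst size in bytes:

```python
class TokenBucket:
    """Token-bucket traffic shaper: tokens accrue at `rate` bytes per
    second up to `burst`; a packet of `size` bytes conforms only if
    enough tokens are available when it arrives."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, size, now):
        elapsed = now - self.last
        self.last = now
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        if self.tokens >= size:
            self.tokens -= size
            return True          # conforming: send now
        return False             # non-conforming: queue or drop

# 1,000 bytes/s sustained, with a 1,500-byte burst allowance.
tb = TokenBucket(rate=1000, burst=1500)
burst_ok = tb.allow(1500, now=0.0)     # initial burst fits the bucket
second_ok = tb.allow(1500, now=0.5)    # only 500 tokens have accrued
later_ok = tb.allow(1500, now=2.0)     # enough tokens again by t = 2.0
```

The burst parameter is what lets a priority host "burst traffic under certain conditions" while the rate parameter keeps its long-term output steady.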

Refer to "Congestion Control Mechanisms" and "Queuing" for more information.

Classification, Admission, and Tagging

Any QoS scheme involves guaranteeing service levels to traffic flows. In a world of infinite bandwidth, all flows could be handled equally. But networks are still bandwidth limited, and congestion problems occur, often due to improper network design. Therefore, traffic must be classified (and, in some cases, tagged) so that downstream devices know what to do with it. Basic classification techniques are outlined here:

  • Inspect and classify (differentiate) incoming traffic using various techniques: "sniffing" the MAC address, noting the physical port on which the packet arrived, or reading IEEE 802.1Q VLAN information, IEEE 802.1D-1998 (formerly IEEE 802.1p) priority bits, source and destination IP addresses, well-known TCP/UDP port numbers, or application information at layer 7, such as cookies. Note that some encryption and tunneling schemes make packet inspection impossible. Also, some applications never use the same port twice, and a wide variety of applications use port 80 (the Web services port), which makes differentiating on port number difficult.

  • If a flow is requesting a particular service, use admission controls to either accept or reject the flow. Admission controls help enforce administrative policies, as well as provide accounting and administrative reporting.

  • Schedule the packets into appropriate queues and manage the queues in a way that ensures that each queue gets an appropriate level of service for its class.
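The first and third steps above can be sketched as a table-driven classifier that maps header fields to queue indices. The rules and queue numbering here are purely illustrative.

```python
# Illustrative rules: map (protocol, destination port) to a queue index.
# Queue 0 is the highest priority; unmatched traffic is drop-eligible.
RULES = {
    ("udp", 5060): 0,   # VoIP signaling -> highest-priority queue
    ("tcp", 80):   2,   # web traffic    -> best effort
}
DEFAULT_QUEUE = 3       # everything unmatched -> drop-eligible queue

def classify(protocol, dst_port):
    """Return the queue index for a packet, falling back to the default."""
    return RULES.get((protocol, dst_port), DEFAULT_QUEUE)

assignments = [classify("udp", 5060), classify("tcp", 80), classify("tcp", 9999)]
```

Real multilayer switches perform this lookup in hardware and on many more fields, but the logic, match a rule or fall through to best effort, is the same.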

Extreme Networks has a line of switches with built-in traffic classification features. Figure Q-1 shows an example of bandwidth allocation for various types of traffic.

Classification requires administrative decisions about how traffic should be classified and where it should be tagged. Administrators might classify traffic based on whether it is best effort and suitable for discard, real-time voice and video, network controls (e.g., OSPF messages), or mission critical.

The following classification schemes identify traffic near its source and mark packets before they enter the network. Network nodes only need to read the markings and forward packets appropriately.

  • IEEE frame tagging    This scheme defines a tag, inserted into an Ethernet frame, which contains three bits that can be used to identify class of service. See the next section for more information.

  • IETF Differentiated Services (Diff-Serv)    Diff-Serv is an IETF specification that works at the network layer. It alters bits in the IP ToS field to signal a particular class of service. Diff-Serv works across networks, including carrier and service provider networks that support the service; and, therefore, it has become an important scheme for specifying QoS across the Internet. Diff-Serv is covered briefly later in this topic and under its own topic.

The first scheme works over LANs, while Diff-Serv works over internetworks. The tag information in MAC-layer frames will be lost if the frame crosses a router. However, a boundary device can read the tag before forwarding and copy the class information into Diff-Serv markings.

MAC-Layer Prioritization

As mentioned, the IEEE defined a method for inserting a tag into an IEEE MAC-layer frame that contains bits to define class of service. During development, this was known as Project 802.1p, and you will see it referred to that way in much of the literature. It is now officially part of IEEE 802.1D-1998. The tag defines the following eight "user priority" levels that provide signals to network devices as to the class of service that the frame should receive:

  • Priority 7    Network control traffic such as router configuration messages

  • Priority 6    Voice traffic, such as NetMeeting, that is especially sensitive to jitter

  • Priority 5    Video, which is high bandwidth and sensitive to jitter

  • Priority 4    Controlled load, latency-sensitive traffic such as SNA transactions

  • Priority 3    Better than best effort, which would include important business traffic that can tolerate some delay

  • Priority 2    Spare (undefined in the standard)

  • Priority 1    Background traffic such as backups, noncritical replications, some electronic mail, and so on

  • Priority 0    Best-effort traffic; this is also the default if no priority is specified

Note that the standard ranks priorities 1 and 2 below priority 0, so untagged best-effort traffic still takes precedence over background traffic.
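The priority levels above are carried in the 3-bit priority field of the 802.1Q tag, which a switch or NIC inserts into the Ethernet frame. A sketch of building the 4-byte tag (the priority and VLAN values are illustrative):

```python
import struct

def dot1q_tag(priority, vlan_id, dei=0):
    """Build the 4-byte IEEE 802.1Q tag: the 0x8100 TPID followed by
    the TCI word (3-bit user priority, 1-bit DEI, 12-bit VLAN ID)."""
    if not (0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF):
        raise ValueError("priority is 3 bits, VLAN ID is 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)    # network byte order

tag = dot1q_tag(priority=6, vlan_id=100)      # voice class on VLAN 100
```

Because only three bits are available, the tag can signal class of service but cannot carry the richer per-hop behaviors that Diff-Serv encodes at layer 3.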

A method for reordering and moving delay-sensitive real-time traffic to the front of a queue is also defined. A component of this scheme is GARP (Generic Attribute Registration Protocol), which LAN switches and network-attached devices use to register attributes, such as VLAN membership, with one another. Note that 802.1D-1998 provides at the LAN level what Diff-Serv provides at layer 3 across internetworks. MAC-layer tags may be used to signal a class of service to Diff-Serv.

Two Web sites provide additional information:

Intel paper: "Layer 2 Traffic Prioritization"

http://www.intel.com/network/white_papers/priority_packet.htm

IEEE 802.1 Working Group Web page

http://grouper.ieee.org/groups/802/1/index.html

IP ToS

The role of the IP ToS field has changed with the development of Diff-Serv. The original meaning of the ToS field was defined in RFC 791 (Internet Protocol, September 1981); however, it was never used in a consistent way. Most routers are aware of the field, but it has little meaning across public networks. Many enterprises have used it internally to designate various classes of service or to prioritize traffic across private WAN links.

The ToS field is divided into two sections: the Precedence field (three bits) and a field that is customarily called "Type-of-Service" or "TOS" (five bits). Interestingly, the Precedence field was intended for Department of Defense applications to signal a priority message in times of crisis or when a five-star general wanted to get a good tee time.

Diff-Serv redefined the field as the DS Field (Diff-Serv Field). RFC 2474 (Definition of the Differentiated Services Field in the IPv4 and IPv6 Headers, December 1998) describes this further. See "Differentiated Services (Diff-Serv)."
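The old and new readings of the byte can be shown side by side: the RFC 791 Precedence value occupies the top three bits, while the RFC 2474 DSCP occupies the top six. The example value, DSCP 46 (the "Expedited Forwarding" codepoint), is a real assignment; the helper function itself is just an illustration.

```python
def decode_tos(tos_byte):
    """Read the same 8-bit IP header field both ways: the original
    RFC 791 Precedence (top 3 bits) and the RFC 2474 DSCP (top 6 bits).
    The low 2 bits are reserved in RFC 2474 (later used for ECN)."""
    return {
        "precedence": tos_byte >> 5,
        "dscp": tos_byte >> 2,
        "ecn": tos_byte & 0b11,
    }

# DSCP 46 (Expedited Forwarding) occupies ToS byte 0xB8 and maps onto
# the old Precedence value 5, so legacy routers still treat it as urgent.
fields = decode_tos(0xB8)
```

This backward compatibility is deliberate: the high-order bits of common DSCP values line up with the old Precedence levels, so Diff-Serv markings degrade gracefully on older equipment.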

IETF QoS Solutions

The IETF has been working to define Internet QoS models for many years. The task has not been easy since packets must cross many networks, and providers must agree not only how QoS will be managed, but also how it is paid for. The primary QoS techniques developed by the IETF are Int-Serv (Integrated Services), Diff-Serv (Differentiated Services), and MPLS (Multiprotocol Label Switching), as described next. Each of these is discussed under its own heading elsewhere.

  • Integrated Services (Int-Serv)    This is a model for providing QoS on the Internet and intranets. The intention of Int-Serv designers was to set aside some portion of network bandwidth for traffic such as real-time voice and video that required low delay, low jitter (variable delay), and guaranteed bandwidth. The Int-Serv Working Group developed RSVP (Resource Reservation Protocol), a signaling mechanism to specify QoS requirements across a network. Int-Serv has scalability problems and it was too difficult to deploy on the Internet. However, RSVP is used in enterprise networks, and its control mechanism for setting up bandwidth across a network is being used in new ways with MPLS.

  • Differentiated Services (Diff-Serv)    Diff-Serv classifies and marks packets so that they receive a specific per-hop forwarding at network devices along a route. The important part is that Diff-Serv does the work at the edge so that network devices only need to get involved in properly queuing and forwarding packets. Diff-Serv works at the IP level to provide QoS based on IP ToS settings. Diff-Serv is perhaps the best choice available today for signaling QoS levels.

  • MPLS (Multiprotocol Label Switching)    MPLS is a protocol, designed primarily for Internet core networks, that is meant to provide bandwidth management and quality of service for IP and other protocols. Control of core network resources is accomplished by building LSPs (label switched paths) across networks and rapidly forwarding IP packets across the network through these paths. By labeling packets with an indicator of the LSP they are to traverse, it is possible to eliminate the overhead of inspecting packets at every network device along the way. LSPs are similar to virtual circuits in ATM and frame relay networks, and traffic engineering approaches can be used to create LSPs that deliver a required level of service.

Policies and Policy Protocols

The final pieces of the QoS picture are policies, policy services, and policy signaling protocols. Most of the QoS systems just described use policy systems to keep track of how network users and network devices can access network resources. A defining feature of a policy system is that it works across a large network and provides policy information to appropriate devices within that network.

A policy architecture consists of the following components, which primarily manage the rules that govern how network resources may be used by specific users, applications, or systems. When rules are specified and programmed into policy systems, they are known as policies.

  • Policy clients    Network devices that process network traffic such as switches and routers running various queuing algorithms. Policy clients query policy servers to obtain rules about how traffic should be handled.

  • Policy servers    The central authority that interprets network policies and distributes them to policy clients.

  • Policy information system    The information about who or what can use network resources is stored in some type of database, usually a directory services database.

This architecture allows network administrators to specify policies for individuals, applications, and systems in a single place: the policy information system. The policy server then uses protocols such as LDAP (Lightweight Directory Access Protocol) or SQL to obtain this information and form policies that can be distributed to policy clients. Policy clients talk to policy servers via network protocols such as COPS (Common Open Policy Service) and SNMP (Simple Network Management Protocol). COPS is an intradomain mechanism for allocating bandwidth resources, and it is being adapted for use in establishing policy in Diff-Serv-capable networks.
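The client/server split described above can be sketched as a rule store that the policy server consults on behalf of its clients. The group names, rule fields, and wildcard convention below are all hypothetical.

```python
# Hypothetical central rule store, as a policy information system might
# hold it: (user group, application) -> handling rule.  A "*" group acts
# as a wildcard fallback.
POLICY_STORE = {
    ("sales", "video"): {"queue": 1, "max_kbps": 2000},
    ("*",     "email"): {"queue": 3, "max_kbps": 500},
}

def policy_lookup(group, application):
    """What a policy server does for a client query (e.g., over COPS):
    return the most specific matching rule, falling back to a wildcard."""
    return (POLICY_STORE.get((group, application))
            or POLICY_STORE.get(("*", application)))

rule = policy_lookup("engineering", "email")   # falls back to the wildcard
```

A real deployment would fetch these rules from a directory via LDAP and push them to switches and routers, but the lookup logic a policy client relies on is this simple in outline.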

This topic is covered in more detail under "Policy-Based Management." In addition, RFC 2768 (Network Policy and Services: A Report of a Workshop on Middleware, February 2000) provides useful information about policy.

Additional QoS Information

The following IETF working groups are developing QoS recommendations and standards. Refer to the working group pages for a list of related RFCs and other documents.

Audio/Video Transport (avt)

http://www.ietf.org/html.charters/avt-charter.html

Differentiated Services (Diff-Serv)

http://www.ietf.org/html.charters/diffserv-charter.html

Endpoint Congestion Management (ecm)

http://www.ietf.org/html.charters/ecm-charter.html

Integrated Services (Int-Serv)

http://www.ietf.org/html.charters/intserv-charter.html

Integrated Services over Specific Link Layers (issll)

http://www.ietf.org/html.charters/issll-charter.html

Internet Traffic Engineering (tewg)

http://www.ietf.org/html.charters/tewg-charter.html

Policy Framework (policy)

http://www.ietf.org/html.charters/policy-charter.html

Resource Allocation Protocol (rap)

http://www.ietf.org/html.charters/rap-charter.html

Resource Reservation Setup Protocol (rsvp)

http://www.ietf.org/html.charters/rsvp-charter.html

The following RFCs provide more information about QoS. More specific RFCs are listed under the headings just mentioned.

  • RFC 1633 (Integrated Services in the Internet Architecture: An Overview, June 1994)

  • RFC 2386 (A Framework for QoS-based Routing on the Internet, August 1998)

  • RFC 2430 (A Provider Architecture for Differentiated Services and Traffic Engineering, October 1998)

  • RFC 2475 (An Architecture for Differentiated Services, December 1998)

  • RFC 2581 (TCP Congestion Control, April 1999)

  • RFC 2702 (Requirements for Traffic Engineering over MPLS, September 1999)

  • RFC 2914 (Congestion Control Principles, September 2000)

  • RFC 2990 (Next Steps for the IP QoS Architecture, November 2000)



Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
All rights reserved under Pan American and International copyright conventions.