CCNP 4 Version 5 Module 3

Overview

As user applications continue to drive network growth and evolution, demand for support of different types of traffic also increases. Applications with differing network requirements create the need for administrative policies that control how the network treats individual applications. The network must service requests from business-critical and delay-sensitive applications with priority. The deployment and enforcement of quality of service (QoS) policies within a network plays an essential role in enabling network administrators and architects to meet networked application demands in converged networks. QoS is a crucial element of any administrative policy that decides how to handle application traffic on a network.

This module introduces the concept of QoS, explains key issues of networked applications, lists models for providing QoS in the network (best effort, integrated services [IntServ], and differentiated services [DiffServ]), and describes various methods for implementing QoS, including the Cisco Modular QoS CLI (MQC) and Cisco Router and Security Device Manager (SDM) QoS Wizard.

3.1.1 Converged Network Quality Issues

Before network convergence was commonplace, network engineering focused on connectivity. Figure shows how nonconverged networks met differing traffic needs simply by connecting endpoints over dedicated links—data to data, voice to voice, and video to video. Data rates delivered to the network links may result in sporadic bursts of data. Access to the bandwidth in these networks is on a first-come, first-served basis. The data rate available to any one user varies depending on the number of users accessing the network at that time.

Protocols are used in nonconverged traditional networks to handle the bursty nature of data networks. Data networks can survive brief outages. For example, when you retrieve e-mail, a delay of a few seconds is generally not noticeable. A delay of several minutes is annoying, but not serious.

Traditional networks also had requirements for applications such as data, video, and Systems Network Architecture (SNA). Since each application has different traffic characteristics and requirements, network designers deployed nonintegrated networks. These nonintegrated networks each carried a specific type of traffic: a data network, an SNA network, a voice network, and a video network.

Figure illustrates a converged network in which voice, video, and data traffic use the same network facilities. Merging these different traffic streams with dramatically differing requirements can lead to a number of problems. Key among these problems is the fact that voice traffic and video traffic are very time-sensitive and must have priority.

In a converged network, constant, small-packet voice flows compete with bursty data flows. Although the packets carrying voice traffic on a converged network are typically very small, the packets cannot tolerate delay or variation in delay while they traverse the network. When delay and delay variation occur, voices break up and words become incomprehensible. Conversely, packets carrying file transfer data are typically large and the nature of IP lets the packets survive delays and drops. It is possible to retransmit part of a dropped data file, but it is not feasible to retransmit part of a voice conversation. Critical voice and video traffic must have priority over data traffic. Mechanisms must be in place to provide this priority.

Another reality of the converged network is that service providers cannot have failures when voice and video traffic are involved. Although a file transfer or an e-mail packet can wait until a failed network recovers and delays are almost transparent, voice and video packets cannot wait. Converged networks must provide secure, predictable, measurable, and, sometimes, guaranteed services. Even a brief network outage on a converged network seriously disrupts business operations.

Network administrators and architects achieve required performance from the network by managing delay, delay variation (jitter), bandwidth provisioning, and packet loss parameters with quality of service (QoS) techniques. Multimedia streams, such as those used in IP telephony or videoconferencing, are very sensitive to delivery delays and create unique QoS demands. If service providers rely on a best-effort network model, packets may not arrive in order, in a timely manner, or at all. The result is unclear pictures, jerky and slow movement, and sound that is not synchronized with images.

3.1.2 Quality Issues in Converged Networks

With inadequate network configuration, voice transmission is irregular or unintelligible. Gaps in speech where pieces of speech are interspersed with silence are particularly troublesome.

Delay causes poor caller interactivity, which can cause echo and talker overlap. Echo is the effect of the signal reflecting the voice of the speaker from the far-end telephone equipment back into the ear of the speaker. Talker overlap is caused when one-way delay becomes greater than 250 ms. When this long delay occurs, one talker steps in on the speech of the other talker.

The worst-case result of delay is a disconnected call. If there are long gaps in speech, the parties will hang up. If there are signaling problems, calls are disconnected. Such events are unacceptable in voice communications, yet are quite common with an inadequately prepared data network that is attempting to carry voice.

Converged enterprise networks face four major issues:

• Bandwidth capacity: Large graphics files, multimedia uses, and increasing use of voice and video cause bandwidth capacity problems over data networks.

• End-to-end delay (both fixed and variable): Delay is the time it takes for a packet to reach the receiving endpoint after being transmitted from the sending endpoint. This period of time is called the “end-to-end delay” and consists of two components:

o Fixed network delay: Two types of fixed network delay are serialization and propagation delays. Serialization is the process of placing bits on the circuit. The higher the circuit speed, the less time it takes to place the bits on the circuit. Therefore, the higher the speed of the link, the less serialization delay is incurred. Propagation delay is the time it takes frames to transit the physical media.

o Variable network delay: Processing delay is a type of variable delay and is the time required by a networking device to look up the route, change the header, and complete other switching tasks. In some cases, the packet must also be manipulated, as, for example, when the encapsulation type or the hop count must be changed. Each of these steps can contribute to processing delay.

• Variation of delay (also called jitter): Jitter is the delta, or difference, in the total end-to-end delay values of two voice packets in the voice flow.

• Packet loss: WAN congestion is the usual cause of packet loss. In the case of voice traffic, packet loss causes dropped speech. Conversations are difficult to follow and communications are confused. If one talker tries to accommodate for the lost speech by repeating the speech, the result on the receiving end can sound like stuttering.

3.1.3 Measuring Available Bandwidth

Figure shows an empty network with four hops between a server and a client. Each hop uses different media with different bandwidths. The maximum available bandwidth is equal to the bandwidth of the slowest link as follows:

Bandwidth_max = min(10 Mbps, 256 kbps, 512 kbps, 100 Mbps) = 256 kbps

The calculation of the available bandwidth, however, is much more complex in cases where multiple flows are traversing the network. An IP flow is a unidirectional series of IP packets of a given protocol traveling between a source and a destination within a certain period. When there are multiple flows, calculate the average bandwidth available per flow as follows:

Bandwidth_avail = Bandwidth_max / number of flows
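A short calculation makes both formulas concrete. In the sketch below, the link speeds are the example values from the figure and text; the flow count is an assumption added purely for illustration.

```python
# Illustrative calculation of maximum and per-flow bandwidth along a path.
# Link speeds (in kbps) are the example values from the text; the flow count
# is an assumed value for demonstration.

links_kbps = [10_000, 256, 512, 100_000]    # 10 Mbps, 256 kbps, 512 kbps, 100 Mbps
flows = 4                                   # assumed number of concurrent flows

bandwidth_max = min(links_kbps)             # the slowest link limits the path
bandwidth_per_flow = bandwidth_max / flows  # average share per flow

print(f"Bandwidth_max   = {bandwidth_max} kbps")       # 256 kbps
print(f"Bandwidth_avail = {bandwidth_per_flow} kbps")   # 64.0 kbps per flow
```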

Inadequate bandwidth can have performance impacts on network applications, especially those that are time-sensitive (such as voice) or consume a lot of bandwidth (such as videoconferencing). These performance impacts result in poor voice and video quality. In addition, interactive network services, such as terminal services and remote desktops, may suffer from lower bandwidth, which results in slow application response.

3.1.4 Increasing Available Bandwidth

Bandwidth is one of the key factors that affect QoS in a network; the more bandwidth there is, the better the QoS will be. However, simply increasing bandwidth will not necessarily solve all congestion and flow problems.

Intuitively, the easiest way to increase bandwidth would seem to be to increase the link capacity of the network to accommodate all applications and users, allowing extra, spare bandwidth. Although this solution sounds simple, increasing bandwidth is expensive and takes time to implement. There are often technological limitations in upgrading to a higher bandwidth. In any event, ignoring QoS in favor of increasing bandwidth is at best a temporary fix. The faster the network, the faster traffic will increase and the problems return.

Figure illustrates a more rational approach of using advanced queuing and compression techniques. Queuing means to classify traffic into QoS classes and then prioritize each class according to its relative importance. The basic queuing mechanism is first-in, first-out (FIFO). Other queuing mechanisms provide additional granularity to serve voice and business-critical traffic. Such traffic types should receive sufficient bandwidth to support their application requirements. Voice traffic should receive prioritized forwarding, and the least important traffic should receive the unallocated bandwidth that remains after prioritized traffic is accommodated. Cisco IOS QoS software provides a variety of mechanisms that can be used to assign bandwidth priority to specific classes of traffic:

• FIFO

• Priority queuing (PQ) or custom queuing (CQ)

• Modified deficit round robin (MDRR)

• Distributed type of service (ToS)-based and QoS group-based weighted fair queuing (WFQ)

• Class-based weighted fair queuing (CBWFQ)

• Low-latency queuing (LLQ)
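The list above names Cisco mechanisms; as a conceptual illustration of what prioritized forwarding means, the sketch below implements a minimal strict-priority scheduler that always serves the highest-priority non-empty queue, in the spirit of PQ or the priority queue in LLQ. It is not the Cisco implementation, and the queue contents are hypothetical.

```python
# Minimal strict-priority dequeue: always serve the highest-priority non-empty
# queue, so voice leaves ahead of bulk data. Conceptual only; not the IOS scheduler.

from collections import deque

queues = {                      # lower number = higher priority
    0: deque(["voice1", "voice2"]),
    1: deque(["critical1"]),
    2: deque(["bulk1", "bulk2", "bulk3"]),
}

def dequeue_next():
    for priority in sorted(queues):
        if queues[priority]:
            return queues[priority].popleft()
    return None                 # all queues are empty

while (pkt := dequeue_next()) is not None:
    print(pkt)                  # voice1, voice2, critical1, bulk1, bulk2, bulk3
```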

A way to increase the available link bandwidth is to optimize link usage by compressing the payload of frames, which virtually increases the link capacity. Compression, however, also increases delay because of the complexity of compression algorithms. Using hardware compression can accelerate packet payload compression. Stacker and Predictor are two compression algorithms that are available in Cisco IOS software.

Another mechanism that is used for link bandwidth efficiency is header compression. Header compression is especially effective in networks where most packets carry small amounts of data (that is, where the payload-to-header ratio is small). Typical examples of header compression are TCP header compression and Real-Time Transport Protocol (RTP) header compression.

Note Payload compression is always end-to-end compression, and header compression is hop-by-hop compression.

Example: Using Available Bandwidth More Efficiently

In a network with remote sites that use interactive traffic and voice for daily business, bandwidth availability is an issue. In some regions, broadband bandwidth services are difficult to obtain or, in the worst case, are not available. This situation means that available bandwidth resources must be used efficiently. Advanced queuing techniques, such as CBWFQ or LLQ, and header compression mechanisms, such as TCP and RTP header compression, are needed to use the bandwidth much more efficiently.

Figure shows an example of how to use bandwidth efficiently using advanced queuing and header compression mechanisms. In this scenario, a low-speed WAN link connects two office sites. Both sites are equipped with IP phones, PCs, and servers that run interactive applications, such as terminal services. Because the available bandwidth is limited, an appropriate strategy for efficient bandwidth use must be determined and implemented.

Administrators must choose suitable queuing and compression mechanisms for the network based on the kind of traffic that is traversing the network. The example in Figure uses LLQ and RTP header compression to provide the optimal quality for voice traffic. CBWFQ and TCP header compression are effective for managing interactive data traffic.

3.1.5 Effects of End-to-end Delay and Jitter

End-to-end delay and jitter have a severe quality impact on the network as follows:

• End-to-end delay is the sum of all types of delays.

• Each hop in the network has its own set of variable processing and queuing delays, which can result in jitter.

Figure shows four types of delay:

• Processing delay: Processing delay is the time that it takes for a router (or Layer 3 switch) to take the packet from an input interface and put the packet into the output queue of the output interface. The processing delay depends on the following factors:

o CPU speed
o CPU use
o IP switching mode
o Router architecture
o Configured features on both the input and output interfaces

Note Many high-end routers or Layer 3 switches use advanced hardware architectures that speed up packet processing and do not require the main CPU to process the packets.

• Queuing delay: Queuing delay is the time that a packet resides in the output queue of a router. Queuing delay depends on the number of packets that are already in the queue and packet sizes. Queuing delay also depends on the bandwidth of the interface and the queuing mechanism.

• Serialization delay: Serialization delay is the time that it takes to place a frame on the physical medium for transport. This delay is typically inversely proportional to the link bandwidth.

• Propagation delay: Propagation delay is the time that it takes for the packet to cross the link from one end to the other. This time depends on the distance the signal must travel and the propagation characteristics of the medium rather than on the type of traffic being transmitted. For example, satellite links produce the longest propagation delay because of the distance to communications satellites.

Figure summarizes the impact of delay and jitter on the quality of networks.

The International Telecommunication Union (ITU) considers network delay for voice applications in Recommendation G.114. This recommendation defines three bands of one-way delay as shown in Figure . These recommendations are intended for national telecom administrations. Therefore, they are more stringent than would normally be applied in private voice networks. When the location and business needs of end users are well known to the network designer, more delay can prove acceptable. For private networks, 200 ms of delay is a reasonable goal and 250 ms a limit.
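As a rough illustration, the sketch below sums hypothetical values for the four delay components and compares the total against the private-network targets mentioned above (200 ms goal, 250 ms limit). Every component value is an assumption, not a measurement.

```python
# Hypothetical one-way delay budget: sum the delay components and compare the
# total against the private-network targets from the text (200 ms goal, 250 ms limit).

delays_ms = {
    "processing":    5,     # assumed total across the path
    "queuing":       30,    # assumed; depends on congestion and the queuing mechanism
    "serialization": 15,    # assumed; depends on link speeds and packet size
    "propagation":   70,    # assumed; depends on distance and medium
}

total = sum(delays_ms.values())
print(f"End-to-end delay = {total} ms")

if total <= 200:
    print("Within the 200 ms goal for private voice networks")
elif total <= 250:
    print("Above the goal but within the 250 ms limit")
else:
    print("Exceeds the 250 ms limit; interactivity will suffer")
```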

3.1.6 Reducing the Impact of Delay on Quality

When considering solutions to the delay problem, there are two things to note:

• Processing and queuing delays are related to devices and are bound to the behavior of the operating system.

• Propagation and serialization delays are related to the media.

There are many ways to reduce the delay at a router. Assuming that the router has enough power to make forwarding decisions rapidly, the following factors have the most influence on queuing and serialization delays:

• Average length of the queue

• Average length of packets in the queue

• Link bandwidth

Figure illustrates how administrators can accelerate packet dispatching for delay-sensitive flows in the following ways:

• Increase link capacity: Sufficient bandwidth causes queues to shrink so that packets do not wait long before transmittal. Increasing bandwidth reduces serialization time. This approach can be unrealistic because of the costs that are associated with the upgrade.

• Prioritize delay-sensitive packets: This approach can be more cost-effective than increasing link capacity. WFQ, CBWFQ, and LLQ can each serve certain queues first (this is a preemptive way of servicing queues).

• Reprioritize packets: In some cases, important packets need to be reprioritized when they are entering or exiting a device. For example, when packets leave a private network to transit an Internet service provider (ISP) network, the ISP may require that the packets be reprioritized.

• Compress payload: Payload compression reduces the size of packets, which virtually increases link bandwidth. Compressed packets are smaller and take less time to transmit. Compression uses complex algorithms that add delay. If you are using payload compression to reduce delay, make sure that the time that is needed to compress the payload does not negate the benefits of having less data to transfer over the link.

• Use header compression: Header compression is not as CPU-intensive as payload compression. Header compression reduces delay when used with other mechanisms. Header compression is especially useful for voice packets that have a bad payload-to-header ratio (a relatively large header in comparison to the payload), which is improved by reducing the header of the packet (RTP header compression).

By minimizing delay, network administrators can also reduce jitter (delay is more predictable than jitter and easier to reduce).

Figure shows examples of ways to increase bandwidth efficiency in a network. In this scenario, an ISP providing QoS connects the offices of the customer to each other. A low-speed link (512 kbps) connects the branch office while a higher-speed link (1024 kbps) connects the main office. The customer uses both IP phones and TCP/IP-based applications to conduct daily business. Because the branch office only has a bandwidth of 512 kbps, the customer needs an appropriate QoS strategy to provide the highest possible quality for voice and data traffic.

In this example, the customer needs to communicate with HTTP, FTP, e-mail, and voice services in the main office. Because the available bandwidth at the customer site is only 512 kbps, most traffic, but especially voice traffic, would suffer from end-to-end delays. In this example, the customer performs TCP and RTP header compression, LLQ, and prioritization of the various types of traffic. These mechanisms give voice traffic a higher priority than HTTP or e-mail traffic. In addition to these measures, the customer has chosen an ISP that supports QoS in the backbone. The ISP performs reprioritization on the customer's traffic, according to the QoS policy, so the traffic streams arrive on time at the customer's main office. This design guarantees that voice traffic has high priority and a guaranteed bandwidth of 128 kbps, FTP and e-mail traffic receive medium priority and a bandwidth of 256 kbps, and HTTP traffic receives low priority and a bandwidth of 64 kbps. Signaling and other management traffic uses the remaining 64 kbps.
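The allocations in this example can be sanity-checked with a few lines of arithmetic; the numbers below are simply those given in the scenario.

```python
# Check that the per-class guarantees in the example add up to the 512 kbps
# branch-office link. Values are taken directly from the scenario above.

link_kbps = 512
allocations_kbps = {
    "voice (LLQ, high priority)":   128,
    "FTP/e-mail (medium priority)": 256,
    "HTTP (low priority)":           64,
    "signaling and management":      64,
}

total = sum(allocations_kbps.values())
print(f"Allocated: {total} kbps of {link_kbps} kbps")
assert total == link_kbps, "allocations should not exceed the link rate"
```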

3.1.7 Packet Loss

After delay, the next most serious concern for networks is packet loss. Usually, packet loss occurs when routers run out of buffer space for a particular interface (output queue). Figure gives some examples of the results of packet loss in a converged network.

Figure illustrates a full interface output queue, which causes newly arriving packets to be dropped. The term that is used for such drops is “output drop” or “tail drop” (packets are dropped at the tail of the queue).

Routers might also drop packets for these other less common reasons:

• Input queue drop: The main CPU is busy and cannot process packets (the input queue is full).

• Ignore: The router runs out of buffer space.

• Overrun: The CPU is busy and cannot assign a free buffer to the new packet.

• Frame errors: The hardware detects an error in a frame; for example, cyclic redundancy check (CRC) errors, runts, and giants.

3.1.8 Congestion Management: Ways to Prevent Packet Loss

Packet loss is usually the result of congestion on an interface. Most applications that use TCP experience slowdown because TCP automatically adjusts to network congestion. Dropped TCP segments cause TCP sessions to reduce their window sizes. Some applications do not use TCP and cannot handle drops (fragile flows).

These approaches can prevent drops in sensitive applications:

• Increase link capacity to ease or prevent congestion.

• Guarantee enough bandwidth and increase buffer space to accommodate bursts of traffic from fragile flows. Several mechanisms in Cisco IOS QoS software can guarantee bandwidth and provide prioritized forwarding to drop-sensitive applications; examples include WFQ, CBWFQ, and LLQ.

• Prevent congestion by dropping lower-priority packets before congestion occurs. Cisco IOS QoS provides queuing mechanisms that start dropping lower-priority packets before congestion occurs; an example is weighted random early detection (WRED).

Figure summarizes these points with a graphic.

Cisco IOS QoS software also provides the following mechanisms to prevent congestion:

• Traffic policing: Traffic policing propagates bursts. When the traffic rate reaches the configured maximum rate, excess traffic is dropped (or remarked). The result is an output rate that appears as a saw-tooth with crests and troughs.

• Traffic shaping: In contrast to policing, traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate.

Shaping implies the existence of a queue and of sufficient memory to buffer delayed packets, while policing does not. Queuing is an outbound concept; packets going out an interface get queued and can be shaped. Only policing can be applied to inbound traffic on an interface. Ensure that you have sufficient memory when enabling shaping. In addition, shaping requires a scheduling function for later transmission of any delayed packets. This scheduling function allows you to organize the shaping queue into different queues. Examples of scheduling functions are CBWFQ and LLQ.

Figure illustrates the differences between policing and shaping.
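A simplified per-interval model captures the difference: both enforce the same configured rate, but the policer drops the excess while the shaper buffers it and sends it later. The arrival pattern and rate below are assumptions, and this is not a model of the Cisco token-bucket internals.

```python
# Simplified per-interval comparison of policing versus shaping. Both enforce
# the same rate; the policer drops excess traffic, the shaper queues it and
# sends it in later intervals, smoothing the output.

rate = 100                       # allowed output per interval (arbitrary units)
arrivals = [250, 0, 0, 180, 20]  # bursty offered load per interval (assumed)

def police(arrivals, rate):
    return [min(a, rate) for a in arrivals]            # excess is dropped

def shape(arrivals, rate):
    backlog, out = 0, []
    for a in arrivals:
        backlog += a                                    # excess is queued
        sent = min(backlog, rate)
        out.append(sent)
        backlog -= sent
    return out

print("policed:", police(arrivals, rate))   # bursts clipped (saw-tooth)
print("shaped: ", shape(arrivals, rate))    # smoothed: excess sent later
```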

Example: Packet Loss Solution

Figure shows a customer connected to the network via the WAN who is suffering from packet loss caused by interface congestion. The packet loss results in poor voice quality and slow data traffic. Upgrading the WAN link is not an option to increase quality and speed. Other options must be considered to solve the problem and restore network quality.

Congestion-avoidance techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network and internetwork bottlenecks before congestion becomes a problem. These techniques provide preferential treatment for premium (priority) traffic when there is congestion while concurrently maximizing network throughput and capacity use and minimizing packet loss and delay. For example, Cisco IOS QoS congestion-avoidance features include weighted random early detection (WRED) and LLQ as possible solutions.

The WRED algorithm allows for congestion avoidance on network interfaces by providing buffer management and allowing TCP traffic to decrease, or throttle back, before buffers are exhausted. Using WRED helps avoid tail drops and maximizes network use and TCP-based application performance. There is no such congestion avoidance for User Datagram Protocol (UDP)-based traffic, such as voice traffic. In the case of UDP-based traffic, methods such as queuing and compression techniques help to reduce and even prevent UDP packet loss. As the figure indicates, congestion avoidance combined with queuing and shaping can be a very powerful tool for avoiding packet drops.

3.2.1 What is QoS?

QoS is a generic term that refers to algorithms that provide different levels of quality to different types of network traffic. QoS technologies provide the elemental building blocks that will be used for future business applications in campus, WAN, and service provider networks. QoS manages the following network characteristics:

• Bandwidth: The rate at which traffic is carried by the network.

• Latency: The delay in data transmission from source to destination.

• Jitter: The variation in latency.

• Reliability: The percentage of packets discarded by a router.

Figure illustrates these points.

Simple networks process traffic with a FIFO queue. However, QoS enables you to provide better service to certain flows by either raising the priority of a flow or limiting the priority of another flow. It is also important to ensure that providing priority for one or more flows does not make other flows fail. For example, the network can delay e-mail packets several minutes with no one noticing but it cannot delay VoIP packets for more than a tenth of a second before users notice the delay.

Cisco IOS QoS is a tool box, and many tools can accomplish the same result. A simple analogy comes from the need to tighten a bolt: You can tighten a bolt with pliers or with a wrench. Both are equally effective, but these are different tools. It is the same with QoS tools. You will find that results can be achieved using different QoS tools depending on network traffic. Just as you would not use a screwdriver to drive a nail, you would not use an inappropriate QoS mechanism for managing flow.

QoS tools can help alleviate most congestion problems. However, many times there is too much traffic for the bandwidth available. In such cases, QoS may only be a temporary fix. A simple analogy would be pouring syrup from one bottle to another. If you pour syrup into the second container faster than the neck can accommodate, the syrup will overflow and run down the side of the bottle. You could solve the problem by pouring the syrup into a funnel, which would temporarily hold the extra syrup. But eventually, if you pour the syrup quickly, the funnel will fill up and overflow as well.

Congestion management, queue management, link efficiency, and traffic shaping and policing tools provide QoS within a single network element. These tools are listed in Figure .

Congestion Management

Because of the bursty nature of voice, video, and data traffic, the amount of traffic sometimes exceeds the speed of a link. At this point, what will the router do? Will it buffer traffic in a single queue and let the first packet in be the first packet out? Or, will the router put packets into different queues and service certain queues more often? Congestion-management tools address these questions. Tools include PQ, CQ, WFQ, and CBWFQ.

Queue Management

Because queues are finite in size, they can fill and overflow. When a queue is full, any additional packets cannot get into the queue and the tail of the flow is dropped. This is called tail drop. Routers cannot prevent packets from being dropped, even high-priority packets. Therefore, a mechanism is necessary to do two things:

1. Try to make sure that the queue does not fill up, so that there is room for high-priority packets.

2. Use some sort of criteria for dropping packets that are of lower priority before dropping higher-priority packets.

WRED provides both of these mechanisms.

Link Efficiency

Many times, low-speed links present an issue for smaller packets. For example, the serialization delay of a 1500-byte packet on a 56-kbps link is 214 ms. If a voice packet were to get behind this big packet, the delay budget for voice would be exceeded even before the packet left the router. Link fragmentation and interleaving allow this large packet to be segmented into smaller packets, with the voice packet interleaved among them. Interleaving is as important as the fragmentation. There is no reason to fragment the packet and have the voice packet go behind all the fragmented packets.

Note Serialization delay is the time that it takes to put a packet on the link. For the example just given, these mathematics apply:

Packet size: 1500 bytes x 8 bits/byte = 12,000 bits
Line rate: 56,000 bps
Result: 12,000 bits / 56,000 bps = 0.214 sec, or 214 ms
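The note's arithmetic generalizes to any packet size and line rate. The sketch below reproduces the 1500-byte, 56-kbps example and adds a small voice-sized packet for comparison; the 60-byte size is an assumed example value.

```python
# Serialization delay = packet size in bits / line rate in bits per second.

def serialization_delay_ms(packet_bytes: int, line_rate_bps: int) -> float:
    return packet_bytes * 8 / line_rate_bps * 1000

print(serialization_delay_ms(1500, 56_000))  # ~214.3 ms, the example in the note
print(serialization_delay_ms(60, 56_000))    # ~8.6 ms for a small voice-sized packet (assumed size)
```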

Another method of efficiency is by eliminating overhead bits. For example, RTP headers have a 40-byte header. With a payload of as little as 20 bytes, the overhead can be twice that of the payload in some cases. RTP header compression (also known as Compressed Real-Time Protocol header) reduces the header to a more manageable size.

Traffic Shaping and Policing

Shaping is used to create a traffic flow that limits the full bandwidth potential of the flow(s). This is used many times to prevent the overflow problem mentioned in the introduction. For instance, many network topologies use Frame Relay in a hub-and-spoke design. In this case, the central site normally has a high-bandwidth link (say, T1), while remote sites have a low-bandwidth link in comparison (say, 384 kbps). In this case, it is possible for traffic from the central site to overflow the low-bandwidth link at the other end. Shaping is a perfect way to pace traffic closer to 384 kbps to avoid the overflow of the remote link. Traffic above the configured rate is buffered for transmission later to maintain the rate configured.

Policing is similar to shaping, but it differs in one very important way: Traffic that exceeds the configured rate is not buffered (and normally is discarded).

Note Cisco's implementation of policing (committed access rate [CAR]) allows a number of actions besides discard to be performed. However, policing normally refers to the discard of traffic above a configured rate.

In summary, QoS is the ability of the network to provide better or “special” services to selected users and applications, but with a cost to other users and applications. In any bandwidth-limited network, QoS reduces jitter, delay, and packet loss for time-sensitive and mission-critical applications.

3.2.2 Congestion-Management Tools

One way network elements handle an overflow of arriving traffic is to use a queuing algorithm to sort the traffic and then determine some method of prioritizing it onto an output link. Cisco IOS software includes the following queuing tools:

• First-in, first-out (FIFO) queuing

• Priority queuing (PQ)

• Custom queuing (CQ)

• Flow-based weighted fair queuing (WFQ)

• Class-based weighted fair queuing (CBWFQ)

Each queuing algorithm was designed to solve a specific network traffic problem and has a particular effect on network performance, as described in the following sections.

Note Queuing algorithms take effect when congestion is experienced. By definition, if the link is not congested, then there is no need to queue packets. In the absence of congestion, all packets are delivered directly to the interface.

FIFO: Basic Store-and-Forward Capability

In its simplest form, FIFO queuing involves storing packets when the network is congested and forwarding them in order of arrival when the network is no longer congested. FIFO is the default queuing algorithm in some instances, thus requiring no configuration, but it has several shortcomings. Most important, FIFO queuing makes no decision about packet priority; the order of arrival determines bandwidth, promptness, and buffer allocation. Nor does it provide protection against ill-behaved applications (sources). Bursty sources can cause long delays in delivering time-sensitive application traffic, and potentially to network control and signaling messages. FIFO queuing was a necessary first step in controlling network traffic, but today's intelligent networks need more sophisticated algorithms. In addition, a full queue causes tail drops. This is undesirable because the dropped packet could be a high-priority packet. The router cannot prevent this packet from being dropped because there is no room in the queue for it (in addition to the fact that FIFO cannot tell a high-priority packet from a low-priority packet). Cisco IOS software implements queuing algorithms that avoid the shortcomings of FIFO queuing.

PQ: Prioritizing Traffic

PQ ensures that important traffic gets the fastest handling at each point where it is used. It was designed to give strict priority to important traffic. Priority queuing can flexibly prioritize according to network protocol (for example IP, IPX, or AppleTalk), incoming interface, packet size, source/destination address, and so on. In PQ, each packet is placed in one of four queues—high, medium, normal, or low—based on an assigned priority. Packets that are not classified by this priority list mechanism fall into the normal queue. During transmission, the algorithm gives higher-priority queues absolute preferential treatment over low-priority queues.

As shown in Figure , PQ puts data into four levels of queues: High, Medium, Normal, and Low. PQ is useful for making sure that mission-critical traffic traversing various WAN links gets priority treatment. For example, Cisco uses PQ to ensure that important Oracle-based sales reporting data gets to its destination ahead of other, less-critical traffic. PQ currently uses static configuration and thus does not automatically adapt to changing network requirements.

CQ: Guaranteeing Bandwidth

CQ allows various applications or organizations to share the network among applications with specific minimum bandwidth or latency requirements. In these environments, bandwidth must be shared proportionally between applications and users. You can use the Cisco CQ feature to provide guaranteed bandwidth at a potential congestion point, ensuring the specified traffic a fixed portion of available bandwidth and leaving the remaining bandwidth to other traffic. Custom queuing handles traffic by assigning a specified amount of queue space to each class of packets and then servicing the queues in a round-robin fashion.

As shown in Figure , CQ handles traffic by assigning a specified amount of queue space to each class of packet and then servicing up to 16 queues in a round-robin fashion. As an example, encapsulated Systems Network Architecture (SNA) requires a guaranteed minimum level of service. You could reserve half of available bandwidth for SNA data and allow the remaining half to be used by other protocols such as IP and Internetwork Packet Exchange (IPX).

The queuing algorithm places the messages in one of 17 queues (queue 0 holds system messages such as keepalives, signaling, and so on) and is emptied with weighted priority. The router services queues 1 through 16 in round-robin order, dequeuing a configured byte count from each queue in each cycle. This feature ensures that no application (or specified group of applications) achieves more than a predetermined proportion of overall capacity when the line is under stress. Like PQ, CQ is statically configured and does not automatically adapt to changing network conditions.

Flow-Based WFQ: Creating Fairness Among Flows

For situations in which it is desirable to provide consistent response time to heavy and light network users alike without adding excessive bandwidth, the solution is flow-based WFQ (commonly referred to as just WFQ). WFQ is one of Cisco's premier queuing techniques. It is a flow-based queuing algorithm that creates bit-wise fairness by allowing each queue to be serviced fairly in terms of byte count. For example, if queue 1 has 100-byte packets and queue 2 has 50-byte packets, the WFQ algorithm will take two packets from queue 2 for every one packet from queue 1. This makes service fair for each queue: 100 bytes each time the queue is serviced.

WFQ ensures that queues do not starve for bandwidth and that traffic gets predictable service. Low-volume traffic streams, which comprise the majority of traffic, receive increased service, transmitting the same number of bytes as high-volume streams. This behavior results in what appears to be preferential treatment for low-volume traffic, when in actuality it is creating fairness.

As shown in Figure , if high-volume conversations are active, WFQ makes their transfer rates and interarrival periods much more predictable.
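To make the byte-count fairness example concrete, the toy scheduler below always services the queue that has sent the fewest bytes so far. It is only an illustration of the idea, not the actual WFQ algorithm, and the packet counts are arbitrary.

```python
# Toy byte-count-fair scheduler in the spirit of the WFQ example above:
# with 100-byte packets in queue 1 and 50-byte packets in queue 2, queue 2
# ends up being serviced twice for every service of queue 1.

from collections import deque

queues = {
    "q1": deque([100] * 5),    # packet sizes in bytes
    "q2": deque([50] * 10),
}
bytes_sent = {name: 0 for name in queues}
order = []

while any(queues.values()):
    # pick the non-empty queue with the fewest bytes sent so far
    name = min((n for n in queues if queues[n]), key=lambda n: bytes_sent[n])
    bytes_sent[name] += queues[name].popleft()
    order.append(name)

print(order)   # q1, q2, q2, q1, q2, q2, ... (two q2 packets per q1 packet)
```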

Class-Based WFQ: Ensuring Network Bandwidth

CBWFQ is one of Cisco's newest congestion-management tools for providing greater flexibility. It will provide a minimum amount of bandwidth to a class as opposed to providing a maximum amount of bandwidth as with traffic shaping.

CBWFQ allows a network administrator to create minimum guaranteed bandwidth classes. Instead of providing a queue for each individual flow, the administrator defines a class that consists of one or more flows, each class with a guaranteed minimum amount of bandwidth.

CBWFQ prevents multiple low-priority flows from swamping out a single high-priority flow. For example, with WFQ, a video stream that needs half the bandwidth of a T1 will receive it if there are only two flows. But if more flows are added, the video stream gets less of the bandwidth because the WFQ mechanism creates fairness. If there are 10 flows, the video stream will get only 1/10th of the bandwidth, which is not enough.

CBWFQ provides the mechanism needed to guarantee the half of the bandwidth that video needs. The network administrator defines a class, places the video stream in the class, and tells the router to provide 768 kbps (half of a T1) service for the class. Video therefore gets the bandwidth that it needs. The remaining flows fall into a default class. The default class uses flow-based WFQ to fairly allocate the remainder of the bandwidth (half of the T1, in this example).
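The arithmetic behind this example is straightforward. In the sketch below, the T1 rate and the 768 kbps video guarantee come from the text, while the number of default-class flows is an assumption.

```python
# Rough arithmetic for the CBWFQ example: the video class is guaranteed half
# of a T1, and the default class shares the remainder fairly among its flows.

t1_kbps = 1544
video_class_kbps = 768                     # guaranteed minimum for the video class
default_pool_kbps = t1_kbps - video_class_kbps

default_flows = 9                          # assumed number of flows in the default class
per_flow_kbps = default_pool_kbps / default_flows

print(f"Video class:   {video_class_kbps} kbps guaranteed")
print(f"Default class: {default_pool_kbps} kbps shared, ~{per_flow_kbps:.0f} kbps per flow")
```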

3.2.3 Queue Management (Congestion-Avoidance Tools)

Congestion avoidance is a form of queue management. Congestion-avoidance techniques monitor network traffic loads in an effort to anticipate and avoid congestion at common network bottlenecks, as opposed to congestion-management techniques that operate to control congestion after it occurs. The primary Cisco IOS congestion avoidance tool is WRED.

The random early detection (RED) algorithm avoids congestion in internetworks before it becomes a problem. RED works by monitoring traffic load at points in the network and randomly discarding packets if the congestion begins to increase. The result of the drop is that the source detects the dropped traffic and slows its transmission. RED is primarily designed to work with TCP in IP internetwork environments.

WRED combines the capabilities of the RED algorithm with IP precedence. This combination provides for preferential traffic handling for higher-priority packets. It can selectively discard lower-priority traffic when the interface starts to get congested and can provide differentiated performance characteristics for different classes of service as shown in Figure . WRED is also Resource Reservation Protocol (RSVP) aware and can provide an integrated services controlled-load QoS.

As you know, each queue can house a finite number of packets. A full queue causes tail drops. Tail drops are dropped packets that could not fit into the queue because the queue was full. This is undesirable because the dropped packet may have been a high-priority packet and the router did not have a chance to queue it. If the queue is not full, the router can look at the priority of all arriving packets and drop the lower-priority packets, allowing high-priority packets into the queue. By managing the depth of the queue (the number of packets in the queue) by dropping specified packets, the router does its best to make sure that the queue does not fill and that tail drops do not happen. This allows the router to make a better decision as to which packets to drop when the queue depth increases.

WRED also helps prevent overall congestion in an internetwork. WRED uses a minimum threshold for each IP precedence level to determine when to drop a packet. (The queue length must exceed the minimum threshold for WRED to consider a packet as a candidate for dropping.) Consider this example with two classes of traffic: packets with IP precedence 0 have a minimum drop threshold of 20, and packets with IP precedence 1 have a minimum drop threshold of 22. If the queue length is 21, WRED drops packets with IP precedence 0, but packets with IP precedence 1 remain in the queue. If the queue depth deepens and exceeds 22, packets with IP precedence 1 can be dropped as well.

WRED uses an algorithm that raises the probability that the router can drop a packet as the queue depth rises from the minimum drop threshold to the maximum drop threshold. Above the maximum drop threshold, WRED drops all packets.
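A minimal sketch of that ramp-up behavior is shown below, with assumed thresholds and mark probability; these are not Cisco defaults, and real WRED works on a weighted average queue depth rather than the instantaneous depth.

```python
# Simplified WRED-style drop decision: never drop below the minimum threshold,
# always drop above the maximum threshold, and ramp the drop probability
# linearly in between. Threshold and probability values are assumed.

import random

def wred_drop(queue_depth: int, min_th: int = 20, max_th: int = 40,
              max_drop_prob: float = 0.1) -> bool:
    if queue_depth < min_th:
        return False                      # below the minimum threshold: no drops
    if queue_depth >= max_th:
        return True                       # above the maximum threshold: drop everything
    # linear ramp between the two thresholds
    prob = max_drop_prob * (queue_depth - min_th) / (max_th - min_th)
    return random.random() < prob

print(wred_drop(15), wred_drop(45))       # False, True
```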

3.2.4 Preparing to Implement QoS

There are three basic steps shown in Figure involved in implementing QoS on a network:

Step 1 Identify types of traffic and their requirements: Study the network to determine the type of traffic that is running on the network and then determine the QoS requirements needed for the different types of traffic.

Step 2 Define traffic classes: This activity groups the traffic with similar QoS requirements into classes. For example, three classes of traffic might be defined as voice, mission-critical, and best effort.

Step 3 Define QoS policies: QoS policies meet QoS requirements for each traffic class.

3.2.5 Step 1: Identify Types of Traffic and Their Requirements

The first step in implementing QoS is to identify the traffic on the network and then determine the QoS requirements and the importance of the various traffic types. This step provides some high-level guidelines for implementing QoS in networks that support multiple applications, including delay-sensitive and bandwidth-intensive applications. These applications may enhance business processes, but stretch network resources. QoS can provide secure, predictable, measurable, and guaranteed services to these applications by managing delay, delay variation (jitter), bandwidth, and packet loss in a network.

This step consists of these activities illustrated in Figure :

• Determine the QoS problems of users. Measure the traffic on the network during congested periods. Conduct a CPU use assessment on each of the network devices during busy periods to determine where problems might be occurring.

• Determine the business model and goals and obtain a list of business requirements. This activity helps define the number of classes that are needed and allows you to determine the business requirements for each traffic class.

• Define the service levels required by different traffic classes in terms of response time and availability. Questions to consider when defining service levels include: What is the impact on the business if the network delays a transaction by two or three seconds? A service level assignment will include the priority and the treatment a packet will receive. For example, you would assign voice applications a high service level (high priority, LLQ, and RTP header compression). You would assign low-priority data a lower service level (lower priority, WFQ, TCP header compression).

Determining which applications are business-critical and necessitate protection requires you to review all of the applications competing for network resources. Tools to analyze the traffic patterns in the network include NetFlow Accounting, Network-based Application Recognition (NBAR), and QoS Device Manager (QDM).

• NetFlow Accounting provides detail about network traffic and can be used to capture the traffic classification or precedence associated with each flow.

• NBAR is a classification tool that can identify traffic up to the application layer. It provides per-interface, per-protocol, and bidirectional statistics for each traffic flow traversing an interface. NBAR also performs subport classification, observing and identifying traffic beyond application ports.

• QDM is a web-based network management application that provides an easy-to-use graphical user interface for configuring and monitoring advanced IP-based QoS functionality in routers.

It is important to understand the characteristics of the applications that need protection. Some applications tend to be sensitive to latency or packet loss, while others are considered "aggressive" because they are bursty or consume a lot of bandwidth. If the application is bursty, determine if there is a constant burst or a small burst. Is the packet size of the application large or small? Is the application TCP or UDP based? The table "Understanding the Characteristics of Applications" provides some high-level guidelines.

3.2.6 Step 2: Define Traffic Classes

After identifying and measuring network traffic, use business requirements to perform the second step: define the traffic classes.

A class is a group of network flows that share similar characteristics. For example, an ISP might define classes to represent the different service levels offered to customers. An enterprise might define service level agreements (SLAs) that give different levels of service to various applications.

Because of its stringent QoS requirements, voice traffic is usually in a class by itself. Cisco has developed specific QoS mechanisms, such as LLQ, to ensure that voice always receives priority treatment over all other traffic.

After you define the applications with the most critical requirements, the remaining traffic classes are defined using business requirements.

Example: Define Traffic Classes

A typical enterprise might define the following five traffic classes as shown in Figure based on department requirements or based on the preponderance of a particular application in the network traffic:

• Voice: Absolute priority for VoIP traffic.

• Mission-critical: Small set of locally defined critical business applications. For example, a mission-critical application might be an order-entry database that needs to run 24 hours a day.

• Transactional: Database access, transaction services, interactive traffic, and preferred data services. Depending on the importance of the database application to the enterprise, you might give the database a large amount of bandwidth and a high priority. For example, your payroll department performs critical or sensitive work. Their importance to the organization determines the priority and amount of bandwidth you would give their network traffic.

• Best effort: Popular applications such as e-mail and FTP could each constitute a class. Your QoS policy might guarantee employees using these applications a smaller amount of bandwidth and a lower priority than other applications. Incoming HTTP queries to your company's external website might be a class that gets a moderate amount of bandwidth and runs at low priority.

• Scavenger: Unspecified traffic is treated as less than best effort. Scavenger applications, such as BitTorrent and other peer-to-peer applications, are served by this class.


3.2.7

Step 3: Define QoS Policy

In the third step, define a QoS policy for each traffic class as shown in Figure . A QoS policy typically defines the following:

• Discrete groups of network traffic (classes of service [CoS]).

• Metrics for regulating the amount of network traffic for each class. These metrics govern the traffic-measuring process (metering).

• Actions to be applied to a packet flow (per-hop behavior [PHB]).

• Any statistics gathering required for a CoS (for example, traffic that is generated by a customer or particular application).

When packets pass to your network, QoS evaluates the packet headers. The QoS policy determines the action that QoS takes.

Defining a QoS policy involves one or more of the following activities:

• Setting a minimum bandwidth guarantee

• Setting a maximum bandwidth limit

• Assigning priorities to each class

• Using QoS technologies, such as advanced queuing, to manage congestion

Example: Define QoS Policies As an example, consider a network that has a finite amount of bandwidth available. Using the traffic classes previously defined, QoS policies can be mandated based on the following priorities, with Priority 5 being the highest and Priority 1 being the lowest (a configuration sketch follows the list):

• Priority 5—Voice: Use LLQ to give voice priority always. Minimum bandwidth of 1 Mbps.

• Priority 4—Mission-critical: Use CBWFQ to prioritize critical-class traffic flows. Minimum bandwidth of 1 Mbps.

• Priority 3—Transactional: Use CBWFQ to prioritize transactional traffic flows. Minimum bandwidth of 1 Mbps.

• Priority 2—Best-effort: Use CBWFQ to prioritize best-effort traffic flows that are below mission-critical and voice. Maximum bandwidth of 500 kbps.

• Priority 1—Scavenger (less-than-best-effort): Use WRED to drop these packets whenever the network has a tendency toward congestion. Maximum bandwidth of 100 kbps.
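The following is a minimal MQC sketch of what such a policy might look like, assuming that class maps named VOICE, MISSION-CRITICAL, TRANSACTIONAL, BEST-EFFORT, and SCAVENGER have already been defined; the class and policy names are hypothetical, and the MQC itself is covered in detail later in this module.

Router(config)# policy-map WAN-EDGE
Router(config-pmap)# class VOICE
! LLQ: strict priority queue with 1 Mbps (1000 kbps)
Router(config-pmap-c)# priority 1000
Router(config-pmap-c)# class MISSION-CRITICAL
! CBWFQ: minimum bandwidth guarantee of 1 Mbps
Router(config-pmap-c)# bandwidth 1000
Router(config-pmap-c)# class TRANSACTIONAL
Router(config-pmap-c)# bandwidth 1000
Router(config-pmap-c)# class BEST-EFFORT
! Cap best-effort traffic at roughly 500 kbps
Router(config-pmap-c)# shape average 500000
Router(config-pmap-c)# class SCAVENGER
! Small CBWFQ guarantee plus WRED, so scavenger packets are dropped first as congestion builds
Router(config-pmap-c)# bandwidth 100
Router(config-pmap-c)# random-detect
Router(config-pmap-c)# exit
Router(config-pmap)# exit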


3.3.1

Three QoS Models

The following are three models for implementing QoS in a network as illustrated in Figure :

• Best-effort model: The best-effort model does not use QoS. If it is not important when or how packets arrive, the best-effort model is appropriate. Figure summarizes the best-effort model.

• Integrated services (IntServ): IntServ can provide very high QoS to IP packets. Essentially, IntServ defines a signaling process for applications to signal to the network that they require special QoS for a period and that bandwidth should be reserved. With IntServ, packet delivery is guaranteed. However, the use of IntServ can severely limit the scalability of a network. Figure is a simple illustration of the IntServ model.

• Differentiated services (DiffServ): DiffServ provides the greatest scalability and flexibility in implementing QoS in a network. Network devices recognize traffic classes and provide different levels of QoS to different traffic classes. Figure 3 is a simple illustration of the DiffServ model.


3.3.2

Best-Effort Model

The basic design of the Internet provides for best-effort packet delivery and provides no guarantees. This approach is still predominant on the Internet today and remains appropriate for most purposes. The best-effort model treats all network packets in the same way, so an emergency voice message is treated the same way a digital photograph attached to an e-mail is treated. Without QoS, the network cannot tell the difference between packets and, as a result, cannot treat packets preferentially. These points are summarized in Figure .

When you mail a letter using standard postal mail, you are using a best-effort model. Your letter is treated exactly the same as every other letter. With the best-effort model, the letter may never arrive, and, unless you have a separate notification arrangement with the letter recipient, you may never know that the letter did not arrive.

Figure summarizes the benefits and drawbacks of the best-effort model as follows:

• Benefits:

o The model has nearly unlimited scalability. The only way to reach scalability limits is to reach bandwidth limits, in which case all traffic is equally affected.

o You do not need to employ special QoS mechanisms to use the best-effort model. Best effort is the easiest and quickest model to deploy.

• Drawbacks:

o There are no guarantees of delivery. Packets will arrive whenever they can and in any order possible, if they arrive at all.

o No packets have preferential treatment. Critical data is treated the same as casual e-mail is treated.

3.3.3

IntServ Model

The needs of real-time applications, such as remote video, multimedia conferencing, visualization, and virtual reality, motivated the development of the IntServ architecture model (RFC 1633, June 1994). Figure illustrates a more detailed view of the operation of the IntServ model.

IntServ provides a way to deliver the end-to-end QoS that real-time applications require by explicitly managing network resources to provide QoS to specific user packet streams, sometimes called microflows. IntServ uses resource reservation and admission-control mechanisms as key building blocks to establish and maintain QoS. This practice is similar to a concept known as "hard QoS." Hard QoS guarantees traffic characteristics, such as bandwidth, delay, and packet-loss rates, from end to end. Hard QoS ensures both predictable and guaranteed service levels for mission-critical applications.

IntServ uses Resource Reservation Protocol (RSVP) explicitly to signal the QoS needs of an application’s traffic along devices in the end-to-end path through the network. If network devices along the path can reserve the necessary bandwidth, the originating application can begin transmitting. If the requested reservation fails along the path, the originating application does not send any data.

IntServ is a multiple-service model that can accommodate multiple QoS requirements. IntServ inherits the connection-oriented approach from telephony network design. Each individual communication must explicitly specify its traffic descriptor and requested resources to the network. The edge router performs admission control to ensure that available resources are sufficient in the network. The IntServ standard assumes that routers along a path set and maintain the state for each individual communication.

In the IntServ model, the application requests a specific kind of service from the network before sending data. The application informs the network of its traffic profile and requests a particular kind of service that can encompass its bandwidth and delay requirements. The application sends data only after it receives confirmation for bandwidth and delay requirements from the network. The application also sends data that lies within its described traffic profile.

The network performs admission control based on information from the application and available network resources. The network commits to meeting the QoS requirements of the application as long as the traffic remains within the profile specifications. The network fulfills its commitment by maintaining the per-flow state and then performing packet classification, policing, and intelligent queuing based on that state.

The QoS feature set in Cisco IOS software includes these features that provide controlled traffic volume services:

• RSVP, which can be used by applications to signal their QoS requirements to the router

• Intelligent queuing mechanisms, which can be used with RSVP to provide these QoS service levels:

o Guaranteed-rate: Allows applications to reserve bandwidth to meet their requirements. For example, a VoIP application can reserve 32 kbps of end-to-end bandwidth using this type of service. Cisco IOS QoS uses LLQ with RSVP to provide a guaranteed-rate type of service.

o Controlled-load: Allows applications to have low delay and high throughput, even during times of congestion. For example, adaptive real-time applications, such as the playback of a recorded conference, can use this service. Cisco IOS QoS uses RSVP with WRED to provide a controlled-load type of service.
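Neither service level is available unless RSVP is enabled on the interfaces along the path. As a minimal, hypothetical sketch (the interface name and bandwidth values are illustrative only), RSVP is enabled per interface with a limit on total reservable bandwidth and on the largest single-flow reservation:

Router(config)# interface Serial0/0
! Allow up to 128 kbps of reservations in total, and up to 64 kbps for any single flow
Router(config-if)# ip rsvp bandwidth 128 64
! WFQ (or LLQ) on the interface provides the queuing that honors the reservations
Router(config-if)# fair-queue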

IntServ Functions As a means of illustrating the function of the IntServ model, Figure shows the control and data planes. Besides end-to-end signaling, IntServ requires several functions in order to be available on routers and switches along the network path. These functions include the following:

• Admission control: Admission control determines whether a new flow requested by users or systems can be granted the requested QoS without affecting existing reservations in order to guarantee end-to-end QoS. Admission control ensures that resources are available before allowing a reservation.

• Classification: Entails using a traffic descriptor to categorize a packet within a specific group to define that packet and make it accessible for QoS handling on the network. Classification is pivotal for policy techniques that select packets for different types of QoS service.

• Policing: Takes action, including possibly dropping packets, when traffic does not conform to its specified characteristics. Policing is defined by rate and burst parameters, as well as by actions for in-profile and out-of-profile traffic.

• Queuing: Queuing accommodates temporary congestion on an interface of a network device by storing excess packets in buffers until access to the bandwidth becomes available.

• Scheduling: A QoS component, the QoS scheduler, negotiates simultaneous requests for network access and determines which queue receives priority. IntServ uses round robin scheduling. Round robin scheduling is a time-sharing approach in which the scheduler gives a short time slice to each job before moving on to the next job, polling each task round and round. This way, all the tasks advance, little by little, on a controlled basis. Packet scheduling enforces the reservations by queuing and scheduling packets for transmission.

Benefits and Drawbacks of the IntServ Model The IntServ model has several benefits and some drawbacks as shown in Figure :

• Benefits:

o IntServ supports admission control that allows a network to reject or downgrade new RSVP sessions if one of the interfaces in the path has reached the limit (that is, if all bandwidth that can be reserved is booked).

o RSVP signals QoS requests for each individual flow. In the request, the authorized user (authorization object) and needed traffic policy (policy object) are sent. The network can then provide guarantees to these individual flows.

o RSVP informs network devices of flow parameters (IP addresses and port numbers). Some applications use dynamic port numbers, such as H.323-based applications, which can be difficult for network devices to recognize. Network-Based Application Recognition (NBAR) is a mechanism that complements RSVP for applications that use dynamic port numbers but do not use RSVP.

• Drawbacks:

o There is continuous signaling because of the stateful RSVP architecture, which adds to the bandwidth overhead. RSVP continues signaling for the entire duration of the flow. If the network changes, or links fail and routing convergence occurs, the network may no longer be able to support the reservation.

o The flow-based approach is not scalable to large implementations, such as the public Internet, because RSVP has to track each individual flow. This circumstance makes end-to-end signaling difficult. A possible solution is to combine IntServ with elements from the DiffServ model to provide the needed scalability.


3.3.4

RSVP and the IntServ QoS Model

Cisco QoS architecture uses RSVP as one of several methods for providing Call Admission Control (CAC) for voice in a VoIP network. The RSVP method for CAC is the only method that makes an actual bandwidth reservation for each allowed voice call. Other CAC methods can only make a decision that is a guess based on the state of the network at the initiation of the call. The use of RSVP not only provides CAC, it also guarantees QoS for the duration of the call regardless of changing network conditions. RSVP is the method used by Cisco Unified CallManager 5.0 to perform CAC.

If resources are available, RSVP accepts a reservation and installs a traffic classifier to assign a temporary QoS class for that traffic flow in the QoS forwarding path. The traffic classifier tells the QoS forwarding path how to classify packets from a particular flow and what forwarding treatment to provide.

RSVP is a network control protocol that enables applications to obtain prescriptive QoS for the application data flows. Such a capability recognizes that different applications have different network performance requirements. Some applications, including the more traditional interactive and batch applications, require reliable delivery of data but do not impose any stringent requirements for the timeliness of delivery. Newer application types, including videoconferencing, IP telephony, and other forms of multimedia communication require the opposite: Data delivery must be timely but not necessarily reliable. Thus, RSVP was intended to provide IP networks with the ability to support the divergent performance requirements of differing application types.

Figure lists the characteristics of RSVP. RSVP is an IP protocol that uses IP protocol ID 46 and TCP and UDP port 3455.

It is important to note that RSVP is not a routing protocol. RSVP works in conjunction with routing protocols and installs the equivalent of dynamic access control lists (ACLs) along the routes that routing protocols calculate. Thus, implementing RSVP in an existing network does not require migration to a new routing protocol.

In RSVP, a data flow is a sequence of datagrams that have the same source, destination (regardless of whether that destination is one or more physical machines), and QoS requirements. QoS requirements are communicated through a network via a flow specification, which is a data structure that is used by internetwork hosts to request special services from the internetwork. A flow specification describes the level of service that is required for that data flow. RSVP focuses on the following two main traffic types:

• Rate-sensitive traffic: Traffic that requires a guaranteed and constant (or nearly constant) transmission rate from its source to its destination. An example of such an application is H.323 videoconferencing. RSVP enables constant-bit-rate service in packet-switched networks via its rate-sensitive level of service. This service is sometimes referred to as guaranteed-bit-rate service.

• Delay-sensitive traffic: Traffic that requires timeliness of delivery and that varies its rate accordingly. MPEG-II video, for example, averages about 3 to 7 Mbps, depending on the rate at which the picture is changing. RSVP services supporting delay-sensitive traffic are referred to as controlled-delay service (non-real-time service) and predictive service (real-time service).

3.3.5

RSVP Operation

Unlike routing protocols, RSVP manages flows of data rather than making decisions for individual datagrams. Data flows consist of discrete sessions between specific source and destination machines. The definition of a session is a simplex flow of datagrams to a particular destination and transport layer protocol. The following data identifies the sessions: destination address, protocol ID, and destination port. RSVP supports both unicast and multicast simplex sessions.

In the context of RSVP, QoS is an attribute specified in flow specifications that determine how participating entities (routers, receivers, and senders) handle data interchanges. RSVP specifies the QoS used by both hosts and routers. Hosts use RSVP to request a QoS level from the network on behalf of an application data stream. Routers use RSVP to deliver QoS requests to other routers along the path(s) of the data stream. In doing so, RSVP maintains the router and host state to provide the requested service.

To initiate an RSVP multicast session, a receiver first joins the multicast group specified by an IP destination address by using the Internet Group Management Protocol (IGMP). After the receiver joins a group, a potential sender starts sending RSVP path messages to the IP destination address. The receiver application receives a path message and starts sending appropriate reservation-request messages specifying the desired flow descriptors using RSVP. After the sender application receives a reservation-request message, the sender starts sending data packets.

How does RSVP work? Figure illustrates the RSVP daemon found in each node capable of resource reservation. Each node that uses RSVP has two local decision modules:

• Admission control: Admission control keeps track of the system resources and determines whether the node has sufficient resources to supply the requested QoS. The RSVP daemon monitors both of these checking actions. If either check fails, the RSVP program returns an error message to the application that originated the request. If both checks succeed, the RSVP daemon sets parameters in the packet classifier and packet scheduler to obtain the requested QoS.

• Policy control: Policy control determines whether the user has administrative permission to make the reservation.

If both Admission control and Policy control succeed, the daemon then sets parameters in two entities, packet classifier and packet scheduler.

• Packet classifier: The RSVP packet classifier determines the route and QoS class for each packet.

• Packet scheduler: The RSVP packet scheduler orders packet transmission to achieve the promised QoS for each stream. The scheduler allocates resources for transmission on the particular data link layer medium used by each interface.

• Routing Process: The RSVP daemon also communicates with the routing process to determine the path to send its reservation requests and to handle changing memberships and routes. Each router participating in resource reservation passes incoming data packets to a packet classifier and then queues the packets as necessary in a packet scheduler.

Once the packet classifier determines the route and QoS class for each packet, and the scheduler allocates resources for transmission, RSVP passes the request to all the nodes (routers and hosts) along the reverse data paths to the data sources. At each node, the RSVP program applies a local decision procedure called admission control to determine whether that node can supply the requested QoS. If admission control succeeds in providing the required QoS, the RSVP program sets the parameters of the packet classifier and scheduler to obtain the desired QoS. If admission control fails at any node, the RSVP program returns an error indication to the application that originated the request. Routers along the reverse data stream path repeat this reservation until the reservation merges with another reservation for the same source stream.

RSVP is not a routing protocol, but rather an internet control protocol. RSVP relies on the underlying routing protocols to find where it should deliver reservation requests. When an RSVP-managed flow changes its path, the routing module will notify the RSVP module of the route changes. In this way, RSVP can quickly adjust the resource reservation to new routes.

Reservation Merging When a potential receiver initiates a reservation request, the request does not need to travel all the way to the source of the sender. Instead, it travels upstream until it meets another reservation request for the same source stream. The request then merges with that reservation. Figure shows how the reservation requests merge as they progress up the multicast tree.

Reservation merging leads to the primary advantage of RSVP, scalability. This allows a large number of users to join a multicast group without significantly increasing the data traffic. RSVP scales to large multicast groups. The average protocol overhead decreases as the number of participants increases.

Note Practical issues of latency and propagation make it impossible to deploy RSVP or any new protocol at the same moment to all points throughout the entire Internet. Indeed, RSVP can never be deployed everywhere. To support the connection of RSVP networks through non-RSVP networks, RSVP supports tunneling, which occurs automatically through non-RSVP clouds.

Example As an example, Figure illustrates the basic principles of how RSVP performs CAC and bandwidth reservation in a network from a functionality perspective. In this example, RSVP is enabled on each router interface in the network.

In this scenario, an IntServ-enabled WAN connects three Cisco IP phones to each other and to the Cisco Unified CallManager 5.0. Because bandwidth is limited on the WAN links, RSVP determines whether the requested bandwidth for a successful call is available. For performing CAC, Cisco Unified CallManager 5.0 uses RSVP.

An RSVP-enabled voice application wants to reserve 20 kbps of bandwidth for a data stream from IP-Phone 1 to IP-Phone 2.

Recall that RSVP does not perform its own routing; instead, RSVP uses underlying routing protocols to determine whether to carry reservation requests. As routing changes paths to adapt to changes in topology, RSVP adapts reservations to the new paths wherever reservations are in place.

The RSVP protocol attempts to establish an end-to-end reservation by checking for available bandwidth resources on all RSVP-enabled routers along the path from IP-Phone 1 to IP-Phone 2. As the RSVP messages progress through the network from Router R1 via R2 to R3, the available RSVP bandwidth is decremented by 20 kbps on the router interfaces. For voice calls, a reservation must be made in both directions.

The available bandwidth on all interfaces is sufficient to accept the new data stream, so the reservation succeeds and the application is notified.
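On an RSVP-enabled router such as R2 in this example, the reservation accounting can be checked with standard show commands. The output is not reproduced here, but commands along these lines are the usual way to confirm that the available RSVP bandwidth on each interface has been decremented by the 20-kbps reservation:

R2# show ip rsvp interface
R2# show ip rsvp installed
R2# show ip rsvp reservation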


3.3.6

The DiffServ Model

The differentiated services (DiffServ) architecture specifies a simple, scalable, and coarse-grained mechanism for classifying and managing network traffic and providing QoS guarantees on modern IP networks. For example, DiffServ can provide low-latency guaranteed service (GS) to critical network traffic such as voice or video while providing simple best-effort traffic guarantees to non-critical services such as web traffic or file transfers.

The DiffServ design overcomes the limitations of both the best-effort and IntServ models. The DiffServ model is described in Internet Engineering Task Force (IETF) RFC 2474 and RFC 2475. DiffServ can provide an “almost guaranteed” QoS while still being cost-effective and scalable.

The concept of soft QoS is the basis of the DiffServ model. You will recall that IntServ (hard QoS) uses signaling in which the end-hosts signal their QoS needs to the network. DiffServ does not use signaling but works on the provisioned-QoS model, where network elements are set up to service multiple classes of traffic, each with varying QoS requirements. By classifying flows into aggregates (classes) and providing appropriate QoS for the aggregates, DiffServ avoids significant complexity, cost, and scalability issues. For example, DiffServ groups all TCP flows into a single class and allocates bandwidth for that class, rather than for the individual flows as hard QoS (IntServ) would do. In addition to classifying traffic, DiffServ minimizes signaling and state-maintenance requirements on each network node.

The hard QoS model (IntServ) provides for a rich end-to-end QoS solution, using end-to-end signaling, state maintenance (for each RSVP flow and reservation), and admission control at each network element. This approach consumes significant overhead, thus restricting its scalability. On the other hand, DiffServ is not an end-to-end QoS strategy because it cannot enforce end-to-end guarantees, but DiffServ QoS is a more scalable approach to implementing QoS. This is because DiffServ maps many applications into a small set of classes. DiffServ assigns each class a similar set of QoS behaviors and enforces and applies QoS mechanisms on a hop-by-hop basis, uniformly applying global meaning to each traffic class to provide both flexibility and scalability.

Figure shows the key characteristics of the DiffServ model against a network with many nodes to reinforce the concept of per-hop behavior (PHB).

DiffServ divides network traffic into classes based on business requirements. Each of the classes can then be assigned a different level of service. As the packets traverse a network, each of the network devices identifies the packet class and services the packet according to that class. It is possible to choose many levels of service with DiffServ. For example, voice traffic from IP phones is usually given preferential treatment over all other application traffic, e-mail is generally given best-effort service, and nonbusiness traffic can either be given very poor service or blocked entirely.
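Using the Modular QoS CLI introduced later in this module, such a set of per-class treatments typically begins with marking each class with a DSCP value at the network edge. The following is a hedged sketch only; the class, policy, and interface names are hypothetical, and the class maps are assumed to have been defined already:

Router(config)# policy-map MARK-EDGE
Router(config-pmap)# class VOICE
! Expedited Forwarding for IP phone traffic
Router(config-pmap-c)# set ip dscp ef
Router(config-pmap-c)# class MAIL
! E-mail gets best-effort (default) treatment
Router(config-pmap-c)# set ip dscp default
Router(config-pmap-c)# class SCAVENGER
! Nonbusiness traffic marked down to CS1 (less than best effort)
Router(config-pmap-c)# set ip dscp cs1
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface FastEthernet0/0
Router(config-if)# service-policy input MARK-EDGE

Downstream routers can then apply their per-hop behaviors by matching on these DSCP values rather than re-examining the applications themselves.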

DiffServ works like a packet delivery service. You request (and pay for) a level of service when you send your package. Throughout the package network, the level of service is recognized and your package is given either preferential or normal service, depending on what you requested.

As shown in Figure , the DiffServ model has several benefits and some drawbacks:

• Benefits:

o Highly scalable

o Provides many different levels of quality

• Drawbacks:

o No absolute guarantee of service quality

o Requires a set of complex mechanisms to work in concert throughout the network


3.4.1

Methods for Implementing QoS Policy

Figure shows the methods that have been used for implementing QoS policies over the years.

A few years ago, the only way to implement QoS in a network was by using the command-line interface (CLI) to configure individual QoS policies at each interface. This is a time-consuming and error-prone task involving cutting and pasting configurations from one interface to another.

Cisco introduced the Modular QoS CLI (MQC) to simplify QoS configuration by making configurations modular. MQC provides a building-block approach that uses a single module repeatedly to apply a policy to multiple interfaces.

Cisco AutoQoS represents innovative technology that simplifies the challenges of network administration by reducing QoS complexity, deployment time, and cost to enterprise networks. Cisco AutoQoS incorporates value-added intelligence in Cisco IOS software and Cisco Catalyst software to provision and assist in the management of large-scale QoS deployments.


The first phase of Cisco AutoQoS VoIP offers straightforward capabilities to automate VoIP deployments for customers that want to deploy IP telephony but lack the expertise and staffing to plan and deploy IP QoS and IP services. The second phase, Cisco AutoQoS Enterprise, adds these features but is supported only on router interfaces. Cisco AutoQoS Enterprise uses NBAR to discover the traffic. After this discovery phase, the AutoQoS process can then configure the interface to support up to 10 traffic classes.
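As a hypothetical illustration (the interface, bandwidth value, and exact command availability depend on the platform and IOS release), the two phases are invoked along these lines; they are alternatives, not applied together on the same interface:

! Phase 1 - AutoQoS VoIP: generate a VoIP-oriented policy in one step
Router(config)# interface Serial0/0
Router(config-if)# bandwidth 768
Router(config-if)# auto qos voip

! Phase 2 - AutoQoS Enterprise: let NBAR-based discovery profile the traffic first,
! then generate the class-based policy once a representative sample has been collected
Router(config-if)# auto discovery qos
Router(config-if)# auto qos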

Customers can easily configure, manage, and successfully troubleshoot QoS deployments by using the Cisco Router and Security Device Manager (SDM) QoS wizard. The Cisco SDM QoS wizard provides centralized QoS design, administration, and traffic monitoring that scales to large QoS deployments.

The table in Figure summarizes a comparison of these methods.


3.4.2

Configuring QoS at the CLI

Cisco does not recommend the legacy CLI method for initially implementing QoS policies. The CLI method is time-consuming and prone to errors. Nonetheless, QoS implementation at the CLI remains the choice for some administrators, especially for fine-tuning and adjusting QoS properties.

The legacy CLI method of QoS implementation has the following limitations:

• It is the hardest and most time-consuming way to configure QoS.

• It offers little opportunity for fine-tuning, and there is less granularity for supported QoS features than with other QoS configuration techniques.

• QoS functionality has limited options; for example, you cannot fully separate traffic classification from the QoS mechanisms.

To implement QoS this way, use the console or Telnet to access the CLI. The CLI approach is simple but allows only basic features to be configured. You must first build a QoS policy (traffic policy) and then apply the policy to the interface. Figure summarizes these points.

Figure lists these guidelines for building a QoS policy (traffic policy):

• Identify the traffic patterns in your network by using a packet analyzer. This activity gives you the ability to identify the traffic types, for example, IP, TCP, User Datagram Protocol (UDP), DECnet, AppleTalk, and Internetwork Packet Exchange (IPX).

• After you have performed the traffic identification, start classifying the traffic. For example, separate the voice traffic class from the business-critical traffic class.

• For each traffic class, specify the priority for the class. For example, voice is assigned a higher priority than business-critical traffic.

• After applying the priorities to the traffic classes, select a proper QoS mechanism, such as queuing, compression, or a combination of both. This choice determines which traffic leaves the device first and how traffic leaves the device.

Example: Legacy CLI QoS Figure shows a possible implementation scenario for legacy CLI followed by a sample configuration. In this scenario, a low-speed WAN link connects the office site to the central site. Both sites are equipped with PCs and servers that run interactive applications, such as terminal services. Because the available bandwidth is limited, you must devise an appropriate strategy for efficient bandwidth use.

In a network with remote sites that use interactive traffic for daily business, bandwidth availability is an issue. The available bandwidth resources need to be used efficiently. Because only simple services are run, basic queuing techniques, such as PQ or CQ, and header compression mechanisms, such as TCP header compression, are needed to use the bandwidth much more efficiently.

Note PQ and CQ are traditional Cisco priority mechanisms that have been mostly replaced by more advanced mechanisms, such as weighted fair queuing (WFQ), CBWFQ, and LLQ.

Depending on the kind of traffic in the network, you must choose suitable queuing and compression mechanisms. In the example in Figure , CQ and TCP header compression are a strategy for interactive traffic quality assurance.

The output in Figure illustrates complex configuration tasks that can be involved with using the CLI as follows:

• Each QoS feature needs a separate line. CQ needs two lines: one line that sets up the queue list, in this example for Telnet traffic, and a second line that binds the queue list to an interface and activates the list.

• PPP multilink configuration needs four lines and another line for TCP header compression.
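The figure itself is not reproduced here, but a configuration along these lines would implement the CQ and header-compression pieces of the example. The interface and list numbers are hypothetical, and the PPP multilink portion is omitted:

! One line defines the custom queue list and places Telnet traffic in queue 1
Router(config)# queue-list 1 protocol ip 1 tcp telnet
Router(config)# interface Serial0/0
! A second line binds the queue list to the interface and activates it
Router(config-if)# custom-queue-list 1
! TCP header compression for the interactive traffic
Router(config-if)# ip tcp header-compression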

3.4.3

Modular QoS CLI


The Cisco MQC allows users to create traffic policies and then attach these policies to interfaces. A QoS policy contains one or more traffic classes and one or more QoS features. A traffic class classifies traffic, and the QoS features in the QoS policy determine how to treat the classified traffic.

The Cisco MQC offers significant advantages over the legacy CLI method for implementing QoS. By using MQC, a network administrator can significantly reduce the time and effort it takes to configure QoS in a complex network. Rather than configuring “raw” CLI commands interface by interface, the administrator develops a uniform set of traffic classes and QoS policies that are applied on interfaces.

The use of the Cisco MQC allows the separation of traffic classification from the definition of QoS policy. This capability enables easier initial QoS implementation and maintenance as new traffic classes emerge and QoS policies for the network evolve. Figure summarizes these points.

Figure summarizes the three steps to follow when configuring QoS using Cisco MQC configuration. Each step answers a question concerning the classes assigned to different traffic flows:

• Build a class map: What traffic do we care about? The first step in QoS deployment is to identify the interesting traffic, that is, classify the packets. This step defines a grouping of network traffic (a class map in MQC terminology) with various classification tools: access control lists (ACLs), IP addresses, IP precedence, IP Differentiated Services Code Point (DSCP), IEEE 802.1p, Multiprotocol Label Switching experimental bits (MPLS EXP), and Cisco Network-Based Application Recognition (NBAR). In this step, you configure traffic classification by using the class-map command.

• Policy map: What will happen to the classified traffic? Decide what to do with a group once you identify its traffic. This step is the actual construction of a QoS policy (a policy map in MQC terminology) by choosing the group of traffic (class map) on which to perform QoS functions. Examples of QoS functions are queuing, dropping, policing, shaping, and marking. In this step, you configure each traffic policy by associating the traffic class with one or more QoS features using the policy-map command.

• Service policy: Where will the policy apply? Apply the appropriate policy map to the desired interfaces, sub-interfaces, or Asynchronous Transfer Mode (ATM) or Frame Relay Permanent Virtual Circuits (PVCs). In this step, you attach the traffic policy to inbound or outbound traffic on interfaces, subinterfaces, or virtual circuits by using the service-policy command.

Figure is a simple example of using the three-step process. The classification step is modular and independent of what happens to the packet after it is classified. For example, a defined policy map contains various class maps and the configuration within a policy map can be changed independently from the configuration of a defined class map (and vice versa). Further, use of the no policy-map command can disable an entire QoS policy.
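The figure is not reproduced here, but a minimal sketch of the three steps might look like the following; the ACL number, class, policy, and interface names are hypothetical:

! Step 1: build a class map that identifies the interesting traffic
Router(config)# class-map match-all INTERACTIVE
Router(config-cmap)# match access-group 110
Router(config-cmap)# exit
! Step 2: build a policy map that states what happens to that class
Router(config)# policy-map BRANCH-WAN
Router(config-pmap)# class INTERACTIVE
Router(config-pmap-c)# bandwidth 256
Router(config-pmap-c)# exit
Router(config-pmap)# exit
! Step 3: attach the policy to an interface with a service policy
Router(config)# interface Serial0/0
Router(config-if)# service-policy output BRANCH-WAN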


3.4.4

Modular QoS CLI Step 1: Configuring Class Maps

Step 1 requires you to tell the router what traffic gets QoS and to what degree. An ACL is the traditional way to define any traffic for a router. A class map defines the traffic into groups with classification templates that are used in policy maps, where QoS mechanisms are bound to classes. You can configure up to 256 class maps on a router. For example, you might assign video applications to a class map called Video, and e-mail application traffic to a class map called Mail. You could also create a class map called VoIP and include all of the VoIP protocols. Figure summarizes these points.

Use the class-map global configuration command to create a class map. Identify class maps with case-sensitive names. All subsequent references to the class map must use the same name. Each class map contains one or more conditions that define which packets belong to the class. Figure shows the command syntax for the class-map command.

There are two ways of processing conditions when there is more than one condition in a class map:

• Match all: Must meet all conditions to bind a packet to the class.

• Match any: Meet at least one condition to bind the packet to the class.

The default match strategy of class maps is match all.

The match commands specify various criteria for classifying packets. Packets are checked to determine whether they match the criteria that are specified in the match commands. If a packet matches the specified criteria, that packet is considered a member of the class and is forwarded according to the QoS specifications set in the traffic policy. Packets that fail to meet any of the matching criteria are classified as members of the default traffic class. The Cisco MQC does not necessarily require that users associate a single traffic class to one traffic policy. Multiple types of traffic can be associated with a single traffic class using the match any command.

The match not command inverts the specified condition. This command specifies a match criterion value that prevents packets from being classified as members of a specified traffic class. All other values of that particular match criterion belong to the class. At least one match command should be used within the class-map configuration mode.

The description command is used for documenting a comment about the class map.

There are many ways to classify traffic when configuring class maps. Figure shows one possible way: using access control lists (ACLs) to specify the traffic that the QoS policy needs to match. Class maps support standard ACLs and extended ACLs.

The match access-group command allows an ACL to be used as a match criterion for traffic classification.

Example: Traffic Classes Defined In the following example, two traffic classes are created and their match criteria are defined. For the first traffic class called class1, access control list (ACL) 101 is used as the match criterion. For the second traffic class called class2, ACL 102 is used as the match criterion. The router checks the packets against the contents of these ACLs to determine if they belong to the class.

Router(config)# class-map class1
Router(config-cmap)# match access-group 101
Router(config-cmap)# exit
Router(config)# class-map class2
Router(config-cmap)# match access-group 102
Router(config-cmap)# exit
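The module does not show the contents of ACLs 101 and 102. Hypothetical definitions such as the following would complete the example; here 101 matches web traffic and 102 matches a typical RTP/UDP port range:

Router(config)# access-list 101 permit tcp any any eq www
Router(config)# access-list 102 permit udp any any range 16384 32767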

Example: match not Command

The match not command is used to specify a specific QoS policy value that is not used as a match criterion. When using the match not command, all other values of that QoS policy become successful match criteria.

For instance, if the match not qos-group 4 command is issued in class-map configuration mode, the specified class will accept all QoS group values except 4 as successful match criteria.

In the following traffic class, all protocols except IP are considered successful match criteria:

Router(config)# class-map noip
Router(config-cmap)# match not protocol ip
Router(config-cmap)# exit


3.4.5

Step 2: Configuring Policy Maps

Figure describes the concept of a policy map. The policy-map command creates a traffic policy. The purpose of a traffic policy is to configure the QoS features that should be associated with the traffic that has been classified into a traffic class or classes. You can then assign as much bandwidth or set whatever priority you need for that class. A traffic policy contains three elements: a case-sensitive name, a traffic class (specified with the class command), and the QoS policies.

The policy-map command specifies the name of a traffic policy (for example, issuing the policy-map class1 command would create a traffic policy named class1). After you issue the policy-map command, you enter policy-map configuration mode. You can then enter the name of a traffic class. You must be in the policy-map configuration mode to enter QoS features that apply to the traffic matching the named class.

A packet can match only one traffic class within a traffic policy. If a packet matches more than one traffic class in the traffic policy, the first traffic class defined in the policy is used.

On the other hand, the Cisco MQC does not necessarily require that you associate only one traffic class to a single traffic policy. When packets match to more than one match criterion, multiple traffic classes can be associated with a single traffic policy. The next topic will explain the concept of nested class maps.

You configure service policies with the policy-map command. One policy map can have up to 256 classes, using the class command with the name of a preconfigured class map. Figure shows the policy-map and class command syntax.

A nonexistent class can also be used within the policy-map configuration mode if the match condition is specified after the name of the class. The running configuration will reflect such a configuration by using the match-any strategy and inserting a full class map configuration.

The Configuration Modes table shows starting and resulting configuration modes for the class-map, policy-map, and class commands.

Configuration Modes

Starting Configuration Mode    Command       Resulting Configuration Mode
Router(config)#                class-map     Router(config-cmap)#
Router(config)#                policy-map    Router(config-pmap)#
Router(config-pmap)#           class         Router(config-pmap-c)#

All traffic not identified in any of the class maps used within the policy map becomes part of the default class, "class-default." This class has no QoS guarantees by default. The default class, when used on output, can use one FIFO queue or flow-based WFQ. The default class is part of every policy map, even if a default class is not included in the configuration.

Example: Traffic Policy Created In the following example, a traffic policy called policy1 is defined to contain policy specifications for the two classes—class1 and class2. The match criteria for these classes were defined in the traffic classes.

For class1, the policy includes a bandwidth allocation request and a maximum packet count limit for the queue reserved for the class. For class2, the policy specifies only a bandwidth allocation request.

Router(config)# policy-map policy1
Router(config-pmap)# class class1
Router(config-pmap-c)# bandwidth 3000
Router(config-pmap-c)# queue-limit 30
Router(config-pmap-c)# exit
Router(config-pmap)# class class2
Router(config-pmap-c)# bandwidth 2000
Router(config-pmap-c)# exit

Example: Enforcing a Sub-rate In this example, you need to enforce a sub-rate (that is, 10 Mbps virtual pipe on a 1 Gbps link) on a particular link, while offering minimum bandwidth guarantees to applications such as voice, mission critical applications, and video within that virtual pipe as follows:

• Voice: 1 Mbps

• Mission critical applications traffic: 2 Mbps

• Video: 5 Mbps

• Remaining bandwidth allocated to best-effort traffic within the defined 10 Mbps pipe

Below is the sample configuration.

Router(config)# policy-map CHILD
Router(config-pmap)# class VOICE
Router(config-pmap-c)# priority 1000
Router(config-pmap-c)# class MCA
Router(config-pmap-c)# bandwidth 2000
Router(config-pmap-c)# class VIDEO
Router(config-pmap-c)# bandwidth 5000
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# policy-map PARENT
Router(config-pmap)# class class-default
Router(config-pmap-c)# shape average 10000000
Router(config-pmap-c)# service-policy CHILD

Note If a particular application does not use the bandwidth, it can be shared among the active applications, so no bandwidth is wasted.

Example: Default Traffic Class Configuration Unclassified traffic (traffic that does not meet the match criteria specified in the traffic classes) is treated as belonging to the default traffic class.

If the user does not configure a default class, packets are still treated as members of the default class. However, by default, the default class has no enabled features. Therefore, packets belonging to a default class with no configured features have no QoS functionality. These packets are then placed into a FIFO queue and forwarded at a rate determined by the available underlying link bandwidth. This FIFO queue is managed by tail drop. (Tail drop is a means of avoiding congestion that treats all traffic equally and does not differentiate between classes of service. Queues fill during periods of congestion. When the output queue is full and tail drop is in effect, packets are dropped until the congestion is eliminated and the queue is no longer full).

The following example configures a traffic policy for the default class of the traffic policy called policy1. The default class (which is always called class-default) has these characteristics: 10 queues for traffic that does not meet the match criteria of other classes whose policy is defined by the traffic policy policy1, and a maximum of 20 packets per queue before tail drop is enacted to handle additional queued packets.

Router(config)# policy-map policy1
Router(config-pmap)# class class-default
Router(config-pmap-c)# fair-queue 10
Router(config-pmap-c)# queue-limit 20


Example: class-map match-any and class-map match-all Commands

These examples illustrate the difference between the class-map match-any command and the class-map match-all command. The match-any and match-all options determine how packets are evaluated when multiple match criteria exist. Packets must either meet all of the match criteria (match-all) or one of the match criteria (match-any) in order to be considered a member of the traffic class.

This example shows a traffic class configured with the class-map match-all command:

Router(config)# class-map match-all cisco1
Router(config-cmap)# match protocol ip
Router(config-cmap)# match qos-group 4
Router(config-cmap)# match access-group 101

If a packet arrives on a router with traffic class called cisco1 configured on the interface, the router evaluates the packet to determine if it matches the IP protocol, QoS group 4, and access group 101. If the packet meets all three of these match criteria, the packet matches traffic class cisco1.

The following example shows a traffic class configured with the class-map match-any command:

Router(config)# class-map match-any cisco2
Router(config-cmap)# match protocol ip
Router(config-cmap)# match qos-group 4
Router(config-cmap)# match access-group 101

In the traffic class called cisco2, the router evaluates the match criteria consecutively until a successful match criterion is located. The packet evaluation determines whether IP protocol can be used as a match criterion. If IP protocol can be used as a match criterion, the packet is matched to traffic class cisco2. If IP protocol is not a successful match criterion, then QoS group 4 is evaluated as a match criterion. Each matching criterion is evaluated to see if the packet matches that criterion. Once a successful match occurs, the packet is classified as a member of traffic class cisco2. If the packet matches none of the specified criteria, the packet is classified as a member of the default traffic class.

Note that the class-map match-all command requires that all of the match criteria be met for the packet to be considered a member of the specified traffic class (a logical AND operator). In the example, protocol IP AND QoS group 4 AND access group 101 must all be successful match criteria. However, only one match criterion must be met for a packet evaluated against the class-map match-any command to be classified as a member of the traffic class (a logical OR operator). In the example, protocol IP OR QoS group 4 OR access group 101 must be a successful match criterion.


3.4.6

Step 3: Attaching a Service Policy to Interfaces

As with an ACL, you must apply the policy map to the specific interface you want it to affect. You can apply the policy map in either the inbound or outbound direction. The last configuration step when configuring QoS mechanisms using the Cisco MQC is to attach a policy map to the inbound or outbound packets using the service-policy command.

The router immediately verifies the parameters that are used in the policy map. If there is a mistake in the policy map configuration, the router displays a message explaining what is wrong with the policy map.

The sample configuration in Figure shows how a policy map is used to separate HTTP from other traffic. HTTP is guaranteed 2 Mbps of bandwidth. All other traffic belongs to the default class and is guaranteed to get 6 Mbps of bandwidth.
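The figure is not reproduced here, but a configuration matching that description might look like the following sketch. The class, policy, and interface names are hypothetical, and the NBAR-based match protocol http classifier is only one possible way to identify HTTP:

Router(config)# class-map match-all HTTP
Router(config-cmap)# match protocol http
Router(config-cmap)# exit
Router(config)# policy-map WEB-POLICY
Router(config-pmap)# class HTTP
! Guarantee HTTP 2 Mbps (2000 kbps)
Router(config-pmap-c)# bandwidth 2000
Router(config-pmap-c)# exit
Router(config-pmap)# class class-default
! All remaining traffic is guaranteed 6 Mbps
Router(config-pmap-c)# bandwidth 6000
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface FastEthernet0/0
Router(config-if)# service-policy output WEB-POLICY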

Example: Traffic Policy Attached to an Interface The following example shows how to attach an existing traffic policy (that was created in the example "Traffic Policy Created") to an interface. After you define a traffic policy with the policy-map command, you can attach it to one or more interfaces to specify the traffic policy for those interfaces by using the service-policy command in interface configuration mode. Although you can assign the same traffic policy to multiple interfaces, each interface can have only one traffic policy attached at the input and only one traffic policy attached at the output.

Router(config)# interface e1/1
Router(config-if)# service-policy output policy1
Router(config-if)# exit
Router(config)# interface fa1/0/0
Router(config-if)# service-policy output policy1
Router(config-if)# exit

3.4.7

Nested Class Maps

There are two reasons to use the match class-map command. One reason is maintenance; if a long traffic class currently exists, using the traffic class as a match criterion is simply easier than retyping the same traffic class configuration.

The more prominent reason for the match class-map command is to allow users to use match-any and match-all statements in the same traffic class. If you want to combine match-all and match-any characteristics in a traffic policy, create a traffic class using one match criteria evaluation instruction (either match-any or match-all) and then use this traffic class as a match criterion in a traffic class that uses a different match criteria type.

A simple library analogy illustrated in Figure will serve to clarify the concept. Let us assume you are looking in a library database for a book that covers the salaries of either football players or hockey players. In Boolean terms, your search is represented by the phrase (salaries AND [football players OR hockey players]). This is a ‘nested search’, one search within another. The part of the search enclosed in brackets, football players OR hockey players, will be performed first, followed by the AND operation. This search will retrieve items on salaries and football players as well as items on salaries and hockey players.

The Venn diagram shows six different sectors, three of which overlap to a degree. The overlapping area in the center includes salaries, hockey players, and football players. The area to the right of center contains items on salaries and hockey players. The overlapping area to the left of center contains items on salaries and football players. Only the left of center and the right of center match our criteria.

This next example pertains to creating class maps. Suppose A, B, C, and D were all separate match criteria, and you wanted traffic matching A, B, or both C and D to be classified as belonging to the traffic class. In Boolean terms, the nested equation is (A or B or [C and D]). Without the nested traffic class, traffic would either have to match all four of the match criteria (A and B and C and D) or match any of the match criteria (A or B or C or D) to be considered part of the traffic class. You would not be able to combine "and" (match-all) and "or" (match-any) statements within the traffic class, and you would therefore be unable to configure the desired behavior.

The elegant solution is to create one traffic class using match-all for C and D (which we will call criterion E), and then create a new match-any traffic class using A, B, and E. The new traffic class would have the correct evaluation sequence (A or B or E, which would also be A or B or [C and D]). The desired traffic class configuration is complete.

The only method of including both match-any and match-all characteristics in a single traffic class is to use the match class-map command. To combine match-any and match-all characteristics into a single class, a traffic class created with the match-any instruction must use a class configured with the match-all instruction as a match criterion (through the match class-map command), or vice versa. These next two examples help illustrate the concept.

Example: Nested Traffic Class for Maintenance

In the following example, the traffic class called class1 has the same characteristics as the traffic class called class2, with the exception that traffic class class1 has added a destination address as a match criterion. Rather than configuring traffic class class1 line by line, a user can enter the match class-map class2 command. This command allows all of the characteristics in the traffic class called class2 to be included in the traffic class called class1, and the user can simply add the new destination address match criterion without reconfiguring the entire traffic class.

Router(config)# class-map match-any class2
Router(config-cmap)# match protocol ip
Router(config-cmap)# match qos-group 3
Router(config-cmap)# match access-group 2
Router(config-cmap)# exit
Router(config)# class-map match-all class1
Router(config-cmap)# match class-map class2
Router(config-cmap)# match destination-address mac 1.1.1
Router(config-cmap)# exit

Example: Nested Traffic Class to Combine match-any and match-all Characteristics in One Traffic Class

The following example shows how to combine the characteristics of two traffic classes, one with match-any and one with match-all characteristics, into one traffic class with the match class-map command. The resulting traffic class, class4, requires a packet to match one of the following three criteria to be considered a member of class4: both IP protocol and QoS group 4 (the match-all class class3), destination MAC address 1.1.1, or access group 2.

In this example, only the traffic class called class4 is used with the traffic policy called policy1.

Router(config)# class-map match-all class3
Router(config-cmap)# match protocol ip
Router(config-cmap)# match qos-group 4
Router(config-cmap)# exit
Router(config)# class-map match-any class4
Router(config-cmap)# match class-map class3
Router(config-cmap)# match destination-address mac 1.1.1
Router(config-cmap)# match access-group 2
Router(config-cmap)# exit

Router(config)# policy-map policy1
Router(config-pmap)# class class4

Router(config-pmap-c)# police 8100 1500 2504 conform-action transmit exceed-action set-qos-transmit 4

Router(config-pmap-c)# exit

3.4.8 MQC Example

The example in Figure shows a network using interactive traffic and VoIP with an applied Cisco MQC configuration. In this scenario, the office site connects over a low-speed WAN link to the central site. Both sites are equipped with IP phones, PCs, and servers that run interactive applications, such as terminal services. Because the available bandwidth is limited, an administrator must implement an appropriate strategy for efficient bandwidth use.

The strategy must meet the requirements of voice traffic, which needs high priority, low delay, and constant bandwidth along the communication path, and of interactive traffic, which needs adequate bandwidth and low delay.

Classification requirements also affect the strategy. Classification and policing of the important traffic streams, applied through traffic parameters such as priority, queuing, and bandwidth, are the major elements of the traffic policy that improve the overall quality. Finally, the traffic policy is applied to the WAN interface of the routers.

Figure shows an example of the complex configuration tasks involved in using Cisco MQC on the router Office.
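
The figure is not reproduced here. As a simplified sketch of the kind of configuration described, assuming illustrative class and policy names, match criteria, bandwidth values, and serial0/0 as the WAN interface, the Office router configuration might resemble the following:

Router(config)# class-map match-all VOICE
Router(config-cmap)# match protocol rtp audio
Router(config-cmap)# exit
Router(config)# class-map match-all INTERACTIVE
Router(config-cmap)# match protocol citrix
Router(config-cmap)# exit
Router(config)# policy-map WAN-EDGE
Router(config-pmap)# class VOICE
Router(config-pmap-c)# priority 128
Router(config-pmap-c)# exit
Router(config-pmap)# class INTERACTIVE
Router(config-pmap-c)# bandwidth 64
Router(config-pmap-c)# exit
Router(config-pmap)# class class-default
Router(config-pmap-c)# fair-queue
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface serial0/0
Router(config-if)# service-policy output WAN-EDGE

In this sketch, the priority command places voice traffic in the low-latency queue with a guaranteed 128 kbps, while the bandwidth command reserves 64 kbps for the interactive class during congestion.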


3.4.9 Basic MQC Verification Commands

To display and verify basic QoS classes and policies configured by using the Cisco MQC, use the commands listed in the MQC Verification Commands table. Figure shows the command syntax.

MQC Verification Commands

Command                      Description
show class-map               Displays the configured class maps
show policy-map              Displays the configured policy maps
show policy-map interface    Displays the policy map applied to an interface
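
For example, to verify the class maps and policy configured earlier and the policy attached to interface e1/1, the commands would be entered as follows:

Router# show class-map
Router# show policy-map policy1
Router# show policy-map interface e1/1

The last command also displays per-class packet counters, which makes it especially useful for verifying that traffic is being classified as intended.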


3.5.1 Configuring QoS with Cisco SDM QoS Wizard

Cisco Router and Security Device Manager (SDM) allows you to easily configure routing, security, and QoS services on Cisco routers while helping to enable proactive management through performance monitoring. Whether you are deploying a new router or installing Cisco SDM on an existing router, you can remotely configure and monitor these routers without using the Cisco IOS software CLI. The Cisco SDM GUI assists users who are not experts in Cisco IOS software with day-to-day operations by providing easy-to-use smart wizards, automating router security management, and offering comprehensive online help and tutorials.

Cisco SDM smart wizards guide you step by step through router and security configuration workflow by systematically configuring the LAN and WAN interfaces, firewall, Network Address Translation (NAT), intrusion prevention system (IPS), IPsec virtual private networks (VPNs), routing, and QoS. Cisco SDM smart wizards can intelligently detect incorrect configurations and propose fixes. Online help embedded within Cisco SDM contains appropriate background information in addition to step-by-step procedures to help you enter correct data in Cisco SDM.

The QoS configuration section of Cisco SDM provides several features for defining traffic classes and configuring QoS policies in the network, as summarized in Figure . Cisco SDM supports a wide range of Cisco IOS software releases and is available free of charge on Cisco router models from the Cisco 830 Router to the Cisco 7301 Router. Cisco SDM comes preinstalled on all new Cisco 850 Series, Cisco 870 Series, Cisco 1800 Series, Cisco 2800 Series, and Cisco 3800 Series Integrated Services Routers.

The Cisco SDM QoS wizard offers easy and effective optimization of LAN, WAN, and VPN bandwidth and application performance for different business needs (for example, voice and video, enterprise applications, and web). There are three predefined categories of business needs:

• Real-time: Voice over IP (VoIP) traffic and voice-signaling traffic.


• Business-critical: Business traffic important to a typical corporate environment. Some of the protocols included in this traffic category are Citrix, SQLNet, Notes, LDAP, and Secure LDAP. Routing protocols included in this category are EGP, BGP, EIGRP, and RIP.

• Best-effort: Remaining traffic.

In addition, the Cisco SDM QoS wizard supports Network-Based Application Recognition (NBAR), which provides real-time validation of application use of WAN bandwidth against predefined service policies, as well as QoS policing and traffic monitoring.
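
As a related sketch (not part of the SDM wizard itself), NBAR protocol discovery can also be enabled and inspected directly from the CLI; the interface name here is an assumption:

Router(config)# interface serial0/0
Router(config-if)# ip nbar protocol-discovery
Router(config-if)# end
Router# show ip nbar protocol-discovery interface serial0/0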

Figure shows the main page of Cisco SDM which consists of the following two sections:

• About Your Router: This section displays the hardware and software configuration of the router.

• Configuration Overview: This section displays basic traffic statistics.

There are two important icons in the top horizontal navigation bar:

• Configure icon: Opens the configuration page.

• Monitor icon: Opens the page where the status of the tunnels, interfaces, and device can be monitored.


3.5.2 Creating a QoS Policy

The following seven steps are used to create a QoS policy with the Cisco SDM GUI (the accompanying figures show each screen):

Step 1 Enter configuration mode by clicking Configure in the top toolbar of the Cisco SDM window.

Step 2 Click Quality of Service in the Tasks toolbar at the left side of the Cisco SDM window.

Step 3 Click the Create QoS Policy tab.

Step 4 Click the Launch QoS Wizard button to launch the wizard. Figure shows the QoS Wizard screen.

Step 5 Cisco SDM QoS Wizard informs you that it will configure two classes: real-time and business-critical. Click Next to proceed.

Step 6 Figure shows the interface selection screen. Select the interface to which you want the QoS policy applied. Click Next to proceed.

Step 7 The QoS Policy Generation screen (Figure ) prompts you to enter the bandwidth percentages for each class. After you enter the numbers, Cisco SDM automatically calculates the percentage remaining for the best-effort class and the bandwidth requirements for each class. Click Next to proceed to the Summary screen and review the QoS configuration. (A sketch of the style of configuration the wizard generates follows these steps.)
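
Behind the GUI, the wizard delivers an MQC configuration to the router. The following is a rough sketch of the style of configuration generated, with class names, protocol matches, interface, and percentages chosen here purely for illustration; the actual class names that SDM creates (such as SDMVoice-FastEthernet0/1) appear in the summary described in the next section:

Router(config)# class-map match-any REALTIME
Router(config-cmap)# match protocol rtp audio
Router(config-cmap)# exit
Router(config)# class-map match-any BUSINESS-CRITICAL
Router(config-cmap)# match protocol citrix
Router(config-cmap)# match protocol ldap
Router(config-cmap)# exit
Router(config)# policy-map SDM-QoS-Policy
Router(config-pmap)# class REALTIME
Router(config-pmap-c)# priority percent 33
Router(config-pmap-c)# exit
Router(config-pmap)# class BUSINESS-CRITICAL
Router(config-pmap-c)# bandwidth percent 22
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface fastethernet0/1
Router(config-if)# service-policy output SDM-QoS-Policy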


3.5.3 Reviewing the QoS Configuration

Before delivering the configuration to the router, review the QoS configuration summary. In the Summary window, Cisco SDM shows the QoS configuration that the wizard has built. Review the QoS configuration carefully to ensure that there are no errors, because the wizard applies this configuration to your router as soon as you click the Finish button.

Figures 1 and 2 present the settings for the SDMVoice-FastEthernet0/1, SDMVideo-FastEthernet0/1, SDMSignal-FastEthernet0/1, and SDMSVideo-FastEthernet0/1 classes.

Figures 3 and 4 present the settings for the SDMTrans-FastEthernet0/1, SDMManage-FastEthernet0/1, SDMRout-FastEthernet0/1, and SDMBulk-FastEthernet0/1 classes.

Figure 5 presents the settings that will be applied with the SDMScave-FastEthernet0/1 class.

The last step is to click Finish to deliver the configuration to the router. The Commands Delivery Status window shown in Figure 6 displays the progress of delivering the configuration to the router. When the window indicates that the commands have been delivered to the router, click OK to complete your configuration.


3.5.4 Monitoring QoS Status

After QoS is configured, you can use the following steps to monitor its status as shown in Figure :

Step 1 To enter monitor mode, click the Monitor icon in the toolbar at the top of the Cisco SDM window.

Step 2 Click QoS Status in the Tasks toolbar at the left side of the Cisco SDM window.


The traffic statistics appear in bar charts, based on the combination of the selected interval and the QoS parameters chosen for monitoring, as follows:

• The interval can be changed using the View Interval drop-down menu. The options available are Now, Every 1 Minute, Every 5 Minutes, and Every 1 Hour.

• QoS parameters for monitoring include Direction (input and output) and Statistics (bandwidth, bytes, and packets dropped).

3.6.1 Lab 3.1 Preparing for QoS

Lab Activity

Lab Exercise: Lab 3.1 Preparing for QoS

The Quality of Service (QoS) labs for Modules 3, 4, and 5 have been designed to rely on traffic generation and measurement tools for testing purposes. Traffic generation will be used to create streams of traffic that flow through your network unidirectionally. These labs use the Cisco Pagent image and toolset. Pagent is a set of traffic generation and testing tools that runs on top of a Cisco IOS image. To boot a router with Pagent, acquire the image through the Cisco Networking Academy program, load it into the router's flash memory, and enter a license key when prompted during system boot. When using the lab configurations, it is necessary to load the Pagent image on R4. This lab guides you through creating configurations for the QoS labs.


3.6.2 Lab 3.2 Installing SDM

Lab Activity

Lab Exercise: Lab 3.2 Installing SDM

In this lab, you will prepare a router for access via the Cisco Security Device Manager (SDM), using some basic commands, to allow connectivity from the SDM to the router. You will then install the SDM application locally on a host computer. Finally, you will install SDM onto the flash memory of a router.

3.6.3 Lab 3.3 Configuring QoS with SDM


Lab Activity

Lab Exercise: Lab 3.3 Configuring QoS with SDM

Cisco Security Device Manager employs a basic Quality of Service (QoS) configuration wizard that can be used to apply some basic QoS tools to a router’s interfaces.

Normally, you would configure and deploy QoS tools on the command-line interface (CLI) without the benefit of a graphical user interface (GUI). However, SDM’s QoS wizard provides a useful introduction to QoS tools. Thus, we begin our exploration of QoS tools using the SDM GUI.

Summary

This module describes the difference between nonconverged and converged networks and how converged IP networks can suffer from poor quality of service (QoS) for several reasons, including low bandwidth, excessive delay, jitter, and packet loss. The module explains the steps used to implement Cisco IOS QoS and how implementing QoS decreases the effect of poor QoS factors with the use of queuing, compression, prioritization, and link efficiency mechanisms.

The three basic QoS models, best effort, Integrated Services (IntServ), and Differentiated Services (DiffServ), are defined: the best-effort model does not provide any QoS, the IntServ model relies on applications signaling their QoS requirements to the network, and DiffServ is the most scalable model for implementing the QoS required for modern converged networks.

The four different implementation techniques for QoS are the legacy command-line interface (CLI) method used for basic QoS deployments, the Modular QoS CLI (MQC) for high-level deployments and QoS fine-tuning, Cisco AutoQoS for general QoS setups, and the Cisco Router and Security Device Manager (SDM) QoS wizard, a web-based application for QoS deployments in the enterprise environment.