<Course Title> Case Study INTERNAL USE ONLY


TRANSCRIPT

Page 1

<Course Title> Case Study

Page 2

Case Study

2 www.juniper.net

VoIP Case Study Topology

The slide shows the topology that serves as the basis for the CoS configuration case study. The overall goal is to analyze typical Junos CoS configurations for edge and core routers in the context of a VoIP application.

This case study is based on a unidirectional CoS deployment. In other words, the configurations shown focus exclusively on moving voice and data traffic from the subscribers on the left to the server and voice switch (PABX) on the right. While CoS does not mandate symmetric treatment of data flows, it is rare for only one half of a CoS solution to be deployed in a production network.

The unidirectional approach taken here has the goal of keeping the configuration as simple (and short) as possible while still providing concrete examples of functional CoS configuration.

The slide highlights the topic we discuss next.

Page 3

Ingress Node Criteria: Part 1

The slide outlines the application specifics and guidelines for the ingress node. The case study differentiates ingress node processing from transit and egress node processing to emphasize the different roles typically played by these types of devices. General items to note include the following:

• Classification and policing: The ingress node must use multifield classification to correctly identify voice over IP (VoIP)-related traffic. The slide lists the protocol and port information needed to create a multifield classifier. The ingress node must classify non-VoIP traffic with IP precedence 0 as best effort (BE), and it must police BE traffic, marking excess traffic with a high loss priority. Ingress classification must also correctly recognize IP precedence 6 and 7 traffic as network control. Note that you normally configure a policer's burst size to the larger of ten times the medium's MTU or 3–5 milliseconds' worth of line-rate traffic. In this example, we deploy a small burst-size parameter of 3000 bytes to help demonstrate correct policer operation.

Page 4

Ingress Node Criteria: Part 2

The following list is a continuation from the previous page:

• Scheduling and congestion control: After classification and policing, you must configure the ingress node with schedulers for all supported forwarding classes. In the case of the BE class, you must configure RED profiles that factor loss priority into discard decisions, so that BE traffic in excess of the configured policer is more likely to be discarded during periods of congestion. Note that you must ensure that the BE class cannot exceed a 1-Mbps transmission rate.

• Packet header rewrite: You must configure the ingress node with a DiffServ code point (DSCP)-based rewrite table that marks traffic with CoS values so that downstream nodes can use the more efficient behavior aggregate (BA) classification to classify inbound traffic. In addition, you must ensure that your rewrite configuration permits the distinction of low and high loss priority for the BE class in downstream nodes.

Page 5

Ingress Node Classification and Policing

The CoS functional block diagram on the slide shows the CoS processing functionality configured first. In this case, the diagram shows that multifield classification and policing will be added to the ingress node. The result of this functionality will identify a packet’s forwarding class and loss priority.

Page 6

Ingress Node Multifield Classifier

The slide displays a Junos OS firewall filter that correctly classifies traffic received from the customer site. Terms 1 and 2 classify VoIP-related signaling and media as EF in accordance with the case study criteria. Term 3 matches non-VoIP traffic with an IP precedence value of 0 (also known as routine), sends it to a policer named police-be, and classifies it as BE. The final term simply accepts all remaining traffic; this accommodates the need to classify traffic with precedence values of 6 or 7 as network control because it does not override the IP precedence classifier that is in effect by default.
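The filter itself is not reproduced in this transcript. A minimal sketch consistent with the description might look like the following. The filter name (mf-classify), policer name (police-be), and term logic come from the text; the specific protocol and port matches are placeholders, because the actual values appear only on the slide:

```junos
firewall {
    family inet {
        filter mf-classify {
            term voip-signaling {
                from {
                    protocol udp;
                    port 5060;             /* hypothetical signaling port */
                }
                then {
                    forwarding-class expedited-forwarding;
                    accept;
                }
            }
            term voip-media {
                from {
                    protocol udp;
                    port 16384-32767;      /* hypothetical media (RTP) range */
                }
                then {
                    forwarding-class expedited-forwarding;
                    accept;
                }
            }
            term data {
                from {
                    precedence routine;    /* IP precedence 0 */
                }
                then {
                    policer police-be;     /* hand to policer, then classify as BE */
                    forwarding-class best-effort;
                    accept;
                }
            }
            term rest {
                then accept;               /* precedence 6/7 falls through to the
                                              default IP precedence classifier */
            }
        }
    }
}
```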

Page 7

Policer Configuration

The top of the slide shows the police-be policer configuration. In-profile traffic is handed back to the filter term, where it is classified as BE. Traffic in excess of the policer profile is marked with a high loss priority and is also handed back to the filter term, where it is likewise classified as BE.

Firewall Filter Application

The mf-classify firewall filter comes into effect when it is applied to the customer-facing interface. Note that the filter is applied in the input direction so as to correctly classify traffic received from the customer router as it enters the service provider’s network.
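A sketch of the policer and the filter application, consistent with the description above, follows. The 3000-byte burst size is from the text; the bandwidth limit and the customer-facing interface name are assumptions for illustration:

```junos
firewall {
    policer police-be {
        if-exceeding {
            bandwidth-limit 1m;        /* assumed rate for this sketch */
            burst-size-limit 3000;     /* small 3000-byte burst from the text */
        }
        then loss-priority high;       /* mark excess traffic; do not discard */
    }
}
interfaces {
    fe-0/0/0 {                         /* hypothetical customer-facing interface */
        unit 0 {
            family inet {
                filter {
                    input mf-classify; /* classify traffic as it enters the network */
                }
            }
        }
    }
}
```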

Page 8

Ingress Node Scheduling and WRED

The CoS functional block diagram on the slide reflects the CoS processing functionality configured next. The diagram shows that the next item on the configuration check list is ingress node scheduling and weighted random early detection (WRED). Note that schedulers are defined for each forwarding class and that WRED can be configured to act on a packet’s loss priority.

Page 9

Configuring Schedulers

Schedulers are a critical component of CoS on Junos platforms. Schedulers control the transmission weight for a given forwarding class (or queue), the queue’s priority, buffer depth, and the set of WRED profiles that are applied when congestion occurs.

You should define a scheduler for each of the forwarding classes (or queues) on the router. By applying these schedulers to all possible egress interfaces, you ensure that traffic is always treated to the expected service level, regardless of which interface traffic happens to egress.

Forgetting to create and assign a scheduler for NC traffic is a common mistake with potentially disastrous consequences. The default scheduler provides 5% of the bandwidth to the NC queue (queue 3). We recommend that you make a similar provision for NC traffic when deploying an explicit CoS configuration to ensure that NC traffic is not starved for bandwidth. We further recommend that you set the priority of the NC scheduler to high whenever a strict-high scheduler is also in effect.

In the example on the slide, the BE scheduler is correctly configured with low priority (the default) and a 1-Mbps limit that prevents the BE class from using any additional bandwidth, even when other classes are idle. The BE class scheduler references two WRED profiles (shown on the next page) and correctly uses the tcp flag to enable these profiles for TCP traffic only. The NC class is not set to a high priority because no strict-high schedulers are defined in this example. As a final note, the queue depth is set for the EF class only, using a temporal value. By default, the BE and NC classes have elastic buffer depths that make use of the remaining queuing space. In this example, the temporal depth is set to 200,000 microseconds, which supports the case study criterion specifying a per-hop maximum delay of 200 milliseconds.
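The scheduler definitions described above might be sketched as follows. The 1-Mbps BE cap, the tcp flag on the WRED profile maps, and the 200,000-microsecond EF temporal buffer are from the text; the scheduler names, the EF transmit rate, and the NC rate (mirroring the default 5% allowance) are assumptions:

```junos
class-of-service {
    schedulers {
        be-sched {
            transmit-rate 1m exact;    /* hard 1-Mbps cap, even when other classes are idle */
            priority low;              /* the default priority */
            drop-profile-map loss-priority low protocol tcp drop-profile low-red;
            drop-profile-map loss-priority high protocol tcp drop-profile high-red;
        }
        ef-sched {
            transmit-rate percent 40;  /* assumed share for voice traffic */
            buffer-size temporal 200k; /* 200,000 microseconds = 200 ms per-hop delay bound */
        }
        nc-sched {
            transmit-rate percent 5;   /* mirrors the 5% provided by the default scheduler */
        }
    }
}
```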

Page 10

WRED Drop Profiles

The case study criteria require the configuration of WRED for the BE class such that a greater percentage of discards occurs for traffic marked with high loss priority. The configuration examples on the slide meet these criteria by defining two WRED profiles with different drop probability-to-fill-level mappings. In this case, TCP traffic with low loss priority is mapped to the low-red profile (this mapping is part of the scheduler configuration shown on the previous page), with a 10% drop probability at an 80% queue fullness. The high-red profile, on the other hand, has the same 10% drop probability, but it begins dropping packets at a 50% fullness level, which leads to a greater percentage of packet drops when compared to the low-red profile.
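The two drop profiles described above can be sketched directly from the fill-level and drop-probability figures in the text:

```junos
class-of-service {
    drop-profiles {
        low-red {
            fill-level 80 drop-probability 10;   /* low loss priority: drops begin at 80% full */
        }
        high-red {
            fill-level 50 drop-probability 10;   /* high loss priority: drops begin at 50% full */
        }
    }
}
```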

Although it is not shown in this example, you could specify the interpolated option when defining the RED profiles.

Page 11

Link Schedulers to Classes

Use a scheduler map to logically group one or more schedulers together. This grouping makes it easy to later apply a set of schedulers to one or more egress interfaces. In the example on the slide, the scheduler names reflect the forwarding class to which they are ultimately applied (through the scheduler map), but this naming is only a convenience; you can map any defined scheduler to any defined forwarding class with a scheduler map.

Apply Scheduler Maps to Egress Interfaces

Once you have logically bound a set of schedulers to forwarding classes with a scheduler map, you can put the schedulers into effect for a given interface’s egress queues by referencing the scheduler map by name at the [edit class-of-service interfaces] hierarchy level. Note that the scheduler map is applied at the port level, which means the same scheduler map is in effect for all logical interfaces that might be defined on that port. In this example, the voip-case scheduler-map is used to provide CoS for all logical units (that is, VLANs) that might be defined on the fe-0/0/1 interface.

Note that you could apply the same scheduler map to a set of interfaces using a wildcard expression, such as so-*.
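The scheduler map and its port-level application might be sketched as follows. The map name (voip-case) and the fe-0/0/1 interface are from the text; the scheduler names are the assumed names used earlier in this sketch:

```junos
class-of-service {
    scheduler-maps {
        voip-case {
            forwarding-class best-effort scheduler be-sched;
            forwarding-class expedited-forwarding scheduler ef-sched;
            forwarding-class network-control scheduler nc-sched;
        }
    }
    interfaces {
        fe-0/0/1 {
            scheduler-map voip-case;   /* port level: in effect for all logical units */
        }
    }
}
```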

Page 12

Ingress Node Packet Header Rewrite

The CoS functional block diagram on the slide indicates the next CoS processing functionality to be configured. In this case, the diagram shows that rewrite marker functionality is next on the configuration check list.

Page 13

Defining a Custom DSCP Rewrite Table

A custom DSCP rewrite table is necessary because the default DSCP rewrite table assigns the same code point to all traffic belonging to the BE class, regardless of its loss-priority status. You must define a rewrite table that assigns distinct values based upon the traffic's loss priority to ensure that transit and egress nodes, which use DSCP-based BA classification, correctly recognize the loss priority of incoming BE traffic. This point is critical because the policing function used to determine loss priority in this example is performed only at ingress. As a result, loss priority will not have end-to-end significance if you fail to define a custom code point for BE traffic with a high loss priority.

In this example, the custom voip-dscp-rewrite table imports the default DSCP rewrite settings through the import default statement. These defaults are then updated with a code-point value of 000001 for traffic assigned to the BE class with a high loss priority.

With the custom rewrite table configured, apply it as a rewrite-rule at the [edit class-of-service interfaces interface-name unit unit-number] hierarchy for the desired interface.

You can also use a wildcard expression to apply a rewrite table to a group of interfaces using the keyword all and an asterisk (*) for the unit number.
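The rewrite table and its application might be sketched as follows. The table name, the import default statement, and the 000001 code point for high-loss-priority BE traffic are from the text; the core-facing interface name is an assumption:

```junos
class-of-service {
    rewrite-rules {
        dscp voip-dscp-rewrite {
            import default;                        /* start from the default DSCP rewrite table */
            forwarding-class best-effort {
                loss-priority high code-point 000001;  /* distinguish high-loss-priority BE */
            }
        }
    }
    interfaces {
        so-0/1/0 {                                 /* hypothetical core-facing interface */
            unit 0 {
                rewrite-rules {
                    dscp voip-dscp-rewrite;
                }
            }
        }
    }
}
```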

Page 14

Ingress Node CoS Configuration Summary: Part 1

The slide shows part of the complete CoS-related configuration for the ingress node.

Note that this configuration reflects a unidirectional CoS design, as described in the case study overview. In most cases, you would expect to also see a CoS configuration in place for traffic moving in the opposite direction.

Page 15

Ingress Node CoS Configuration Summary: Part 2

The slide completes the display of the ingress node’s CoS-related configuration.

Note that the ingress node’s multifield classification filter and related policer are not shown.

Page 16

Transit and Egress Node Criteria: Part 1

The slide defines the configuration criteria for the transit and egress node that support the VoIP CoS case study requirements and guidelines. Key points include the following:

• BA classification: Unlike the ingress node, which uses a multifield classifier, transit and egress nodes must use a DSCP-based BA classifier to classify traffic. By carefully matching an upstream node's rewrite table to the downstream node's classification table, you ensure consistent classification throughout the DiffServ domain.

• Scheduling and congestion control: Any node that handles traffic must use a set of schedulers to correctly service and weight the queues associated with each defined forwarding class. Because consistency is key, transit and egress nodes should use the same set of scheduler and WRED parameters as deployed in the ingress node.

Page 17

Transit and Egress Node Criteria: Part 2

The following list is a continuation from the previous page:

• Packet header rewrite: Transit nodes need the same DSCP rewrite table that is in effect at the ingress node to ensure that nodes further downstream make consistent classification decisions. Note that in some environments, the egress node might use a different rewrite table to prepare traffic for hand-off to a customer device or another DiffServ domain. In this case study, such boundary conditioning is not required. While the egress node could use a default rewrite table, in this case study, the egress node is configured with the same rewrite table that is in effect at ingress and transit nodes.

Page 18

Transit and Egress Node Classification

The CoS functional block diagram on the slide indicates the CoS processing functionality configured first. This diagram shows that configuration of the transit and egress nodes starts with BA classification. The goal is to achieve consistent classification and loss-priority recognition in transit and egress nodes.

Page 19

Defining a Custom DSCP Classification Table

Recall that a custom DSCP rewrite table was placed into effect at the ingress node to accommodate the distinction between BE traffic with low and high loss priorities. This configuration step defines a DSCP classification table that is compatible with the code points set by the ingress node. The approach here is to define a custom DSCP classifier that is prepopulated with the code points associated with the default DSCP classifier table. A custom entry for BE traffic with high loss priority is then added to the table. The voip-dscp-classifier table is placed into effect on ingress interfaces by applying it as a classifier at the [edit class-of-service interfaces interface-name unit unit-number] hierarchy.
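The custom classifier described above might be sketched as follows. The classifier name (voip-dscp-classifier) and the import-the-defaults approach are from the text; the 000001 code point matches the custom rewrite value set at the ingress node, and the interface name is from the transit node example later in the study:

```junos
class-of-service {
    classifiers {
        dscp voip-dscp-classifier {
            import default;                         /* prepopulate with the default DSCP mappings */
            forwarding-class best-effort {
                loss-priority high code-points 000001;  /* recognize high-loss-priority BE */
            }
        }
    }
    interfaces {
        fe-0/0/1 {
            unit 0 {
                classifiers {
                    dscp voip-dscp-classifier;      /* BA classification on the ingress interface */
                }
            }
        }
    }
}
```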

Page 20

Transit and Egress Node Scheduling and WRED

The CoS functional block diagram on the slide reflects the CoS processing functionality configured next. The diagram indicates that transit and egress node schedulers and WRED are next on the configuration checklist. Note that schedulers are defined for each forwarding class and that WRED can be configured to act on a packet’s loss priority.

In this case study, the ingress and transit/egress node scheduler and WRED configurations are identical. The separation of ingress node configuration from that of a transit or egress node is designed to reinforce the different roles normally associated with edge and core devices.

Page 21

Transit and Egress Node Schedulers

Transit and egress nodes use a set of scheduler definitions that match those in effect at the ingress node. This setup is logical and reflects the need for consistent end-to-end packet handling in a CoS design. After all, what advantage could possibly be gained by having some nodes in the communications path afford BE traffic 30 Mbps of high-priority bandwidth while others provide only 1 Mbps of low-priority servicing?

Page 22

Transit and Egress Node Drop Profiles

As was the case with schedulers, transit and egress nodes use the same set of WRED drop profiles for the BE forwarding class to ensure consistent and predictable end-to-end performance.

Page 23

Link Schedulers and Apply to an Egress Interface

You must use a scheduler map to logically group the BE, EF, and NC schedulers so that they can be applied on the egress interfaces of transit and egress nodes. The slide highlights how the voip-case scheduler-map is correctly listed under the transit node's so-0/1/1 egress interface. Note that the fe-0/0/1 interface, which functions as an ingress interface for the case study, is correctly associated with a DSCP BA classifier. A scheduler map is needed to handle egress traffic only.

Page 24

Transit and Egress Packet Header Rewrite

The CoS functional block diagram on the slide reflects the CoS processing functionality configured next. In this case, the diagram shows that packet header rewrite functionality is next on the configuration check list.

In this case study, the ingress, transit, and egress node DSCP rewrite tables are identical. The separation of ingress node configuration from that of a transit or egress node is designed to reinforce the different roles performed by edge and core devices; however, many aspects of their configurations are similar.

Note that a DSCP rewrite table is not strictly required on the transit and egress nodes because of their limited role in this case study topology. The lack of an explicit (or default) DSCP rewrite table simply results in the incoming DSCP being left unaltered as the packet transits the router. Equipping transit and egress routers with a consistent DSCP rewrite table causes no harm, however, and this approach is generally considered a best practice: having the appropriate rewrite tables in effect allows a node that formerly acted strictly as a transit or egress device to begin accepting ingress traffic as well.

Page 25

Transit and Egress Packet Header Rewrite

Transit nodes must rewrite the DSCP of egress traffic in the same manner as the ingress node so that routers further downstream make consistent classification decisions for ingress traffic. The slide shows a custom DSCP rewrite table named voip-dscp-rewrite that is applied to the transit node’s egress interface.

Egress Conditioning

In some applications, the egress node of a DiffServ domain is expected to condition traffic so that it makes sense to the device that receives it. This requirement might involve resetting the DSCP or precedence fields, or it could necessitate mapping DSCP/MPLS EXP values into a Layer 2 field, such as the IEEE 802.1p priority field. In the example on the slide, the egress node does not technically require an explicit rewrite table configuration (recall that only the MPLS EXP rewrite table is in effect by default) because no specific conditioning is required and no traffic destined to the servers ingresses at the Montreal node. In this example, however, we assume that the egress node is configured with a copy of the voip-dscp-rewrite table used at the ingress and transit nodes.

Page 26

Transit and Egress Node CoS Configuration Summary: Part 1

The slide shows the first part of a complete CoS-related configuration for the transit and egress nodes.

Once again, note that this configuration reflects a unidirectional CoS design, as described in the case study overview.

Page 27

Transit and Egress Node CoS Configuration Summary: Part 2

The slide completes the CoS-related configuration for the transit and egress nodes.

Page 28