MULTICAST TRAFFIC WEATHERMAP

Basileios Maglaris, NTUA NETMODE Laboratory, Heroon Polytechneiou 9, Zografou Campus, Athens, email: [email protected]

Dimitrios Kalogeras, NTUA NETMODE Laboratory, Heroon Polytechneiou 9, Zografou Campus, Athens, email: [email protected]

Athanasios Douitsis, NTUA NETMODE Laboratory, Heroon Polytechneiou 9, Zografou Campus, Athens, email: [email protected]

Abstract

Managing a multicast enabled network can be a daunting task: although there are numerous tools and methodologies for monitoring unicast traffic in a network, the same does not apply to the multicast domain. As a result, network engineers often devote much time to ascertaining the nature and location of a problem rather than to solving the problem itself. This outcome is partly due to the lack of efficient tools for monitoring multicast traffic and presenting the appropriate variables. This paper presents a novel method of representing the multicast traffic that flows through a network. The proposed method efficiently represents both the quantitative state of the multicast network (the traffic volume that travels through it) and the qualitative state (the group to which each component of the traffic belongs).

1 Introduction

The task of providing the network manager with accurate and sufficient information about the status of a multicast network involves multiple aspects, each with its own drawbacks and advantages. The proposed method of presentation must fulfil the following requirements, which are considered essential for providing accurate information about the multicast functions of the network.

1. Accuracy of information. A multicast monitoring tool must represent only factual data about the network, avoiding assumptions as much as possible.


This is especially important when dealing with information that is inherently difficult to produce and display. Parallel to accuracy is integrity: the representation of partial data can often be misleading and must therefore be avoided as well. A tool that aspires to present a correct picture of a multicast network must be not only complete but also accurate.

2. Intuitiveness of presentation. Complex information invariably poses challenges when it must be presented clearly. The success of a presentation method depends on the tool's ability to present this data in as easily understandable a form as possible. For example, it is probably bad practice to present the network manager with huge amounts of unsorted data in a table, because the probability that important information goes unnoticed increases. Rather, the practice of using charts, varying colours, graphs and so on seems more appropriate, as the person using the tool can visualize the status of the network more easily.

3. Simplified usage. The way a tool is used is the cornerstone of its functionality. As with the previous requirements, the simplicity of the user interface plays an important role in how useful the tool becomes. The engineers who manage multicast networks have already spent a great amount of time and effort becoming familiar with multicast technologies and protocols. Their tool must not present yet another difficult task to master. Rather, it must be as easy as possible to use, even for a person who has only a very limited amount of time to learn it.

4. Standards compliance. Different hardware vendors use different management schemes and technologies to provide management information. In order to work in different environments, a multicast management tool must be able to use the common denominator of these technologies. Adhering to established standards seems the safest way to accomplish that.

5. Adequate speed. Up-to-date information about the various aspects of multicast must be available when needed. The methods used must also have the appropriate granularity to retrieve only the information that is currently needed and nothing else, in order to maximize speed. For example, a function that is interested only in the traffic of a given multicast group must not retrieve information about all groups from the network equipment.

6. Sufficient backlog. When knowledge of the present status is not sufficient, a large store of previous data must also be easily accessible. An efficient and fast way of storing data is required to accomplish this task.

7. Generalized design. When engineering a tool to monitor traffic in various types of networks, one must keep in mind at all times that different needs may dictate different approaches and solutions in the design of the network. For example, there are organizations whose infrastructure is not multicast enabled in its entirety; only some machines use multicast, so only a portion of the whole may fulfil the required functions. A method relying on the assumption that all machines inside the organization are multicast enabled would therefore fail to accommodate some setups. Consequently, the design of a multicast monitoring tool must be generalized enough to make as few assumptions as possible about the characteristics of the underlying infrastructure.

In the first section of this paper, a survey of the existing tools available to monitor the multicast functions of an IP network is made, while at the same time some of these tools are evaluated in terms of integration, efficiency and intuitiveness of how the management information is presented.

The second section deals mainly with the presence of end hosts in a network and their use of the IGMP [13] protocol to subscribe to multicast groups. Knowledge of the IGMP memberships within the managed infrastructure is required to predict the portions of the network which should be receiving multicast traffic. A method of collecting the IGMP membership data based on SNMP [12] is then described. The next section is about discovering the multicast enabled network topology and measuring the traffic in its links. Depending on which multicast routing protocols are in use, two distinct methods are available. The first is a generalized one which supports all possible multicast routing protocols, but requires prior knowledge of the IP topology. The second method requires the presence of Protocol Independent Multicast [14], but is much simpler, more robust and does not require any prior knowledge. An implementation based on SNMP and standardized MIBs is then presented. Having solved the problem of overall multicast traffic measurement, the fourth section gives a method of calculating the traffic for a specific multicast group. The problems addressed here are the discovery of the distribution tree of the group of interest and the measurement of the traffic belonging to this group. The proposed implementation is again based on SNMP and standardized MIBs. Guidelines for efficient presentation of the available data are given in the fifth section, and the advantages and disadvantages of the proposed methods are discussed in the last section.

2 Existing Multicast Monitoring Tools

A fairly new approach to the problem of multicast monitoring is the Multicast Beacon [1]. The main components of this approach are the beacon server and the beacon clients. According to the multicast beacon scheme, each machine that participates in the multicast beacon system runs a beacon client. The client is a piece of software that runs on each machine participating in a given session, and its function is to monitor the performance characteristics of the multicast transmission. Each client in the system implements two distinct functions: it serves as a beacon for other clients to receive, but also functions as a receiver for the transmissions of the other clients. The function of the beacon server, on the other hand, is simply to collect the performance data from all the clients and present it in an intuitive manner. The multicast beacon software can also be used for other purposes, such as content delivery. While this approach is known to yield very accurate measurements, it is based on active probing of the multicast network, which puts an additional burden on the routers that comprise it. Additionally, although the method of active probes can be very effective for isolating faults within the multicast enabled infrastructure, it helps very little when the activity of other multicast groups must be measured. Furthermore, the deployment of multiple beacon clients to multiple sites can also be a difficult task, especially for small organizations.

A tool that draws particular interest is the multicast visualization tool, or simply MuVi [2]. Among the capabilities of MuVi are the discovery of the logical multicast topology, the monitoring of individual multicast groups' activity, and the collection of statistics for the aforementioned data. The mechanism that MuVi uses to collect data from the routers is SNMP. The collection of data regarding the discovery of the multicast topology is based on the PIM MIB [10], while the traffic statistics for the individual groups are polled from the IP multicast routing MIB [11]. The user interface can present a layout of the existing topology as well as diagrams for the traffic of particular multicast groups.

The entire tool is based on Java [18], except for the traffic diagrams, which are based on RRDtool [19].

Mmon [3] is an HP Network Node Manager plug-in specially designed for the monitoring and management of multicast enabled networks. Its capabilities are quite complete, including the generation of multicast-specific maps and the monitoring of the traffic of pre-selected multicast groups. It is noteworthy that the entire tool is built on top of HP Network Node Manager, which is required for it to function.

Mhealth [4] is a tool that provides real time monitoring and visualization, using existing tools to collect comprehensive data about streaming audio/video sessions based on the Real-time Transport Protocol (RTP). The mechanisms that provide the data to be visualized are the mtrace function and the monitoring of the RTP control protocol messages exchanged within the network. While one of the main advantages of Mhealth is that it provides nearly real time data about the monitored network, it relies on a protocol that is not necessarily used on all occasions and is often impossible to use in many common cases.

Another tool designed to monitor and report statistics about RTP [8] sessions is rtpmon [5]. As with the previous tool, the gathering of statistics is based on RTCP [8] control data. Among its capabilities is the intuitive presentation of the available information about a given session, such as RTP packet loss and jitter. One of the advantages of rtpmon is that reception of the actual RTP packets is not necessary; only the reception of the RTCP control packets is required for it to carry out its functions. Again, this tool is quite useful, but only for the purpose of debugging and monitoring RTP sessions.

Mantra [6] is somewhat different from the rest of the tools mentioned here, in the sense that it does not employ the notion of the managed network but goes beyond it. Mantra uses MBGP data to provide a comprehensive view of the entire multicast Internet. Its usefulness is therefore limited regarding the monitoring of a specific network.

Multimon [7] is a tool that can collect, organize and display statistics about the IP multicast traffic present on a specific segment. Although the set of information provided is fairly complete, its scope is unfortunately limited to the local network on which the Multimon server is installed. An interesting property is the client-server architecture, which permits the deployment of Multimon servers in many subnets and allows them to report data to Multimon clients running elsewhere.

Mtrace [9] is probably the most often used tool for multicast troubleshooting. Mtrace provides a function similar to the one that traceroute provides. The difference lies in the fact that mtrace starts from the node that is to receive multicast packets and works its way through by tracing the reverse path to the specified source of the packets. For that reason, its arguments are at least two, in contrast with traceroute, which requires only one argument to function. While the mtrace command has limited functionality and non-existent visualization, a factor often crucial when it comes to multicast management tools, its importance lies in its simplicity of usage and pervasiveness. In fact, the mtrace command is available in virtually all operating systems which support multicast, including Cisco IOS [9].

3 End Host Discovery

In the sections that follow, it will be shown that tracing the routes that various groups of multicast packets follow inside a network, and measuring their usage, is possible and can be implemented efficiently. But before classifying and measuring the multicast traffic, it should be taken into consideration that its presence indicates that entities have requested to receive packets belonging to various multicast groups. Those entities are often end hosts inside the network, each of which has the ability to subscribe to groups according to its needs. So, before attempting to categorize and measure the multicast traffic flowing through a network, it is useful to know which end hosts have requested to receive multicast traffic and, if possible, which groups each end host has chosen to receive. Consequently, the other side of the coin of multicast management is the discovery of which groups have been requested and by whom. This information is particularly useful when it must be determined whether each requested group arrives at its requested destinations. It should be noted, however, that the presence of end hosts inside a network is not always guaranteed. In fact, transit networks seldom have end hosts attached to their routers, yet forward large amounts of multicast traffic to other networks that require it. Thus, the discovery of multicast subscriptions has meaning only when end hosts are present inside the managed network.

3.1 Multicast Subscription via IGMP

End hosts attached to routers which support multicast invariably use some version of the IGMP [13] protocol to manage subscriptions to the multicast groups they require. When a host decides that it needs to receive packets belonging to a certain multicast group, it sends an IGMP JOIN message to the subnet it belongs to, so that the router attached to the subnet knows that a host requiring the group is present on the subnet. While the intricacies of IGMP will not be analyzed here, it is worth noting that the latest version of IGMP, IGMPv3 [16], incorporates support for Source Specific Multicast, or SSM [20]. In that way, a host can not only specify which group it is interested in, but also the source from which it is willing to receive.
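For illustration, a minimal sketch of the host side of this mechanism follows: a UDP socket joining a group via the standard IP_ADD_MEMBERSHIP socket option, which causes the host's IP stack to emit the corresponding IGMP membership report. The group address and port are arbitrary examples, not values from this paper.

```python
import socket
import struct

# Joining a multicast group from an end host: the IP_ADD_MEMBERSHIP
# option makes the kernel send the IGMP membership report that the
# attached router records. Group and port are arbitrary examples.
GROUP, PORT = "239.1.2.3", 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq: group address plus local interface (0.0.0.0 lets the kernel choose).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
```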

3.2 IGMP membership discovery implementation

Ascertaining the multicast group memberships which a router has recorded can be done in various ways. Among these, the direct use of special IGMP protocol messages can instruct the machine to report the required information. An easier alternative is to query the groups using SNMP and the Internet Group Management Protocol MIB [13]. The IGMP MIB is basically very simple and its data is located in two tables. The first, igmpInterfaceTable, lists all interfaces on which the IGMP protocol is enabled. Among other information, it reports the version of IGMP enabled on each interface, the number of subscribers and the number of subscribed groups. The second table, igmpCacheTable, reports which groups have been subscribed to and on which interface each of them. Using these two tables on each managed node, a comprehensive list of the nodes that should be receiving each particular multicast group can be constructed. The usefulness of this process is limited when it comes to simply monitoring the multicast traffic within the network, but it can be very helpful when troubleshooting must be carried out.

For example, when a managed node indicates that there are subscribers attached to it for a specific multicast group, but the distribution tree does not seem to reach that node, this is a sign that something has gone wrong and that packets fail for some reason to arrive at one of their destinations.
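A minimal sketch of this collection step follows, assuming a hypothetical snmp_walk helper (the real walk could be performed with pysnmp or the Net-SNMP bindings) that walks one igmpCacheTable column by symbolic name and decodes each row index into integers:

```python
from collections import defaultdict

def snmp_walk(host, column):
    """Hypothetical helper: walk one table column on `host`, yielding
    (row_index, value) pairs with the row index decoded into a tuple
    of integers. A real implementation could use pysnmp or Net-SNMP."""
    raise NotImplementedError

def igmp_memberships(routers):
    """Map each multicast group to the (router, ifIndex) pairs that
    report at least one IGMP subscriber for it. igmpCacheTable rows
    are indexed by the group address followed by the ifIndex."""
    members = defaultdict(set)
    for router in routers:
        for index, _status in snmp_walk(router, "IGMP-STD-MIB::igmpCacheStatus"):
            *group_octets, if_index = index          # 4 group octets + ifIndex
            group = ".".join(str(o) for o in group_octets)
            members[group].add((router, if_index))
    return members
```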

4 Overall Multicast Topology Discovery

4.1 IP topology discovery

Discovering the basic layer-3 topology, and which multicast routing protocols are present, is clearly the first major step in the development of an efficient multicast monitor. While the layout of the distribution trees can in general be incongruent with the unicast topology, knowledge of the latter must be taken for granted in order to proceed. Also, one of the main preliminary assumptions that must be made before anything else is laid out is the focus on network layer devices and equipment. All layer-2 devices such as switches, hubs and repeaters are considered transparent in the design of the methodology that follows. Attention is given only to nodes that have at least minimal layer-3 multicast capabilities. More specifically, the data that must be available are:

1. The number of layer-3 nodes that comprise the managed network.

2. All IP addresses of each of the aforementioned nodes, including loopback addresses if present.

3. The number of interfaces present in each node. It is also important to know which of these interfaces have a layer-3 address.

4. The status of each interface in each node.

5. For each individual interface, the nodes connected by it. For point-to-point interfaces, two and only two nodes are connected. For multi-access interfaces such as Ethernet, more than two nodes may be directly connected.

The above set of information must be considered given, possibly obtained by a unicast network management tool of some sort. It is also possible to extract it easily from the MIB-2 [21] subtree, if a node supports it. The layer-3 topology can generally be represented by a graph in which every node is a managed device and each link is a pair of interfaces connecting two machines. Note that in the case of multi-access media such as Ethernet, there must be a distinct link between each pair of devices attached to the medium. For n nodes on a medium, there will be n(n-1)/2 links. For example, for 4 nodes present in an Ethernet segment, the representation will consist of 6 links connecting each pair of nodes. While this representation seems rather awkward, its necessity stems from the fact that in order to draw a correct picture of the multicast traffic inside such a segment, accurate information about who is transmitting and who is receiving (and how much) inside the subnet is necessary. In any case, an alternative approach to this problem, one which completely waives the need for this type of assumption, will be presented in the following sections.
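A one-line illustration of this pairwise expansion (the router names are arbitrary examples):

```python
from itertools import combinations

def segment_links(nodes):
    """Expand a multi-access segment (e.g. an Ethernet subnet) into
    the n(n-1)/2 pairwise links the graph model requires."""
    return list(combinations(sorted(nodes), 2))

# Example: 4 routers on one Ethernet segment yield 6 distinct links.
assert len(segment_links(["r1", "r2", "r3", "r4"])) == 6
```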

4.2 Multicast Topology Discovery using IP topology data

The multicast enabled topology is an overlay of the layer-3 topology described above. The discovery of the multicast topology can be accomplished by simply observing which of the interfaces in each router are multicast enabled. In the general case, each interface might be using a different multicast routing protocol, and some interfaces may be using more than one protocol. A subgraph that represents the multicast topology can be constructed by discovering the subset of all interfaces that are using multicast. This multicast graph reveals the routes available to a multicast packet traversing the network. It must be noted, however, that knowing which interfaces are multicast enabled and which protocol each of them is using does not necessarily mean that there is any traffic on them. In fact, some links might be completely unused if there is no need for multicast in certain areas of the topology.

4.3 Overall Multicast Traffic Measurement

To construct the overall multicast traffic image of the network, two additional parameters are required: the amount of multicast traffic that has been received by an interface and the amount of multicast traffic that has been transmitted by an interface. With these two values available for each interface, a detailed graph depicting the image of the entire network can be laid out. Note that at this point nothing is said about the characteristics of the traffic being measured. The traffic can be addressed to any number of multicast groups, possibly coming from an unspecified number of sources. The goal at this point is to measure the aggregate traffic flowing through the network regardless of group or source.

4.4 Overall Multicast Traffic Measurement Implementation

The information needed to construct the aggregate traffic graph can be obtained from the interface table of the IP Multicast Routing MIB [11]. The interface table provides the information necessary to lay out the multicast enabled part of any network. The data gathered from this subtree reflects decisions made during the setup of the network, so it is safe to assume that the interface table seldom changes. Each row of the interface table represents a multicast enabled interface on the router. Also available are the protocols in use, the rate limit (if applicable) and the inbound and outbound multicast traffic for the interface. The objects concerned are ipMRouteInterfaceIfIndex, ipMRouteInterfaceProtocol, ipMRouteInterfaceRateLimit, ipMRouteInterfaceInMcastOctets and ipMRouteInterfaceOutMcastOctets.
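A polling sketch under the same assumptions as before: snmp_table is a hypothetical MIB-aware helper (not part of this paper) that walks a table and returns one dict per row, resolving object names via the IP multicast routing MIB.

```python
MCAST_IF_COLUMNS = [
    "ipMRouteInterfaceIfIndex",
    "ipMRouteInterfaceProtocol",
    "ipMRouteInterfaceRateLimit",
    "ipMRouteInterfaceInMcastOctets",
    "ipMRouteInterfaceOutMcastOctets",
]

def poll_mcast_interfaces(router, snmp_table):
    """`snmp_table(host, table, columns)` is a hypothetical helper
    returning one dict per table row. The octet counters are
    cumulative, so rates come from polling twice and dividing the
    counter delta by the polling interval."""
    rows = snmp_table(router, "ipMRouteInterfaceTable", MCAST_IF_COLUMNS)
    return {row["ipMRouteInterfaceIfIndex"]: row for row in rows}
```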

4.5 Direct Multicast Topology Discovery using PIM

Although the above method, which is based on the multicast routing MIB, is sufficient to discover the topology of the multicast enabled portion of any network, it relies heavily on the fact that the topology of the links is taken for granted from the start. There are cases where the migration and transfer of the information from the topology discovery system may prove troublesome. Even worse, there are a few cases where the exact link topology of the network may be difficult to ascertain, even using the full information provided by the interface table and the IP table of the MIB-2 standard. For that reason, an alternative approach to determining the multicast topology is described here. This approach makes a different set of assumptions and is consequently based on different grounds in terms of standards. As mentioned above, the method using the interfaces table of the IP multicast routing MIB, described in the previous section, makes no assumption whatsoever about the set of multicast routing protocols used throughout a given network, and thus gains the advantage of not having to rely on information from any specific protocol. While this seems at first a good idea, in the real world only a handful of networks still use, for example, Core Based Trees [17]. Today, the most dominant multicast routing protocol is by far Protocol Independent Multicast Sparse Mode (PIM-SM) [14].

4.6 PIM Characteristics

PIM is a mature routing protocol in many ways, especially in the way it handles the RPF problem of multicast routes. One peculiar detail about PIM is that it actually comprises two distinct modes, PIM dense mode and PIM sparse mode, which work quite differently from one another. PIM dense mode is much simpler and is appropriate for networks where multicast is heavily used and there is a high probability that a full distribution tree reaching most of the nodes will be needed. PIM dense mode uses the flood and prune mechanism which was used in the past by other approaches [17]. PIM sparse mode, on the other hand, is more complex but at the same time much more efficient, even for moderately populated distribution trees. The cornerstone of the sparse mode mechanism is the notion of the rendezvous point router, which is used to construct shared trees for the distribution of multicast packets. Provisions are in place so that any shared tree can mutate to a source tree if the traffic for a particular group exceeds a specified threshold. One of the fundamental characteristics of PIM, one which will be exploited heavily in the following description, is its internal mechanism of neighbour discovery. When PIM is enabled on an interface, its primary function is to run the neighbour discovery mechanism in order to probe whether there are any PIM enabled neighbours on the other side of the link to which the interface is attached. If any PIM neighbours are discovered through this process, the protocol proceeds to discover their IP addresses, their uptime, and the mode in which they are configured. This discovery is performed by a series of protocol messages specially designed for this purpose. Consequently, any router that uses PIM is in a position to know which neighbours use PIM and in which mode each operates. One interesting detail is that PIM neighbour discovery is crafted in such a way that a specific router may be identified by different addresses, depending on the interface over which the discovery is carried out. To ensure the integrity and continuity of the multicast structure that will be detected, it is therefore necessary to have full knowledge of all the IP addresses that each router possesses.

4.7 PIM topology discovery algorithm

The neighbour discovery process is naturally useful to PIM itself so that it can function correctly, but if the information becomes available elsewhere it can be exploited for other purposes too. One of these is to replace the topology discovery process described in the previous section. So, instead of requiring that the IP topology be given and using that information to construct the multicast topology, a method can be laid out that extracts this information from the PIM neighbour information inside each router. The algorithm for this process is actually quite simple: starting from a seed router that is known to have PIM enabled, its neighbour list can be extracted, constructing a graph of level 1; then its neighbours' neighbours give the level-2 graph, and so on. More formally, the approach involves the following steps (a code sketch follows the list):

1. Find one router R of the network which has PIM enabled. Initialize an empty stack S and push R onto it.

2. Pop an element R from the stack S and, if it has not been processed already, ascertain its set of PIM neighbours N(R) = (R1, R2, ..., Rn). The links (R,R1), (R,R2), ..., (R,Rn) are marked present in the graph. Then push all the elements R1, R2, ..., Rn onto the stack and mark R as processed. The marking ensures that even if the topology contains cycles, the algorithm will not loop forever.

3. Go back to step 2 until no unprocessed elements remain on the stack.
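The following is a minimal sketch of this procedure. It assumes a hypothetical pim_neighbors(router) helper that returns the PIM neighbour list of a router (for example, read from the PIM MIB), with addresses already canonicalized to one identity per router, as required by the observation at the end of section 4.6. The optional admit predicate expresses the admission criteria discussed after the sketch.

```python
def discover_pim_topology(seed, pim_neighbors, admit=lambda router: True):
    """Stack-based discovery as in the steps above. `pim_neighbors`
    and `admit` are hypothetical, caller-supplied helpers."""
    processed, links = set(), set()
    stack = [seed]
    while stack:
        r = stack.pop()
        if r in processed:
            continue                       # marking prevents looping on cycles
        processed.add(r)
        for n in pim_neighbors(r):
            if not admit(n):
                continue                   # e.g. outside the managed prefix
            links.add(frozenset((r, n)))   # mark the link (R, Ri) present
            stack.append(n)
    return processed, links
```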

The criterion by which routers are admitted into the topology in the algorithm above can be augmented by requiring a certain address prefix to which all routers must belong in order to be admitted to the graph, or by counting the number of hops from the initial element and limiting that number to a certain threshold. In the latter case, special care must be taken so that the starting point corresponds to the centre of the conceptual graph to which the nodes of the network belong.

4.8 Drawbacks of the PIM based method

Discovering the multicast topology using PIM neighbour discovery is probably preferable to using the interfaces table of the IP multicast routing MIB, mainly because it does not require any prior knowledge of the underlying layout. However, there are a few cases in which the assumptions behind the PIM method do not hold, so a fall back to the previous method must be made. Some of these situations are:

1. When PIM is not enabled throughout the entire infrastructure. If some routers carry static configurations for their multicast routing, the PIM methodology may not apply.

2. When PIM is not the only internal multicast routing protocol, so the notion of PIM neighbours is not present in all nodes.

In rare situations, it may be difficult to construct the graph of the multicast network automatically, because neither a network prefix/mask nor a maximum depth threshold can easily distinguish between the routers that must be discovered and those that must not. In this case the problem can be solved with more elaborate schemes, such as combining several prefixes with maximum depth checks.

4.9 PIM topology discovery implementation

The list of PIM neighbours that a specific machine knows about can be obtained by various methods. Most routers nowadays include some form of command line interface through which a command to list the known PIM neighbours can be issued. A program implementing the previously mentioned algorithm could carry out this command via rsh or telnet. A more elegant approach involves the use of SNMP and the PIM MIB. The PIM MIB is currently in the process of standardization, but it is highly unlikely that any changes will be made to the PIM neighbor table, which lists the currently known PIM neighbours of the managed device. The aforementioned table includes entries which reveal the address, network mask, mode (sparse or dense) and status of each neighbour. One issue remains: how to measure the overall multicast traffic in a given link of the graph when the PIM method is preferred. In that case, if the IP multicast routing MIB is present, the inbound and outbound traffic counters can be accessed in exactly the same way as described previously. If, on the other hand, the multicast routing MIB is for any reason not present, a more elaborate approach is in order. In that case, the non-unicast traffic counters of an interface can be used. However, the traffic reported for an interface then includes not only the multicast packets sent through it, but also the broadcast ones. In most cases separating the multicast from the broadcast amount is virtually impossible, but if the specific platform includes a layer-2 broadcast traffic counter, the amount of multicast traffic can be obtained by simply subtracting the broadcast amount from the overall non-unicast traffic.
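Where only the generic interface counters are available, this fallback reduces to simple counter arithmetic; a sketch follows (the parameter names merely describe the counters in the text, they are not objects of a specific MIB):

```python
def multicast_octets(non_unicast_octets, broadcast_octets):
    """Fallback estimate when the IP multicast routing MIB is absent:
    multicast traffic is the non-unicast counter minus the layer-2
    broadcast counter, on platforms that expose the latter."""
    return max(non_unicast_octets - broadcast_octets, 0)
```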

5 Group specific Multicast Monitoring

To this point, the method described is sufficient to monitor the overall multicast traffic within a network. Using it, a manager is able to know which parts of the network are receiving multicast packets and to make traffic volume measurements about them. Although the usefulness of this piece of information is not to be underestimated in any case, a more fine grained view of the network may be required to make other, more useful observations. These observations could help not only to monitor the multicast traffic in great detail, but also to debug and trace misconfigurations or problems that have arisen.

5.1 Description of Group Distribution Tree

The overall multicast traffic can be divided into discrete multicast groups, defined by the multicast destination address in each IP packet. Each group has its own members, some of which produce traffic and some of which listen for traffic belonging to the group. In the context of a given group, the hosts that generate packets belonging to the group need not receive packets from other sources within the group. On the other hand, the hosts that have joined a certain group must by definition be able to receive all packets destined for that group, regardless of their origin. This means that in a given network, traffic belonging to a group must flow from all sources to all destinations that have joined the group. The information required to effectively monitor each multicast group in a network is thus:

1. Given a certain group, the links that carry multicast traffic belonging to that group. With knowledge of those links, a graph can be constructed that represents the flow of the packets of that group in the network. The graph would naturally be directed, to be able to represent the two distinct directions between each pair of neighbouring nodes.

2. The traffic volume involved in each directed link for the group. The traffic would have to be monitored constantly to create a sufficient history log from which more interesting analyses can emerge.

3. For each node, the links that have hosts subscribed to the particular group. This typically corresponds to the interfaces of a router that have received IGMP JOIN messages from hosts attached to that link. This problem has already been addressed earlier, so no further discussion is necessary at this point.

5.2 Construction of the Group Specific Distribution Tree

With this data, the distribution tree of the group of interest and the amount of traffic flowing through it can be laid out. The construction of this graph is more elaborate than in the previous case, where there was no interest in particular groups, and involves two individual steps which are required to complete the information. In the following paragraphs, all descriptions implicitly assume that measurements are specific to a pre-selected group and not to overall multicast parameters. Since multicast is built on top of IP, multicast datagrams are normal IP datagrams and thus include source and destination IP addresses. The other parameters of the IP header are of no particular interest here, except perhaps the TTL field, which determines the number of hops a packet can travel before it is discarded. A multicast router classifies incoming packets based on their group and source, so given a specific group, a detailed measurement of the traffic from each source can be made. This approach is extremely accurate because it classifies and measures the traffic of incoming packets not only according to the group they belong to, but also according to the source from which they came. Moreover, for each pair of group and source there exists a list of interfaces to which the packets should be forwarded. Also notable is that for each interface there may be more than one next hop multicast address; for example, NBMA interfaces may have many next hop multicast addresses associated with them. One more special case is routes that have been marked as pruned, which in effect means that packets are not forwarded for these entries. The importance here thus lies with the next hop entries that are not pruned and the amount of traffic associated with each of them. As stated before, the objective is to ascertain the distribution tree of any given group, in the form of a directed graph, and then calculate the traffic that flows through each of its links. The method outlined in the following paragraphs relies on the assumption that a graph of the overall multicast enabled portion of the network is available. This graph is produced using one of the two methods outlined in the previous section, either the generalized one or the PIM neighbour method.

5.3 Generalized Group Specific Method

The process involves two distinct steps and requires the multicast enabled graph and the multicast group of interest as input. The output is a subset of the initial link graph, with each link accompanied by information about the traffic volume flowing through it. The topology of the subset graph can be constructed by examining the multicast forwarding table in each node of the graph. Of interest here are the entries that are not pruned and whose next hop interface is connected to another node in the initial graph. If both conditions hold, the corresponding link can be added to the group specific subgraph. The traffic flowing through the link is known, as explained in the previous paragraph. It is also possible that there are multiple entries for the same outgoing interface for different packet sources; these entries must be summed in order to calculate the aggregate traffic flow of the group through the interface. One case requiring special consideration is when more than two routers are present on a subnet, for example on Ethernet segments. In these cases the interface to which the traffic is going is known, but the next hop could be any, some or all of the other routers in the segment. In that situation it is impossible to derive the information from the multicast forwarding table alone. However, it is possible to examine the incoming groups on each of the other routers in the segment and see whether their corresponding entries for the interface attached to the link include the multicast group of interest. For the interfaces that do include it, the traffic can be calculated either from the aggregate traffic measurement on the outgoing router, or from the corresponding entries on the incoming router.
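A sketch of this step follows, with the router data access abstracted behind two hypothetical helpers: next_hop_entries, mirroring the next hop forwarding table, and entry_octets, mirroring the per (group, source) octet counters.

```python
from collections import defaultdict

def group_subgraph(nodes, group, next_hop_entries, entry_octets):
    """Hypothetical helpers: next_hop_entries(node, group) yields
    (source, if_index, next_hop, state) tuples mirroring the next hop
    forwarding table; entry_octets(node, group, source) returns the
    octet counter of the matching incoming-flow entry."""
    traffic = defaultdict(int)          # (node, if_index) -> aggregate octets
    for node in nodes:
        for source, if_index, next_hop, state in next_hop_entries(node, group):
            if state != "forwarding":
                continue                # pruned entries carry no packets
            # Entries for different sources on the same outgoing
            # interface are summed into the aggregate group flow.
            traffic[(node, if_index)] += entry_octets(node, group, source)
    return traffic
```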

5.4 Group Specific Method Using PIM

The fact that the incoming groups and their traffic are available on each router opens up some interesting capabilities that in some cases can simplify the procedure suggested above, so that it does not require the examination of the next hop entries in the nodes. These cases include most commonplace networks today, in which PIM is used. In the case of PIM, for each pair of incoming group and packet source the address of the upstream neighbour is available. The upstream neighbour is usually one of the PIM neighbours of the node. Consequently, when PIM is used, the upstream neighbour is also known, so this information can be used to measure the traffic through the link between them in the graph. Although it cannot be used when CBT or other similar protocols are enabled, this method is much more straightforward and does not require the use of the next hop forwarding table for multicast. The revised algorithm in this case is simply to group the (group, source) pairs by upstream neighbour, calculate the overall traffic by adding the individual flows of each pair, and, if the upstream neighbour is part of the initially given multicast link graph, add the corresponding link to the group subgraph.
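The corresponding sketch for the PIM variant, again with a hypothetical mroute_entries helper yielding one (source, upstream neighbour, octets) tuple per incoming flow entry:

```python
from collections import defaultdict

def pim_group_links(nodes, group, mroute_entries, topology_links):
    """mroute_entries(node, group) is a hypothetical helper mirroring
    the incoming-flow table of each node for the given group."""
    links = defaultdict(int)            # (upstream, node) -> aggregate octets
    for node in nodes:
        for _source, upstream, octets in mroute_entries(node, group):
            # Keep only links that belong to the given multicast graph.
            if frozenset((upstream, node)) in topology_links:
                links[(upstream, node)] += octets
    return links
```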

5.5 Group Specific Measurement implementation

The multicast routing information parameters enumerated in the previous description can be obtained from the routing equipment using various methods, the most suitable of these being the use of the SNMP protocol and the standardized IP multicast routing MIB.

Other portions of this MIB were used previously to facilitate the discovery of the multicast topology and to measure the overall multicast traffic regardless of group. In the present case, the tables that can be used are the ipMRouteTable and the ipMRouteNextHopTable. The ipMRouteTable contains information about the incoming multicast packet flows, while the ipMRouteNextHopTable lists information about the next hop forwarding entries for the various classes of multicast IP packets. The ipMRouteTable contains entries, each representing a particular (group, source) pair. Along with the group and source addresses, which are the indexes of this table, the following are available: the interface index (ipMRouteInIfIndex) on which packets of this kind are expected, the IP address of the upstream neighbour if applicable (ipMRouteUpstreamNeighbor), the age of the entry (ipMRouteUpTime), the protocol via which the forwarding entry was learned (ipMRouteProtocol), and the traffic counter for the entry (ipMRouteOctets). With at least these objects available, one has sufficient information about the incoming multicast packet flows arriving at the router. The ipMRouteNextHopTable contains entries which represent the multicast forwarding table of the machine. Each entry is indexed by multicast group, source, interface index and next hop neighbour address. As mentioned earlier, both the interface index and the next hop IP address are required because there may be multiple next hop neighbour addresses for the same interface. Other objects of interest are ipMRouteNextHopState, which can be either forwarding or pruned, ipMRouteNextHopProtocol, which reveals the protocol via which the entry was generated, and ipMRouteNextHopPkts, which is the packet counter of the entry. One particular detail is that while the ipMRouteTable is fitted with both an octet counter and a high capacity octet counter (ipMRouteOctets and ipMRouteHCOctets), the ipMRouteNextHopTable is fitted with only a packet counter, which means that byte counters may or may not make it into the standard. In any case, the methods described previously can use the byte counters present in the ipMRouteTable to obtain the required data. In the case where a link is a point-to-point one (in the sense that only two machines are connected by it), there is no problem, as the traffic volume can be accurately measured by the machine on the downstream side of the link. If, on the other hand, the link is multi-access (for example Ethernet), it appears beneficial to use the ipMRouteTable counters on the receiving end instead of trying to find where the traffic is going by examining the ipMRouteNextHopTable entries on the sending end.
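Since all of these are cumulative counters, rates are obtained by polling twice and dividing the delta by the interval. A small sketch that also allows for a single wrap of a 32-bit counter such as ipMRouteOctets between polls:

```python
def counter_rate(prev, curr, interval_seconds, wrap=2**32):
    """Octets per second from two polls of a cumulative counter,
    tolerating at most one wrap of a 32-bit counter between polls."""
    delta = curr - prev if curr >= prev else curr + wrap - prev
    return delta / interval_seconds
```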

6 Presentation Details

In the previous sections it has been shown that it is possible to construct a comprehensive image of the multicast traffic flowing through a network using data which is generally available in all modern routing equipment. In some cases a prior snapshot of the layer-3 topology of the network is required so that the generalized method can be applied. But it has also been demonstrated that in the vast majority of modern deployments, where Protocol Independent Multicast is used, this requirement can be lifted, in which case a rather straightforward approach can be followed.

The cornerstone of this whole process is to describe with accuracy the method of presenting the information described above. Before doing so, a summary of all the categories of available data is in order:

1. The layer-3 topology of the network in question, which must be available before everything else. As stated before, it will be available in the form of a graph showing all IP enabled connections between the existing nodes.

2. The multicast enabled topology of the network. Since multicast is a subset of IP, the graph that represents the multicast enabled links will be a subgraph of the IP topology graph. Ways of determining this subgraph have been described earlier. Normally this subgraph will be connected at all times.

3. The traffic flows present in the links of the multicast enabled graph. The fact that a link is multicast enabled does not guarantee that multicast traffic is present on it, so it is quite natural that some connections carry no traffic. Note that the links with non-zero multicast traffic form a subgraph of the multicast graph. This subgraph should probably be connected as well, but that need not be true at all times.

4. For any group that may be selected, the subgraph of the multicast graph whose links carry non-zero traffic for that group. The methods for learning which links these are have been described in the previous chapter and are based on the IP multicast routing MIB. This graph will usually have the form of the source tree or the shared tree formed by PIM-SM, depending on the configuration.

The overall multicast traffic can be represented by drawing the graphs of the first three items as an overlay: the IP topology may be drawn first, and above it the multicast enabled portion may be drawn in bold form. Alternatively, the multicast enabled topology may be drawn directly, in which case the non-multicast portion of the network will not be visible. Considering, however, that the non-enabled portion is unimportant in the representation being made here, this solution may be beneficial in terms of visual clarity. Moreover, most multicast enabled networks today are globally enabled, so the question will not arise that often. Overlaid on the multicast graph, the links with non-zero traffic should be coloured with varying colours representing the amount of traffic flowing through them, while idle links may be left uncoloured. The same approach can be followed for the representation of the group specific graph: the active links are coloured while the others remain as they are.
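By way of illustration, a trivial colour mapping of the kind described might look as follows; the rate thresholds are arbitrary examples, not part of the proposed method:

```python
def link_colour(octets_per_second):
    """Illustrative mapping from traffic rate to link colour for the
    weathermap overlay; thresholds are arbitrary examples."""
    if octets_per_second == 0:
        return None                     # idle links stay uncoloured
    for limit, colour in ((1e5, "green"), (1e6, "yellow"), (1e7, "orange")):
        if octets_per_second < limit:
            return colour
    return "red"
```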

7 Conclusion

The advantages of using a simplified approach to the discovery and representation of the multicast traffic present inside a network are quite obvious. First of all, the intuitiveness of this method of presentation stems from the fact that most network administrators tend to regard their network as a two-dimensional picture of a graph, with machines represented by vertices and connections represented by edges. The proposed method complies with that notion and would therefore convey its information to the operator with ease. Additionally, the use of colours to represent various levels of traffic makes it possible to convey another layer of information without bogging down the subject with large arrays of numerical data.

Using this method is also extremely easy. The only input required from the operator arises when the latter wishes to monitor a specific group, in which case it must be supplied as an argument. No other interactions are defined for operating this method of presentation. On the other hand, setting up a system based on the data gathering processes described in the previous sections requires fairly basic knowledge of SNMP. The usage of SNMP guarantees the standards compliance of the whole process: the protocol itself is the cornerstone of all network management, while the MIBs that were used are all standardized or nearing standardization. Also, the accuracy of the described methods is fairly satisfactory. Because these methods are based on numbers retrieved directly from the managed machines, no assumptions are made that would jeopardize the accuracy of the data. The usage of SNMP has, finally, another important benefit: any implementation would have to use only the SNMP protocol, and because of the minimalistic nature of the latter, using it even in fairly large organizations should not pose a significant performance barrier.

References

[1] Multicast Beacon. http://dast.nlanr.net/Projects/Beacon/
[2] MuVi: Multicast Visualization Tool. http://muvi.man.poznan.pl/
[3] Mmon: HP NNM IP multicast management. http://www.hpl.hp.com/research/mmon/
[4] MHealth: A Real-Time Multicast Tree Visualization and Monitoring Tool. http://imj.ucsb.edu/mhealth/
[5] rtpmon: A Third-Party RTCP Monitor. http://bmrc.berkeley.edu/~drbacher/projects/mm96-demo/
[6] Mantra: Monitoring Multicast on a Global Scale. http://www.caida.org/tools/measurement/mantra/
[7] MultiMON: an IP multicast Monitor. http://www.merci.crc.ca/mbone/MultiMON/
[8] RFC 3550 - RTP: A Transport Protocol for Real-Time Applications.
[9] Mtrace command. http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122sup/122csum/csum1/122csip3/1sftools.htm
[10] RFC 2934 - Protocol Independent Multicast MIB for IPv4.
[11] RFC 2932 - IPv4 Multicast Routing MIB.
[12] RFC 1157 - Simple Network Management Protocol.
[13] RFC 2933 - Internet Group Management Protocol MIB.
[14] RFC 2362 - Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol Specification.
[15] RFC 2236 - Internet Group Management Protocol, Version 2.
[16] RFC 3376 - Internet Group Management Protocol, Version 3.
[17] RFC 2201 - Core Based Trees (CBT) Multicast Routing Architecture.
[18] The Java programming language. http://java.sun.com
[19] RRDtool. http://people.ee.ethz.ch/~oetiker/webtools/rrdtool/
[20] Source Specific Multicast. http://www.ietf.org/html.charters/ssm-charter.html
[21] RFC 1213 - Management Information Base for Network Management of TCP/IP-based internets: MIB-II.