ars-msr 1 mcast 2014
TRANSCRIPT
8/10/2019 - Ars-msr 1 Mcast 2014 (44 slides)
Multicast Communications
Network and Service Architectures (ARS)
Network and Service Management (MSR)
Octavian Catrina
Multicast communications
Data delivered to a group of receivers. Typical examples:
- One-to-many (1:N), one-way: MS -> MR
- One-to-many (1:N), two-way: MS <-> MR/US
- Many-to-many (N:M): MS/R <-> MS/R
Chapter outline
What applications use multicast? What are the requirements
and design challenges of multicast communications?
What multicast support does IP provide (network layer)?
After an overview of multicast applications, we'll focus on IP
multicast: service model, addressing, group management, and
routing protocols.
(M = multicast, U = unicast; S = sender, R = receiver)
Multicast applications (examples)
One-to-many
- Real-time audio-video distribution: lectures, presentations, meetings, movies, etc. Internet TV. Time sensitive. High bandwidth.
- Push media: news headlines, stock quotes, weather updates, sports scores. Low bandwidth. Limited delay.
- File distribution: Web content replication (mirror sites, content networks), software distribution. Bulk transfer. Reliable delivery.
- Announcements: alarms, service advertisements, network time, etc. Low bandwidth. Short delay.
Many-to-many
- Multimedia conferencing: multiple synchronized video/audio streams, whiteboard, etc. Time sensitive. High bandwidth, but typically only one sender active at a time.
- Distance learning: presentation from lecturer to students, questions from students to everybody.
- Synchronized replicated resources, e.g., distributed databases. Time sensitive.
- Multi-player games: possibly high bandwidth, all senders active in parallel. Time sensitive.
- Collaboration, distributed simulations, etc.
[Figure: multi-unicast delivery - the sender transmits a separate copy of each packet to every receiver through routers 1-7]
Multicast requirements (1)
Efficient and scalable delivery
- Multi-unicast repeats each data item: it wastes sender and network resources and cannot scale up for many receivers and/or large amounts of data.
Timely and synchronized delivery
- Multi-unicast uses sequential transmission, resulting in long, variable delays for large groups and/or large amounts of data.
- In particular, a critical issue for real-time communications (e.g., videoconferencing).
We need a different delivery paradigm.
Multi-unicast vs. Multicast tree
Multi-unicast delivery
- 1:N transmission handled as N unicast transmissions.
- Inefficient and slow for N >> 1: multiple packet copies per link (up to N).
Multicast tree delivery
- Transmission follows the edges of a tree, rooted at the sender and reaching all the receivers.
- A single packet copy per link.
[Figure: the same 5-router topology under multi-unicast delivery (several copies per link) and multicast tree delivery (one copy per link)]
Multicast requirements (2)
Group management
- Group membership changes dynamically. We need join and leave mechanisms (latency may be critical).
- For many applications, a sender must be able to send without knowing the group members or having to join (e.g., for scalability).
- A receiver might need to select the senders it receives from.
Multicast group identification
- Applications need special identifiers for multicast groups. (Could they use lists of host IP addresses or DNS names?)
- Groups have limited lifetime. We need mechanisms for dynamic allocation of unique multicast group identifiers (addresses).
Multicast requirements (3)
Session management
- Receivers must learn when a multicast session starts and which group id it uses (so that they can "tune in").
- We need session description & announcement mechanisms.
Reliable delivery
- Applications need a certain level of reliable data delivery. Some tolerate limited data loss; others do not tolerate any loss (e.g., all data to all group members - a hard problem).
- We need mechanisms that can provide the desired reliability.
Heterogeneous receivers
- Receivers within a group may have very different capabilities and network connectivity: processing and memory resources, network bandwidth and delay, etc.
- We need special delivery mechanisms.
Requirements: Some conclusions
Multi-unicast delivery is not suitable
- Multi-unicast does not scale up for large groups and/or large amounts of data: it becomes either very inefficient or fails to fulfill the application requirements.
- We need new mechanisms and protocols, specially designed for multicast.
Specific functional requirements
- Specific multicast functions, not needed for unicast: group management, handling heterogeneous receivers.
- General functions, also needed for unicast, but much more complex for multicast: addressing, routing, reliable delivery, flow & congestion control.
Which layers should handle multicast?
Data link layer
- Efficient delivery within a multi-access network.
- Multicast extensions for LAN and WAN protocols.
Network layer
- Multicast routing for efficient & timely delivery.
- IP multicast extensions. Multicast routing protocols.
Transport layer
- End-to-end error control, flow control, and congestion control over unreliable IP multicast.
- Multicast transport protocols.
Application layer multicast
- Overlay network created at the application layer using existing unicast transport protocols. Easier deployment, less efficient.
- Still an open research topic.
IP multicast model (1)
"Transmission of an IP datagram to a group of hosts"
- Extension of the IP unicast datagram service.
- IP multicast model specification: RFC 1112, 1989.
Multicast address
- Unique (destination) address for a group of hosts, with different datagram delivery semantics. A distinct range of addresses is reserved in the IP address space.
Who receives? Explicit receiver join
- IP delivers datagrams with a destination address G only to applications that have explicitly notified IP that they are members of group G (i.e., requested to join group G).
Who sends? Any host can send to any group
- Multicast senders need not be members of the groups they send to.
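On end hosts, these join/leave operations sit next to the ordinary socket API. A minimal receiver sketch in Python (the group address 224.1.2.3, the port, and the helper names `make_mreq`/`join_group` are illustrative, not part of the model specification):

```python
import socket

GROUP, PORT = "224.1.2.3", 5007   # illustrative group address and port

def make_mreq(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack a struct ip_mreq: multicast group address + local interface."""
    return socket.inet_aton(group) + socket.inet_aton(iface)

def join_group(group: str, port: int) -> socket.socket:
    """Open a UDP socket and explicitly notify IP of group membership."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))                      # receive on the given port
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_mreq(group))          # the explicit join
    return sock
```

Any host can then send to the group with a plain `sock.sendto(data, (GROUP, PORT))`, without joining, matching the "any host can send" rule above.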
IP multicast model (2)
No restrictions on group size and member location
- Groups can be of any size.
- Group members can be located anywhere in an internetwork.
Dynamic group membership
- Receivers can join and leave a group at will.
- The IP network must adapt the multicast tree accordingly.
Anonymous groups
- Senders need not know the identity of the receivers.
- Receivers need not know each other.
- Analogy: a multicast address is like a radio frequency, on which anyone can transmit and anyone can tune in.
Best-effort datagram delivery
- No guarantees that (1) all datagrams are delivered, (2) to all group members, (3) in the order they were transmitted.
IP multicast model: brief analysis
Applications' viewpoint
- Simple, convenient service interface: same send/receive ops as for unicast, plus join/leave ops.
- Anybody can send/listen to a group. Security, billing?
- Extension to a reliable multicast service? Difficult problem.
IP network viewpoint
- Scales up well with the group size: single destination address, no need to monitor membership.
- Does not scale up with the number of groups. Conflicts with the original IP model (per-session state in routers).
- Routers must discover the existence/location of receivers and senders. They must maintain dynamic multicast tree state per group, and even per source and group.
- Dynamic multicast address allocation: how to avoid allocation conflicts (globally)? Very difficult problem.
IPv4 multicast addresses
- Class D: address range 224.0.0.0 to 239.255.255.255.
- Format: 4-bit prefix 1110 (bits 31-28) + 28-bit multicast group id (2^28 addresses).
IP multicast in LANs
- Relies on the MAC layer's native multicast: IP multicast addresses are mapped to MAC multicast addresses.
- Ethernet mapping: fixed prefix 01-00-5e (group bit set in the first octet), a 0 bit, then the low-order 23 bits of the IPv4 multicast address.
- Remark: 32 IP multicast addresses map to the same MAC multicast address (28 - 23 = 5 bits are dropped).
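The mapping is easy to compute directly. A small Python sketch (the function name is ours):

```python
def ipv4_mcast_to_mac(addr: str) -> str:
    """Map an IPv4 multicast (class D) address to its Ethernet multicast
    MAC address: fixed prefix 01-00-5e + low-order 23 bits of the IP."""
    octets = [int(o) for o in addr.split(".")]
    assert 224 <= octets[0] <= 239, "not a class D address"
    ip = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    mac = (0x01005E << 24) | (ip & 0x7FFFFF)   # keep only the low 23 bits
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -1, -8))
```

Since the top 5 bits of the 28-bit group id are dropped, e.g. 224.1.2.3 and 239.129.2.3 both map to 01:00:5e:01:02:03 (the 32-to-1 ambiguity noted above).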
Multicast scope
Limited network region where multicast packets are forwarded.
- Motivation: address allocation, efficiency, application-specific reasons.
- TTL-based scopes or administrative scopes (RFC 2365).
IPv4 administrative scopes
- Link-local scope (224.0.0.0/24).
- Local scope (239.255.0.0/16).
- Organization-local scope (239.192.0.0/14).
- Global scope: no boundary; all remaining multicast addresses.
Administrative scopes
- Delimited by configuring boundary routers: do not forward some ranges of multicast addresses on some interfaces.
- Connected, convex regions; nested and/or overlapping.
[Figure: nested administrative scopes - two local scopes (A and B) inside an organization-local scope, inside the global Internet]
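As a sketch, the scope ranges listed above can be checked with the standard `ipaddress` module (the function name and the "global" fallback label are ours):

```python
import ipaddress

# Scope ranges from the slide: link-local, then the RFC 2365
# administrative scopes; anything else is treated as global scope.
SCOPES = [
    ("link-local", ipaddress.ip_network("224.0.0.0/24")),
    ("local", ipaddress.ip_network("239.255.0.0/16")),
    ("organization-local", ipaddress.ip_network("239.192.0.0/14")),
]

def multicast_scope(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not a multicast address")
    for name, net in SCOPES:
        if ip in net:
            return name
    return "global"
```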
Group management: local
Multicast service requirements
- Multicast routers have to discover the locations of the members of any multicast group & maintain a multicast tree reaching them all.
- Dynamic group membership.
Local (link) level
- Multicast applications must notify IP when they join or leave a multicast group (an API is available).
- The Internet Group Management Protocol (IGMP) allows multicast routers to learn which groups have members at each interface.
- Dialog between hosts and a (link-)local multicast router.
[Figure: hosts report membership via IGMP to their local routers, which maintain the multicast tree from the sender to the receivers]
Group management: internetwork
Global (internetwork) level
- Multicast routing protocols propagate information about group membership and allow routers to build the tree.
Implicit vs. explicit join
- Implicit: multicast tree obtained by pruning a default broadcast tree. Nodes must ask to be removed.
- Explicit: nodes must ask to join.
Data-driven vs. control-driven multicast tree setup
- Data-driven: tree built/maintained when/while data is sent.
- Control-driven: tree set up & maintained by control messages (join/leave), independently of the senders' activity.
[Figure: a multicast routing protocol runs between the routers, while IGMP runs between hosts and their local routers]
[Figure: IGMP exchange on a subnet - the router tracks, per interface i1, which groups (e.g., 224.1.2.3) have local members]
Group Management: IGMP (1)
Internet Group Management Protocol
- Enables a multicast router to learn, for each of its directly attached networks, which multicast addresses are of interest to the systems attached to these networks.
- IGMPv1: join + refresh + implicit leave (timeout). IGMPv2: adds explicit leave (fast). IGMPv3 (2002): adds source selection.
- IGMPv3 is presented in the following; IGMPv1/v2 in the annex.
Periodic General Queries: refresh/update the group list
(1) IP packet to 224.0.0.1 (all systems on this subnet): IGMPv3 General Query: anybody interested in any group?
(2) IP packet to 224.0.0.22 (all IGMPv3 routers): IGMPv3 Current State Report: member of group 224.1.2.3.
- Reports are randomly delayed to avoid bursts. (Duplicate reports are completely suppressed in IGMPv1 & v2.)
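The randomized report delay can be illustrated with a toy simulation (this is not a protocol implementation; the names and the single-group simplification are ours):

```python
import random

def reports_sent(members, max_resp=10.0, suppression=True, rng=None):
    """Which members answer one General Query: each picks a uniform random
    delay in [0, max_resp); with suppression (IGMPv1/v2) only the first
    timer to expire actually sends, and the rest cancel their reports."""
    rng = rng or random.Random(0)
    timers = sorted((rng.uniform(0, max_resp), host) for host in members)
    if not suppression:                      # IGMPv3: no suppression
        return [host for _, host in timers]
    return [timers[0][1]] if timers else []  # v1/v2: first report wins
```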
IGMP (2)
Host joins a group
- The host sends an unsolicited report: IP packet to 224.0.0.22 (all IGMPv3 routers), IGMPv3 State Change Report: joined group 224.1.2.3.
- The router adds 224.1.2.3 to the local groups at interface i1.
Host leaves a group
- The router must check if there are other members of that group:
(1) IP packet to 224.0.0.22: IGMPv3 State Change Report: not member of 224.1.2.3.
(2) IP packet to 224.0.0.1 (all systems on this subnet): IGMPv3 Group-Specific Query: anybody interested in 224.1.2.3?
(3) IP packet to 224.0.0.22: IGMPv3 Current State Report: member of 224.1.2.3 (sent by a remaining member; the group is maintained at interface i1).
[Figure: example multicast trees over a 7-router internetwork with one sender and several receivers]
Multicast trees
What kind of multicast tree?
- Minimize tree diameter (path length, delivery delay) or tree cost (network resources)?
- Minimum cost tree: a special, difficult case of the minimum cost spanning tree problem (Steiner tree). No good distributed algorithm!
Practical solution
- Take advantage of existing unicast routing: shortest-path tree based on routing info from the unicast routing protocol.
- Multicast extension of a unicast routing protocol, or a separate multicast routing protocol.
Source-based vs. shared trees
Source-based trees
- One tree per sender, rooted at the sender. Typically a shortest-path tree.
Shared trees
- One tree for all senders.
- Examples: minimum diameter tree, minimum cost tree, etc.
Source-based trees (1)
Source-based tree
- Tree rooted at a sender, spanning all receivers. Typically a shortest-path tree.
In general: M >= 1 senders per group
- M sources transmit to a group. Session participants may be senders, receivers, or both.
- A separate source-based tree has to be set up for each sender.
Interface notation: N = North (up); S = South (down); W = West (left); E = East (right); NW = North-West (up-left). Etc.
[Figure: one source-based tree per sender over the example topology (sources in 172.16.5.0/24 and 172.20.2.0/24)]
Router 2: Multicast forwarding table
  Source prefix  | Multicast group | In IF | Out IF
  172.16.5.0/24  | 224.1.1.1       | N     | S, E, SE
  172.20.2.0/24  | 224.1.1.1       | E     | N
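A hypothetical sketch of the lookup a router like router 2 performs with the table above: an entry matches when the packet's source falls in the source prefix and the group matches, and the packet must arrive on the expected reverse-path interface (names and table values taken from the slide):

```python
import ipaddress

# (source prefix, group, expected in-interface, out-interfaces)
TABLE = [
    ("172.16.5.0/24", "224.1.1.1", "N", ["S", "E", "SE"]),
    ("172.20.2.0/24", "224.1.1.1", "E", ["N"]),
]

def forward(src: str, group: str, in_if: str):
    """Return the out-interfaces for a multicast packet, or [] (drop)."""
    for prefix, g, rpf_if, out_ifs in TABLE:
        if g == group and ipaddress.ip_address(src) in ipaddress.ip_network(prefix):
            return out_ifs if in_if == rpf_if else []  # wrong interface: drop
    return []
```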
Source-based trees (2)
Pros
- Per-source tree optimization: shortest network path & transfer delay.
- Tree created/maintained only when/while a source is active.
Cons
- Does not scale for multicast sessions with M >> 1 sources: the network must create and maintain M separate trees, i.e., per-source & group state in routers, higher control traffic and processing overhead.
Examples
- PIM-DM, DVMRP, MOSPF: data-driven tree setup.
- Mixed solution: PIM-SM: explicit join, control-driven tree setup.
Shared trees (1)
Core-based shared tree
- The multicast session uses a single distribution tree, with the root at a "core" node, spanning all the receivers ("core-based" tree).
- Each sender transmits its packets to the core node, which delivers them to the group of receivers.
- Typically a shortest-path tree rooted at the central core node.
Interface notation: N = North (up); S = South (down); W = West (left); E = East (right); NW = North-West (up-left). Etc.
[Figure: a single core-based tree over the example topology; both senders forward to the core, which reaches all receivers]
Router 5: Multicast forwarding table
  Multicast group (any sender) | In IF | Out IF
  224.1.1.1                    | W     | N, E
Shared trees (2)
Pros
- More efficient for multicast sessions with M >> 1 sources: the network creates a single delivery tree shared by all senders, with only per-group state in routers and less control overhead.
- Tree (core to receivers) created and maintained independently of the presence and activity of the senders.
Cons
- Less optimal/efficient trees: possibly long paths and delays, depending on the relative location of the source, core, and receiver nodes.
- Traffic concentrates near the core node: danger of congestion.
- Issue: (optimal) core selection.
Examples
- PIM-SM, CBT.
- Explicit join, control-driven (soft state, implicit leave/prune).
DVMRP
DVMRP: Distance Vector Multicast Routing Protocol
- First IP multicast routing protocol (RFC 1075, 1988).
DVMRP at a glance
- Source-based multicast trees, data-driven tree setup.
- Distance vector unicast routing (DVR).
- Reverse path multicast (RPM).
- Support for multicast overlays: tunnels between multicast-enabled routers through networks not supporting multicast. Used to create the Internet MBone (Multicast Backbone).
Routing info base
- DVMRP incorporates its own unicast DVR protocol: separate routing for the unicast service and the multicast service.
- DVR protocol derived from RIP and adapted for RPM. E.g., routers learn the downstream neighbors on the multicast tree for any source address prefix.
Reverse Path Broadcast
Broadcast tree for source s
- The unicast route matching s indicates a router's parent in the broadcast tree for source s (child-to-parent pointer).
Reverse Path Forwarding (RPF)
- Broadcast/multicast packet from source s received on interface i:
- If i is the interface used to forward a unicast packet to s, then forward the packet on all interfaces except i.
- Otherwise discard the packet.
Reverse Path Broadcast (RPB)
- RPF still allows unnecessary packet copies.
- Add parent-child pointers: a router learns which neighbors use it as next hop for each route, and forwards a packet only to these neighbors.
[Figure: broadcast tree for source s (172.16.5.0/24) over the example topology; RPF alone sends unnecessary packet copies, which RPB avoids]
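The two rules can be sketched side by side as a toy model of a single router (interface names and the `children` set of parent-child pointers are illustrative):

```python
def rpf(in_if, all_ifs, rpf_if):
    """Plain RPF: accept the packet only on the interface used to reach
    the source (rpf_if), then flood it on all other interfaces."""
    if in_if != rpf_if:
        return []                              # off the reverse path: drop
    return [i for i in all_ifs if i != in_if]

def rpb(in_if, all_ifs, rpf_if, children):
    """RPB refinement: forward only to neighbors that use this router as
    their next hop towards the source (the parent-child pointers)."""
    return [i for i in rpf(in_if, all_ifs, rpf_if) if i in children]
```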
Reverse Path Multicast (1)
Truncated RPB
- Uses IGMP to avoid unnecessary broadcast in leaf multi-access networks.
Reverse Path Multicast (RPM)
- Creates a multicast tree by pruning unnecessary tree branches from the (truncated) RPB broadcast tree.
Prune mechanism
- A router sends a Prune message to its upstream (parent) router if:
- its connected networks do not contain group members, and
- its neighbor routers either are not downstream (child) routers, or have sent Prune messages.
- Both routers maintain Prune state.
[Figure: routers without downstream members send Prune messages up the RPB tree; leaf networks are monitored via IGMP]
Reverse Path Multicast (2)
Adapting the multicast tree to group membership changes
- Pruning can remove branches when members leave.
- A mechanism is necessary to add branches when members join.
Periodic broadcast & prune
- The multicast tree can be updated by periodically repeating the broadcast & prune process (a parent removes the prune state after some time).
Graft mechanism
- Faster tree extension: a router sends a Graft message, which cancels a previously sent Prune message.
[Figure: a new receiver (Rcvr 4) joins; Graft messages travel upstream, re-attaching the previously pruned branch]
DVMRP operation
Data-driven operation: multicast tree setup when the source starts sending to the group.
- Initially, RPB: all routers receive the packets, learn about the session (source, group), & record state for it.
- Next, RPM: unnecessary branches are pruned from the data paths (but the routers still maintain state).
- Tree updated by periodic broadcast & prune, and by grafts.
[Figure: sender on 172.16.5.0/24 transmits to groups 224.1.1.1 and 224.5.6.7; routers with no downstream members prune]
Router 2: Multicast forwarding cache
  Source prefix  | Multicast group | In IF | Out IF
  172.16.5.0/24  | 224.1.1.1       | N     | E, SE
  172.16.5.0/24  | 224.5.6.7       | N     | S (Prune)
Router 4: Multicast forwarding cache
  Source prefix  | Multicast group | In IF     | Out IF
  172.16.5.0/24  | 224.1.1.1       | N (Prune) | S (Prune)
  172.16.5.0/24  | 224.5.6.7       | N (Prune) | S (Prune)
DVMRP conclusions
DVMRP & RPM shortcomings
- Several design solutions limit DVMRP scalability & efficiency:
- Tree setup and maintenance by periodic broadcast & prune. Can waste a lot of bandwidth, especially for a sparse group spread over a large internetwork (OK for dense groups).
- Per-group & source state in all routers, both on-tree & off-tree, due to source-based trees and to enable fast grafts.
- Controversial feature: embedded DVR protocol.
New generation RPM-based protocol: PIM-DM
- Protocol Independent Multicast: uses the existing unicast routing table, from any routing protocol. No embedded unicast routing.
- Dense Mode: intended for "dense" groups - concentrated in a network region (rather than thinly spread in a large network).
- Uses RPM as described on the previous slides (similar to DVMRP), but with no parent-to-child pointers, hence redundant transmissions in the broadcast phase.
PIM-SM
PIM: Protocol Independent Multicast
- Uses the existing unicast routing table, from any routing protocol. No embedded unicast routing.
- No single solution matches all application contexts well: two protocols, different algorithms.
PIM-DM: Dense Mode
- Efficient multicast for "dense" (concentrated) groups.
- RPM, source-based trees, implicit join, data-driven setup.
- Similar to DVMRP, except it relies on existing unicast routing, hence it does not avoid redundant transmissions in the broadcast phase.
PIM-SM: Sparse Mode
- Efficient multicast for sparsely distributed groups.
- Shared trees, explicit join, control-driven setup.
- After initially using the group's shared tree, members can set up source-based trees. Improved efficiency and scalability.
Rendezvous Points
Rendezvous Point (RP) router
- Core of the multicast shared tree. Meeting point for the group's receivers & senders.
- At any moment, any router must be able to uniquely map a multicast address to an RP.
- Resilience & load balancing: a set of RPs.
RP discovery and mapping
- Several routers are configured as RP-candidate routers for a PIM-SM domain. They elect a Bootstrap Router (BSR).
- The BSR monitors the RP candidates and distributes a list of RP routers (the RP-Set) to all the other routers in the domain.
- A hash function allows any router to uniquely map a multicast address to an RP-Set router.
[Figure: a PIM-SM domain with senders, receivers, and an RP router acting as the core of the shared tree]
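Any deterministic hash shared by all routers yields a consistent mapping. A toy stand-in (the real hash function is defined by the PIM-SM specification; this SHA-256 version is only ours, to show the idea):

```python
import hashlib

def rp_for_group(group: str, rp_set: list) -> str:
    """Deterministically pick one RP from the RP-Set for a group address.
    Every router computing this over the same RP-Set picks the same RP."""
    digest = hashlib.sha256(group.encode()).digest()
    return rp_set[int.from_bytes(digest[:4], "big") % len(rp_set)]
```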
Shared tree (RP-tree) setup
Designated Router (DR)
- Unique PIM-SM router responsible for multicast routing in a subnet.
Receiver join
- To join a group G, a receiver informs the local DR using IGMP.
DR join
- The DR adds (*,G) multicast tree state (group G and any source).
- The DR determines the group's RP and sends a PIM-SM Join(*,G) packet towards the RP.
- At each router on the path, if (*,G) state does not exist, it is created, & the Join(*,G) is forwarded.
- Multicast tree state is soft state: refreshed by periodic Join messages.
[Figure: receivers join via IGMP; Join(*,G) messages propagate hop by hop towards RP(G), creating (*,G) state along the RP-tree]
Sending on the shared tree
Register encapsulation
- The sender's local DR encapsulates each multicast data packet in a PIM-SM Register packet, and unicasts it to the RP.
- The RP decapsulates the data packet and forwards it onto the RP-tree.
- Allows the RP to discover a source, but data delivery is inefficient.
Register-Stop
- The RP reacts to a Register packet by issuing a Join(S,G) towards S.
- At each router on the path, if (S,G) state does not exist, it is created, & the Join(S,G) is forwarded.
- When the (S,G) path is complete, the RP stops the encapsulation by sending (unicast) a PIM-SM Register-Stop packet to the sender's DR.
[Figure: Register-encapsulated data packets unicast from the sender's DR to RP(G); Join(S,G) messages create (S,G) state back towards the sender, followed by a Register-Stop]
Source-specific trees
Shared vs. source-specific tree
- Routers may continue to receive data on the shared RP-tree.
- Often inefficient: e.g., a long detour from the sender to receiver 1.
- PIM-SM allows routers to create a source-specific shortest-path tree.
Transfer to source-specific tree
- A receiver's DR sends a Join(S,G) towards S: creates (S,G) multicast tree state at each hop.
- After receiving data on the (S,G) path, the DR sends a Prune(S,G) towards the RP: removes S from G's shared tree at each hop.
[Figure: example - transfer to the shortest-path tree for receiver 1; Join(S,G) towards the sender, Prune(S,G) towards the RP]
PIM-SM conclusions
Advantages
- Independence from the unicast routing protocol.
- Better scalability, especially for sparsely distributed groups:
- Explicit join, control-driven tree setup: no data broadcast, no flooding of group membership information. Per-session state maintained only by on-tree routers.
- Shared trees: routers maintain per-group state, instead of per-source-and-group state.
- Flexibility and performance: optional, selective transfer to source-specific trees (e.g., triggered by data rate).
Weaknesses
- Much more complex than PIM-DM.
- Control traffic overhead (periodic Joins) to maintain the soft multicast tree state.
MOSPF
Natural multicast extension of the OSPF (Open Shortest Path First) link-state unicast routing protocol.
[Figure: OSPF hierarchical network structure - a backbone area connected to Areas 1-3 through Area Border Routers (ABRs)]
MOSPF at a glance
- Source-based shortest-path multicast trees, data-driven setup.
- Multicast extensions for both intra-area and inter-area routing.
- Extends the OSPF topology database (per area) with info about the location of the groups' members.
- Extends the OSPF shortest-path computation (Dijkstra) to determine multicast forwarding. For each pair (source, destination group), each router:
- computes the same shortest-path tree rooted at the source,
- finds its own position in the tree,
- and determines if and where to forward a multicast datagram.
OSPF review (single area)
Link state advertisements
- Each router maintains a link-state table describing its links (attached networks & routers). It sends Link State Advertisements (LSAs) to all other routers (hop-by-hop flooding).
Topology database
- All routers build from the LSAs the same network topology database (a directed graph labeled with link costs).
Routing table computation
- Each router independently runs the same algorithm (Dijkstra) on the topology, to compute a shortest-path tree rooted at itself, to all destinations.
- A destination-based unicast routing table is derived from the tree.
[Figure: OSPF topology (link-state) database for one OSPF area (routers R1-R7, networks N1-N7); shortest-path tree computed by router R1 using the Dijkstra algorithm]
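The per-router computation can be sketched in a few lines of Dijkstra (toy graph, illustrative router names; each router would run this on the shared database with itself as root):

```python
import heapq

def shortest_path_tree(graph, root):
    """graph: {node: {neighbor: cost}}. Returns {node: parent} describing
    the shortest-path tree rooted at `root` (the root's parent is None)."""
    dist, parent = {root: 0}, {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                     # stale queue entry, already settled
        for v, cost in graph[u].items():
            nd = d + cost
            if v not in dist or nd < dist[v]:
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent
```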
MOSPF: topology database
Local group database
- Records group membership in a router's directly attached networks. Created using IGMP.
Group-membership LSA
- Sent by a router to communicate local group members to all other routers (local transit vertices that should remain on a group's tree).
Topology database extension for multicast
- A router or a transit network is labeled with the multicast groups announced in Group-membership LSAs.
[Figure: routers R3, R5, and R7 have local members of group m1 (learned via IGMP) and flood Group-membership LSAs (R3, m1), (R5, m1), (R7, m1)]
MOSPF: multicast tree (intra-area)
Source-based multicast tree
- Shortest-path tree from the source to the group members (receivers).
Data-driven tree setup
- A router computes the tree and the multicast forwarding state when it receives the first multicast datagram (i.e., learns about the new session).
Multicast tree & state
- Routers independently determine the same shortest-path tree rooted at the source, using Dijkstra.
- The tree is pruned according to the group membership labels.
- The router finds its position in the pruned tree and derives the forwarding cache entry.
[Figure: MOSPF link-state database (one area); shortest-path tree for (N1, m1), where source 172.16.5.1 on 172.16.5.0/24 sends to m1 = 224.1.1.1]
Router 2: Multicast forwarding cache
  Source      | Multicast group | In IF | Out IF
  172.16.5.1  | 224.1.1.1       | N     | E, SE
MOSPF conclusions
Advantages
- OSPF is the interior routing protocol recommended by the IETF. MOSPF is the natural choice of multicast routing protocol in networks using OSPF.
- More efficient than DVMRP/RPM: no data broadcast.
Weaknesses
- Various features limit scalability and efficiency:
- Dynamic (!) group membership advertised by flooding.
- Multicast state per group & per source, maintained in on-tree as well as off-tree routers.
- Relatively complex computations to determine multicast forwarding: for each new multicast transmission (source, group), repeated when the group/topology changes.
- Few implementations?
Annex
IGMP v1/v2 - Group Management
IGMP v.2 - Group Management
IGMP v2 enhancements:
- Election of a querier router (lowest IP address).
- Explicit leave (reduces leave latency).