winter 2008 Overlay and i3 1
Overlay, End System Multicast and i3
• General Concept of Overlays
• Some Examples
• End-System Multicast
  – Rationale
  – How to construct a “self-organizing” overlay
  – Performance in supporting conferencing applications
• Internet Indirection Infrastructure (i3)
  – Motivation and basic ideas
  – Implementation overview
  – Applications

Readings: read the required papers

Overlay Networks

Overlay Networks: focus at the application level

Overlay Networks
• A logical network built on top of a physical network
  – Overlay links are tunnels through the underlying network
• Many logical networks may coexist at once
  – Over the same underlying network
  – Each providing its own particular service
• Nodes are often end hosts
  – Acting as intermediate nodes that forward traffic
  – Providing a service, such as access to files
• Who controls the nodes providing service?
  – The party providing the service (e.g., Akamai)
  – A distributed collection of end users (e.g., peer-to-peer)

Routing Overlays
• Alternative routing strategies
  – No application-level processing at the overlay nodes
  – Packet-delivery service with new routing strategies
• Incremental enhancements to IP
  – IPv6, multicast, mobility, security
• Revisiting where a function belongs
  – End-system multicast: multicast distribution by end hosts
• Customized path selection
  – Resilient Overlay Networks: robust packet delivery

IP Tunneling
• An IP tunnel is a virtual point-to-point link
  – Illusion of a direct link between two separated nodes
• Encapsulation of the packet inside an IP datagram
  – Node B sends a packet to node E
  – … containing another packet as the payload

[Figure: logical view shows a direct tunnel link between B and E on the path A, B, E, F; physical view shows the tunnel crossing the underlying network between B and E]

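The encapsulation step above can be sketched in a few lines. This is a toy model with made-up `Packet` fields, not a real network stack; it only shows the wrap-at-B, unwrap-at-E pattern:

```python
# Minimal sketch of IP-in-IP tunneling: node B wraps the original
# packet as the payload of a new packet addressed to tunnel exit E.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object  # application data, or another Packet when tunneled

def encapsulate(pkt: Packet, tunnel_entry: str, tunnel_exit: str) -> Packet:
    """At the tunnel entry, wrap pkt inside an outer packet."""
    return Packet(src=tunnel_entry, dst=tunnel_exit, payload=pkt)

def decapsulate(outer: Packet) -> Packet:
    """At the tunnel exit, unwrap and forward the inner packet."""
    assert isinstance(outer.payload, Packet)
    return outer.payload

inner = Packet(src="A", dst="F", payload="data")
outer = encapsulate(inner, "B", "E")   # crosses the physical path B..E
assert decapsulate(outer).dst == "F"   # E recovers the original packet
```

The underlying routers only ever see the outer header (B to E), which is what creates the illusion of a direct link.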
6Bone: Deploying IPv6 over IPv4

[Figure: logical view shows an IPv6 tunnel between B and E on the path A, B, E, F; physical view inserts IPv4-only routers C and D between B and E. An IPv6 packet (Flow: X, Src: A, Dest: F) travels A-to-B as native IPv6, is carried B-to-E encapsulated inside an IPv4 datagram (Src: B, Dest: E), and travels E-to-F as native IPv6 again]

MBone: IP Multicast
• Multicast
  – Delivering the same data to many receivers
  – Avoiding sending the same data many times
• IP multicast
  – Special addressing, forwarding, and routing schemes
  – Not widely deployed, so MBone tunneled between nodes

[Figure: unicast delivery vs. multicast delivery]

End-System Multicast
• IP multicast still is not widely deployed
  – Technical and business challenges
  – Should multicast be a network-layer service?
• Multicast tree of end hosts
  – Allow end hosts to form their own multicast tree
  – Hosts receiving the data help forward it to others

RON: Resilient Overlay Networks
Premise: by building an application overlay network, one can increase the performance and reliability of routing.

[Figure: a two-hop (application-level) Berkeley-to-Princeton route through an application-layer router at Yale]

RON Can Outperform IP Routing
• IP routing does not adapt to congestion
  – But RON can reroute when the direct path is congested
• IP routing is sometimes slow to converge
  – But RON can quickly direct traffic through an intermediary
• IP routing depends on AS routing policies
  – But RON may pick paths that circumvent policies
• Then again, RON has its own overheads
  – Packets go in and out at intermediate nodes
    • Performance degradation, load on hosts, and financial cost
  – Probing overhead to monitor the virtual links
    • Limits RON to deployments with a small number of nodes

Secure Communication Over Insecure Links
• Encrypt packets at entry and decrypt at exit
• An eavesdropper cannot snoop the data
• … or determine the real source and destination

Communicating With Mobile Users
• A mobile user changes locations frequently
  – So, the IP address of the machine changes often
• The user wants applications to continue running
  – So, the change in IP address needs to be hidden
• Solution: a fixed gateway forwards packets
  – The gateway has a fixed IP address
  – … and keeps track of the mobile’s address changes

[Figure: traffic from www.cnn.com reaches the mobile host via the fixed gateway]

Unicast Emulation of Multicast

[Figure: the source at CMU sends a separate unicast copy through the routers to each of the end systems at Gatech, Stanford, and Berkeley]

IP Multicast

Key architectural decision: add support for multicast in the IP layer
• No duplicate packets
• Highly efficient bandwidth usage

[Figure: routers with multicast support replicate packets from CMU toward Gatech, Stanford, and Berkeley]

Key Concerns with IP Multicast
• Scalability with number of groups
  – Routers maintain per-group state
  – Analogous to per-flow state for QoS guarantees
  – Aggregation of multicast addresses is complicated
• Supporting higher-level functionality is difficult
  – IP multicast: best-effort multi-point delivery service
  – End systems are responsible for handling higher-level functionality
  – Reliability and congestion control for IP multicast are complicated
• Deployment is difficult and slow
  – ISPs are reluctant to turn on IP multicast

End System Multicast

[Figure: physical topology with end hosts Stan1 and Stan2 at Stanford, Berk1 and Berk2 at Berkeley, plus CMU and Gatech; an overlay tree connects the end hosts directly, with no multicast support in the routers]

Potential Benefits
• Scalability
  – Routers do not maintain per-group state
  – End systems do, but they participate in very few groups
• Easier to deploy
• Potentially simplifies support for higher-level functionality
  – Leverage computation and storage of end systems
  – For example, for buffering packets, transcoding, ACK aggregation
  – Leverage solutions for unicast congestion control and reliability

Design Questions
• Is End System Multicast feasible?
  – Target applications with small and sparse groups
• How to build an efficient application-layer multicast “tree” or overlay network?
  – Narada: a distributed protocol for constructing efficient overlay trees among end systems
  – Simulation and Internet evaluation results demonstrate that Narada can achieve good performance

Performance Concerns

[Figure: two problematic overlay trees over the same hosts (CMU, Gatech, Stan1, Stan2, Berk1, Berk2). In one, duplicate packets waste bandwidth on a shared physical link; in the other, the delay from CMU to Berk1 increases because of a long overlay detour]

What is an efficient overlay tree?
• Ideally,
  – The delay between the source and receivers is small
  – The number of redundant packets on any physical link is low
• Heuristic used:
  – Every member in the tree has a small degree
  – Degree is chosen to reflect the bandwidth of the connection to the Internet

[Figure: three overlay trees over CMU, Gatech, Stan1, Stan2, Berk1, Berk2: one with high latency (long detours), one with high degree (effectively unicast from Berk1), and one “efficient” overlay that balances both]

Why is self-organization hard?
• Dynamic changes in group membership
  – Members may join and leave dynamically
  – Members may die
• Limited knowledge of network conditions
  – Members do not know the delay to each other when they join
  – Members probe each other to learn network-related information
  – The overlay must self-improve as more information becomes available
• Dynamic changes in network conditions
  – Delay between members may vary over time due to congestion

Narada Design
• Step 1: construct a “mesh”, a richer overlay that may have cycles and includes all group members
  – Members have low degrees
  – The shortest path delay between any pair of members along the mesh is small
• Step 2: construct source-rooted shortest-delay spanning trees of the mesh
  – Built using well-known routing algorithms
  – Members have low degrees
  – Small delay from source to receivers

[Figure: a mesh over CMU, Gatech, Berk1, Berk2, Stan1, Stan2, and a spanning tree derived from it]

Narada Components
• Mesh management:
  – Ensures the mesh remains connected in the face of membership changes
• Mesh optimization:
  – Distributed heuristics for ensuring that the shortest path delay between members along the mesh is small
• Spanning tree construction:
  – Routing algorithms for constructing data-delivery trees
  – Distance-vector routing and reverse-path forwarding

Optimizing Mesh Quality
• Members periodically probe other members at random
• A new link is added if
    Utility gain of adding link > Add Threshold
• Members periodically monitor existing links
• An existing link is dropped if
    Cost of dropping link < Drop Threshold

[Figure: a poor overlay topology over Berk1, CMU, Stan1, Stan2, Gatech1, Gatech2]

The terms defined
• Utility gain of adding a link, based on
  – The number of members to which routing delay improves
  – How significant the improvement in delay to each member is
• Cost of dropping a link, based on
  – The number of members to which routing delay increases, for either neighbor
• Add/Drop thresholds are functions of:
  – The member’s estimation of group size
  – The current and maximum degree of the member in the mesh

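These definitions can be sketched concretely. The utility and cost formulas below are illustrative simplifications chosen for this sketch, not Narada’s exact definitions; the delay numbers are made up:

```python
# Illustrative Narada-style add/drop decisions. Utility gain here is
# the sum, over members whose delay improves, of the fractional delay
# improvement -- a simplified stand-in for the protocol's formula.

def utility_gain(current_delay, new_delay):
    gain = 0.0
    for member, old in current_delay.items():
        new = new_delay.get(member, old)
        if new < old:
            gain += (old - new) / old   # weight by how significant
    return gain

def should_add_link(current_delay, new_delay, add_threshold):
    return utility_gain(current_delay, new_delay) > add_threshold

def should_drop_link(cost, drop_threshold):
    # cost: number of members whose routing delay increases,
    # counted for either endpoint of the link
    return cost < drop_threshold

# Delays (ms) from one member to the others, before/after adding a link
before = {"CMU": 100, "Gatech1": 120, "Stan1": 60}
after  = {"CMU": 50,  "Gatech1": 80,  "Stan1": 60}
assert should_add_link(before, after, add_threshold=0.5)
```

In the real protocol the thresholds are not constants: they vary with the member’s estimate of group size and its current degree, which is what keeps degrees bounded.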
Desirable properties of heuristics
• Stability: a dropped link will not be immediately re-added
• Partition avoidance: a partition of the mesh is unlikely to be caused by dropping any single link

[Figure: two probe examples over Berk1, CMU, Stan1, Stan2, Gatech1, Gatech2. In one, delay improves to Stan1 and CMU but only marginally, so the link is not added; in the other, delay improves to CMU and Gatech1 significantly, so the link is added]

[Figure: a link used by Berk1 only to reach Gatech2 (and vice versa) is dropped, yielding an improved mesh]

Performance Metrics
• Delay between members using Narada
• Stress, defined as the number of identical copies of a packet that traverse a physical link

[Figure: an overlay tree in which one physical link into Stanford carries two identical copies (stress = 2), and a tree in which the delay from CMU to Berk1 increases]

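The stress metric is easy to state as code. A minimal sketch, with made-up hosts and an assumed mapping from overlay links to the physical links they traverse:

```python
# Sketch: computing per-link "stress" for an overlay tree mapped onto
# a physical topology: how many overlay edges cross each physical link.

from collections import Counter

def link_stress(overlay_edges, physical_path):
    """overlay_edges: list of (host_a, host_b) overlay links.
    physical_path: maps each overlay link to its physical links."""
    stress = Counter()
    for edge in overlay_edges:
        for phys_link in physical_path[edge]:
            stress[phys_link] += 1
    return stress

overlay = [("CMU", "Stan1"), ("CMU", "Stan2")]
paths = {
    ("CMU", "Stan1"): ["CMU-core", "core-Stanford"],
    ("CMU", "Stan2"): ["CMU-core", "core-Stanford"],
}
s = link_stress(overlay, paths)
assert s["core-Stanford"] == 2   # two identical copies on one link
```

IP multicast achieves stress 1 on every link by construction; an overlay can only approximate that.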
Factors affecting performance
• Topology model
  – Waxman variant
  – Mapnet: connectivity modeled after several ISP backbones
  – ASMap: based on inter-domain Internet connectivity
• Topology size
  – Between 64 and 1024 routers
• Group size
  – Between 16 and 256
• Fanout range
  – The number of neighbors each member tries to maintain in the mesh

ESM Conclusions
• Proposed in 1989, IP multicast is not yet widely deployed
  – Per-group state, control state complexity, and scaling concerns
  – Difficult to support higher-layer functionality
  – Difficult to deploy and to get ISPs to turn on IP multicast
• Is IP the right layer for supporting multicast functionality?
• For small-sized groups, an end-system overlay approach
  – is feasible
  – has a low performance penalty compared to IP multicast
  – has the potential to simplify support for higher-layer functionality
  – allows for application-specific customizations

Supporting Conferencing in ESM
• Framework
  – Unicast congestion control on each overlay link
  – Adapt to the data rate using transcoding
• Objective
  – High bandwidth and low latency to all receivers along the overlay

[Figure: a 2 Mbps source rate is carried over overlay links A, B, C, D with unicast congestion control; a transcoder reduces the rate to 0.5 Mbps for a receiver behind DSL]

Enhancements of Overlay Design
• Two new issues addressed
  – Dynamically adapt to changes in network conditions
  – Optimize overlays for multiple metrics
    • Latency and bandwidth
• Study in the context of the Narada protocol
  – Techniques presented apply to all self-organizing protocols

Adapt to Dynamic Metrics
• Adapt overlay trees to changes in network conditions
  – Monitor bandwidth and latency of overlay links
• Link measurements can be noisy
  – Aggressive adaptation may cause overlay instability
• Capture the long-term performance of a link
  – Exponential smoothing, metric discretization

[Figure: bandwidth over time: the raw estimate is noisy; the smoothed estimate filters transient dips (do not react); the discretized estimate changes only on persistent shifts (react)]

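A minimal sketch of the two techniques named above. The smoothing factor and discretization bucket are assumptions for illustration, not the paper’s tuned parameters:

```python
# Sketch: smoothing and discretizing noisy link measurements so that
# only persistent changes trigger overlay adaptation.

def smooth(samples, alpha=0.2):
    """Exponentially weighted moving average of raw estimates."""
    est = samples[0]
    out = [est]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
        out.append(est)
    return out

def discretize(value, bucket=1.0):
    """Map a smoothed estimate onto coarse levels; small wiggles
    inside a bucket do not change the level."""
    return round(value / bucket) * bucket

raw = [2.0, 1.9, 2.1, 0.4, 2.0, 2.05, 1.95]   # Mbps; one transient dip
smoothed = smooth(raw)
levels = [discretize(v) for v in smoothed]
assert levels[3] == 2.0   # the single-sample dip does not change the level
```

A sustained drop, by contrast, would pull the smoothed estimate into a lower bucket after a few samples, and only then would the overlay react.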
Optimize Overlays for Dual Metrics
• Prioritize bandwidth over latency
• Break ties with shorter latency

[Figure: receiver X can reach the source (source rate 2 Mbps) via a 30 ms, 1 Mbps path or a 60 ms, 2 Mbps path; the higher-bandwidth path is chosen despite its latency]

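The dual-metric rule is just a lexicographic comparison. A sketch with illustrative numbers:

```python
# Sketch of the dual-metric path choice: prefer higher bandwidth,
# and break ties with lower latency.

def better_path(paths):
    """paths: list of (latency_ms, bandwidth_mbps) candidates."""
    # maximize bandwidth first; among equals, minimize latency
    return max(paths, key=lambda p: (p[1], -p[0]))

candidates = [(30, 1.0), (60, 2.0)]
assert better_path(candidates) == (60, 2.0)   # bandwidth dominates
tie = [(30, 2.0), (60, 2.0)]
assert better_path(tie) == (30, 2.0)          # latency breaks the tie
```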
Example of Protocol Behavior
• All members join at time 0
• Single sender, CBR traffic

[Figure: mean receiver bandwidth over time. The overlay first acquires network information and self-organizes, reaches a stable overlay, and later adapts to network congestion]

Evaluation Goals
• Can ESM provide application level performance comparable to IP Multicast?
• What network metrics must be considered while constructing overlays?
• What is the network cost and overhead?
Evaluation Overview
• Compare performance of ESM with
  – Benchmark (IP multicast)
  – Other overlay schemes that consider fewer network metrics
• Evaluate schemes in different scenarios
  – Vary host set, source rate
• Performance metrics
  – Application perspective: latency, bandwidth
  – Network perspective: resource usage, overhead

Benchmark Scheme
• IP multicast is not deployed
• Sequential Unicast: an approximation
  – Measure the bandwidth and latency of the unicast path from the source to each receiver
  – Performance similar to IP multicast with ubiquitous deployment

[Figure: the source unicasts to receivers A, B, and C in turn]

Overlay Schemes

Overlay Scheme     | Choice of Metrics
-------------------|----------------------
Bandwidth-Latency  | Bandwidth and latency
Bandwidth-Only     | Bandwidth
Latency-Only       | Latency
Random             | Neither

Experiment Methodology
• Compare different schemes on the Internet
  – Ideally: run different schemes concurrently
  – Interleave experiments of the schemes
  – Repeat the same experiments at different times of day
  – Average results over 10 experiments
• For each experiment
  – All members join at the same time
  – Single source, CBR traffic
  – Each experiment lasts 20 minutes

Application Level Metrics
• Bandwidth (throughput) observed by each receiver
• RTT between the source and each receiver along the overlay

[Figure: the data path from the source through receivers A, B, C, D, with RTT measured along the overlay]

These measurements include queueing and processing delays at end systems.

Performance of an Overlay Scheme
“Quality” of the overlay tree produced by a scheme:
• Sort (“rank”) receivers based on performance
• Take the mean and standard deviation of the performance at each rank across multiple experiments
• The standard deviation shows the variability of tree quality

Different runs of the same scheme may produce different but “similar quality” trees.

[Figure: two experiments rank CMU, MIT, and Harvard by RTT (e.g., 30 ms and 32 ms at rank 1, 40 ms and 42 ms at rank 2); mean and standard deviation are computed per rank]

Factors Affecting Performance
• Heterogeneity of the host set
  – Primary Set: 13 university hosts in the U.S. and Canada
  – Extended Set: 20 hosts, including hosts in Europe, Asia, and behind ADSL
• Source rate
  – Fewer Internet paths can sustain a higher source rate
  – More intelligence is required in overlay construction

Three Scenarios Considered
• Does ESM work in different scenarios?
• How do different schemes perform under various scenarios?

Scenarios, from lower to higher “stress” on overlay schemes:
  Primary Set, 1.2 Mbps → Primary Set, 2.4 Mbps → Extended Set, 2.4 Mbps

BW, Primary Set, 1.2 Mbps

[Figure: bandwidth vs. receiver rank. The naïve (random) scheme performs poorly even in this less “stressful” scenario; one outlier is due to an Internet pathology]

Scenarios Considered
• Does an overlay approach continue to work under a more “stressful” scenario?
• Is it sufficient to consider just a single metric?
  – Bandwidth-Only, Latency-Only

Scenarios, from lower to higher “stress” on overlay schemes:
  Primary Set, 1.2 Mbps → Primary Set, 2.4 Mbps → Extended Set, 2.4 Mbps

BW, Extended Set, 2.4 Mbps

[Figure: optimizing only for latency gives poor bandwidth performance; there is no strong correlation between latency and bandwidth]

RTT, Extended Set, 2.4 Mbps

[Figure: optimizing only for bandwidth gives poor latency performance; Bandwidth-Only cannot avoid poor-latency links or long path lengths]

Summary so far…
• For best application performance: adapt dynamically to both latency and bandwidth metrics
• Bandwidth-Latency performs comparably to IP Multicast (Sequential-Unicast)
• What is the network cost and overhead?
Resource Usage (RU)
Captures the consumption of network resources by an overlay tree
• Overlay link RU = propagation delay
• Tree RU = sum of link RUs

[Figure: two trees over UCSD, CMU, and U. Pitt: an efficient tree (40 ms + 2 ms = 42 ms) vs. an inefficient one (40 ms + 40 ms = 80 ms)]

Scenario: Primary Set, 1.2 Mbps (normalized to IP Multicast RU)

Scheme             | Normalized RU
-------------------|--------------
IP Multicast       | 1.0
Bandwidth-Latency  | 1.49
Random             | 2.24
Naïve Unicast      | 2.62

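The RU definition above is a one-line computation; a sketch using the figure’s example delays:

```python
# Sketch: resource usage (RU) of an overlay tree = sum of the
# propagation delays (ms) of its overlay links.

def tree_ru(link_delays_ms):
    return sum(link_delays_ms)

efficient   = tree_ru([40, 2])    # e.g., UCSD-CMU plus CMU-U.Pitt
inefficient = tree_ru([40, 40])   # both receivers reached via long links
assert efficient == 42 and inefficient == 80

# Normalizing one tree against another, as in the table above:
assert round(inefficient / efficient, 2) == 1.9
```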
Protocol Overhead

Protocol overhead = total non-data traffic (in bytes) / total data traffic (in bytes)

• Results: Primary Set, 1.2 Mbps
  – Average overhead = 10.8%
  – 92.2% of the overhead is due to bandwidth probes
• The current scheme employs active probing for available bandwidth
  – Simple heuristics to eliminate unnecessary probes
  – Focus of our current research

Internet Indirection Infrastructure (i3)
Motivations
• Today’s Internet is built around a unicast point-to-point communication abstraction:
  – Send packet “p” from host “A” to host “B”
• This abstraction allows the Internet to be highly scalable and efficient, but…
• … it is not appropriate for applications that require other communication primitives:
  – Multicast
  – Anycast
  – Mobility
  – …

Why?
• Point-to-point communication implicitly assumes there is one sender and one receiver, and that they are placed at fixed and well-known locations
  – E.g., a host identified by the IP address 128.32.xxx.xxx is located in Berkeley

IP Solutions
• Extend IP to support new communication primitives, e.g.,
  – Mobile IP
  – IP multicast
  – IP anycast
• Disadvantages:
  – Difficult to implement while maintaining the Internet’s scalability (e.g., multicast)
  – Requires community-wide consensus, which is hard to achieve in practice

Application Level Solutions
• Implement the required functionality at the application level, e.g.,
  – Application-level multicast (e.g., Narada, Overcast, Scattercast…)
  – Application-level mobility
• Disadvantages:
  – Efficiency is hard to achieve
  – Redundancy: each application implements the same functionality over and over again
  – No synergy: each application usually implements only one service; services are hard to combine

Key Observation
• Virtually all previous proposals use indirection, e.g.,
  – Physical indirection point: mobile IP
  – Logical indirection point: IP multicast

“Any problem in computer science can be solved by adding a layer of indirection”

i3 Solution
• Build an efficient indirection layer on top of IP
• Use an overlay network to implement this layer
  – Incrementally deployable; no need to change IP

[Figure: the indirection layer sits between IP/TCP/UDP and the application in the protocol stack]

Internet Indirection Infrastructure (i3): Basic Ideas
• Each packet is associated with an identifier id
• To receive a packet with identifier id, receiver R maintains a trigger (id, R) in the overlay network

[Figure: the sender sends (id, data) into the overlay; the trigger (id, R) forwards it to receiver R as (R, data)]

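The rendezvous abstraction can be sketched in a few lines. This toy overlay is a single dictionary rather than a distributed hash table, so it only shows the trigger semantics:

```python
# Minimal sketch of i3 rendezvous: receivers install triggers
# (id, addr); a packet (id, data) is forwarded to every matching
# trigger. Names and IDs here are toy values.

class I3Overlay:
    def __init__(self):
        self.triggers = {}          # id -> list of receiver addresses

    def insert_trigger(self, ident, addr):
        self.triggers.setdefault(ident, []).append(addr)

    def send_packet(self, ident, data):
        """Deliver (id, data) to all receivers with a matching trigger."""
        return [(addr, data) for addr in self.triggers.get(ident, [])]

overlay = I3Overlay()
overlay.insert_trigger("id37", "R")
assert overlay.send_packet("id37", "hello") == [("R", "hello")]
```

Because delivery is keyed on the identifier rather than an address, mobility (re-inserting the trigger with a new address) and multicast (several triggers sharing one id) fall out of the same mechanism, as the next slides show.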
Service Model
• API
  – sendPacket(p);
  – insertTrigger(t);
  – removeTrigger(t) // optional
• Best-effort service model (like IP)
• Triggers are periodically refreshed by end-hosts
• ID length: 256 bits

Mobility
• A host just needs to update its trigger as it moves from one subnet to another

[Figure: as the receiver moves from R1 to R2, the trigger (id, R1) is replaced by (id, R2); the sender keeps sending (id, data) unchanged]

Multicast
• Receivers insert triggers with the same identifier
• Can dynamically switch between multicast and unicast

[Figure: triggers (id, R1) and (id, R2) both match the packet (id, data), so the overlay replicates it as (R1, data) and (R2, data)]

Anycast
• Use longest prefix matching instead of exact matching
  – Prefix p: anycast group identifier
  – Suffix s_i: encodes application semantics, e.g., location

[Figure: triggers (p|s1, R1), (p|s2, R2), (p|s3, R3); a packet (p|a, data) is delivered to the single receiver whose suffix best matches a]

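Longest-prefix matching over identifiers is easy to sketch. The IDs below are short bit-strings for readability (real i3 IDs are 256 bits), and the receiver names are illustrative:

```python
# Sketch of i3-style anycast: among installed triggers, deliver to the
# one whose identifier shares the longest prefix with the packet's ID.

def common_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def anycast_match(triggers, packet_id):
    """triggers: dict of trigger id -> receiver address."""
    best = max(triggers, key=lambda t: common_prefix_len(t, packet_id))
    return triggers[best]

triggers = {
    "1010" + "0101": "R1",
    "1010" + "1010": "R2",
    "1010" + "1101": "R3",
}
# All triggers share the group prefix 1010; the suffix picks one receiver.
assert anycast_match(triggers, "1010" + "1100") == "R3"
```

The same matching rule also underlies the load-balancing and proximity examples later: random or location-encoding suffixes steer each packet to one member of the group.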
Service Composition: Sender Initiated
• Use a stack of IDs to encode the sequence of operations to be performed on the data path
• Advantages
  – No need to configure the path
  – Load balancing and robustness are easy to achieve

[Figure: the sender sends ((id_T, id), data); the trigger (id_T, T) routes the packet through transcoder T, which then sends (id, data) to the trigger (id, R) and on to receiver R]

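The ID-stack mechanism can be sketched as a small interpreter. This is a simplification: identifiers, trigger contents, and the transcoder behavior are made-up, and in real i3 each service point forwards the remaining stack itself rather than a central loop doing it:

```python
# Sketch of forwarding with a stack of IDs (sender-initiated service
# composition). Each step pops the top of the stack: an identifier is
# resolved through its trigger; an address delivers the packet there.

def forward(stack, data, triggers, hosts):
    """triggers: id -> next destination; hosts: addr -> handler."""
    trace = []
    while stack:
        top = stack.pop(0)
        if top in triggers:              # an i3 identifier: resolve it
            stack.insert(0, triggers[top])
        else:                            # an address: deliver the packet
            trace.append(top)
            data = hosts[top](data)
    return trace, data

triggers = {"id_T": "T", "id": "R"}
hosts = {"T": lambda d: d.lower(),       # "transcoder" T (illustrative)
         "R": lambda d: d}               # receiver consumes the data
trace, out = forward(["id_T", "id"], "DATA", triggers, hosts)
assert trace == ["T", "R"] and out == "data"
```

Because the sender never names T’s address directly, any host holding the (id_T, ·) trigger can serve as the transcoder, which is what makes load balancing and failover easy.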
Service Composition: Receiver Initiated
• The receiver can also specify the operations to be performed on the data
  – E.g., the receiver inserts the trigger (id, (id_F, R)), so packets sent to id are first routed through firewall F

[Figure: the sender sends (id, data); the trigger (id, (id_F, R)) routes it through the firewall trigger (id_F, F), and F then forwards (R, data) to the receiver]

Quick Implementation Overview
• i3 is implemented on top of Chord
  – But could easily use CAN, Pastry, Tapestry, etc.
• Each trigger t = (id, R) is stored on the node responsible for id
• Chord recursive routing is used to find the best matching trigger for a packet p = (id, data)

Routing Example
• R inserts trigger t = (37, R); S sends packet p = (37, data)
• An end-host needs to know only one i3 node to use i3
  – E.g., S knows node 3, R knows node 35

[Figure: a Chord circle (identifiers 0 to 2^m - 1) with nodes responsible for the ranges [40..3], [4..7], [8..20], [21..35], and [36..41]. The trigger (37, R) is stored on the node owning [36..41]; send(37, data) is routed around the circle to that node, which then issues send(R, data)]

Optimization #1: Path Length
• The sender/receiver caches the i3 node mapping a specific ID
• Subsequent packets are sent via one i3 node

[Figure: the sender caches the node responsible for ID 37 and sends (37, data) directly to it; that node forwards (R, data) to the receiver]

Optimization #2: Triangular Routing
• Use a well-known (public) trigger for the initial rendezvous
• Exchange a pair of well-located private triggers
• Use the private triggers to send data traffic

[Figure: the receiver advertises a private trigger (30, R) via the public trigger 37; the sender likewise supplies its own private trigger. Data then flows through the i3 node owning identifier 30, which is well located for both endpoints, avoiding the detour through the node owning 37]

Example 1: Heterogeneous Multicast
• The sender is not aware of the transformations

[Figure: the MPEG sender sends (id, data). Receiver R2 (MPEG) holds the trigger (id, R2) and gets the data directly. Receiver R1 (JPEG) instead inserts the trigger (id, (id_MPEG/JPEG, R1)), so its copy is first routed through the MPEG-to-JPEG transcoder S_MPEG/JPEG and then sent to R1]

Example 2: Scalable Multicast
• i3 doesn’t provide direct support for scalable multicast
  – Triggers with the same identifier are mapped onto the same i3 node
• Solution: have end-hosts build a hierarchy of triggers of bounded degree

[Figure: receivers R1 and R2 hold triggers (g, R1) and (g, R2); a chained trigger (g, x) fans out to (x, R3) and (x, R4), so (g, data) reaches all four receivers while each i3 node replicates to only a bounded number of destinations]

Example 2: Scalable Multicast (Discussion)
Unlike IP multicast, i3:
1. Implements only small-scale replication
   – Allows the infrastructure to remain simple, robust, and scalable
2. Gives end-hosts control over routing, which enables end-hosts to
   – Achieve scalability, and
   – Optimize tree construction to match their needs, e.g., delay, bandwidth

Example 3: Load Balancing
• Servers insert triggers with IDs that have random suffixes
• Clients send packets with IDs that have random suffixes

[Figure: servers S1 through S4 insert triggers 1010 0010, 1010 0101, 1010 1010, and 1010 1101, all sharing the group prefix 1010. Client A’s packet send(1010 0110, data) and client B’s packet send(1010 1110, data) are each delivered to the server whose suffix is the closest match, spreading load across the servers]

Example 4: Proximity
• The suffixes of trigger and packet IDs encode the server and client locations

[Figure: servers S1, S2, S3 insert triggers 1000 0010, 1000 1010, and 1000 1101; a client’s packet send(1000 0011, data) matches the trigger of the nearby server S1]

Outline
• Implementation
• Examples
• Security applications
  – Protection against DoS attacks
  – Routing as a service
  – Service composition platform

Applications: Protecting Against DoS
• Problem scenario: an attacker floods the incoming link of the victim
• Solution: stop attacking traffic before it arrives at the incoming link
  – Today: call the ISP to stop the traffic, and hope for the best!
• Our approach: give end-hosts control over which packets they receive
  – Enable end-hosts to stop the attacks in the network

Why End-Hosts (and not the Network)?
• End-hosts can better react to an attack
  – Aware of the semantics of the traffic they receive
  – Know what traffic they want to protect
• End-hosts may be in a better position to detect an attack
  – Flash crowd vs. DoS

Some Useful Defenses
1. White-listing: avoid receiving packets on arbitrary ports
2. Traffic isolation:
   – Contain the traffic of an application under attack
   – Protect the traffic of established connections
3. Throttling new connections: control the rate at which new connections are opened (per sender)

1. White-listing
• Packets not addressed to open ports are dropped in the network
  – Create a public trigger for each port in the white list
  – Allocate a private trigger for each new connection

[Figure: the sender contacts the receiver’s public trigger ID_P, supplying its own private trigger ID_S; the receiver replies via ID_S with a private trigger ID_R; data then flows over the private triggers. ID_P is a public trigger; ID_S and ID_R are private triggers]

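The white-listing idea follows directly from trigger semantics: the overlay only forwards packets whose identifier matches an installed trigger, so traffic to a port without a public trigger never reaches the host’s access link. A toy sketch (trigger names are illustrative):

```python
# Sketch of trigger-based white-listing: forward a packet only if a
# matching trigger exists; otherwise it is dropped inside the network,
# before it can reach the victim's incoming link.

def deliver(triggers, ident, data):
    """triggers: id -> receiver address. Returns the delivery, or
    None if the packet is dropped in the network."""
    if ident in triggers:
        return (triggers[ident], data)
    return None

triggers = {"ID_P_http": "R"}            # public trigger for an open port
assert deliver(triggers, "ID_P_http", "GET /") == ("R", "GET /")
assert deliver(triggers, "ID_P_ssh", "probe") is None   # not white-listed
```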
2. Traffic Isolation
• Drop triggers being flooded without affecting other triggers
  – Protect ongoing connections from new connection requests
  – Protect a service from an attack on another service

[Figure: the victim V runs a web server and a transaction server behind separate triggers ID1 and ID2; the attacker A floods the web server’s trigger while legitimate client C uses the other]

2. Traffic Isolation (cont’d)
• When the web server’s trigger is flooded, that trigger alone is dropped

[Figure: with the flooded trigger removed, the traffic of the transaction server is protected from the attack on the web server]

3. Throttling New Connections
• Redirect new connection requests to a gatekeeper
  – The gatekeeper has more resources than the victim
  – Can be provided as a third-party service

[Figure: client C’s connection request to the public trigger ID_P is routed to gatekeeper A, which challenges C with a puzzle; once C returns the puzzle’s solution, it is given a trigger leading to the server S]

Service Composition Platform
• Goal: allow third parties and end-hosts to easily insert new functionality on the data path
  – E.g., firewalls, NATs, caching, transcoding, spam filtering, intrusion detection, etc.
• Why i3?
  – Makes middle-boxes part of the architecture
  – Allows end-hosts/third parties to explicitly route through middle-boxes

Example
• Use the Bro system to provide intrusion detection for end-hosts that desire it

[Figure: traffic between client A and server B is routed through the Bro middle-box M via chained identifiers: (id_M:id_BA, data) reaches M, then (id_BA, data) reaches B; the reverse direction uses (id_M:id_AB, data) and (id_AB, data)]

Design Principles
1) Give hosts control over routing
   – A trigger is like an entry in a routing table!
   – Flexibility, customization
   – End-hosts can:
     • Source route
     • Set up acyclic communication graphs
     • Route packets through desired service points
     • Stop flows in the infrastructure
     • …
2) Implement data forwarding in the infrastructure
   – Efficiency, scalability

Design Principles (cont’d)

[Figure: a spectrum of where the data and control planes live. In the Internet and infrastructure overlays, both planes are in the infrastructure; in p2p and end-host overlays, both are at the hosts; i3 splits them, keeping the data plane in the infrastructure and the control plane at the hosts]

Conclusions
• Indirection: a key technique to implement basic communication abstractions
  – Multicast, anycast, mobility, …
• This research
  – Advocates building an efficient indirection layer on top of IP
  – Explores the implications of changing the communication abstraction; already done in other fields
    • Directly addressable vs. associative memories
    • Point-to-point communication vs. tuple space (in distributed systems)