
A High Performance Packet Core for Next Generation Cellular Networks

Zafar Qazi, Melvin Walls, Aurojit Panda, Vyas Sekar, Sylvia Ratnasamy, Scott Shenker

Explosive Cellular Growth

• Many diverse devices: 3B IoT devices by 2019*

• Signaling traffic: growing 50% faster than data traffic+

• Demanding applications: 3/4 of data traffic will be video*

* Cisco Visual Networking Index   + Nokia study

Evolved Packet Core (EPC)

[Diagram: multiple EPC instances (EPC1, EPC2, EPC3) sit between the Radio Access Network and the Internet / IP Multimedia System (IMS), carrying data traffic, signaling traffic, and voice traffic.]

Existing Cellular Core Cannot Keep Up!

• Concerns from operators

• Academic studies

• Industrial efforts

EPCs are factored based on functions

[Diagram: each EPC component (Component-1 … Component-N) keeps its own copy of per-user state (user-1 … user-k), so user state is distributed across components.]

Performant EPC (PEPC)

PEPC: EPC functions factored around state

An abstraction of independent and customizable EPC slices

Rest of the Talk …

• Scalability challenges

• Design of PEPC

• Implementation and Evaluation


Traditional EPCs

• Data gateways: Serving Gateway (S-GW), Packet Data Network Gateway (P-GW)

• Signaling function: Mobility Management Entity (MME)

• Backend servers
  - Policy server: Policy and Charging Rules Function (PCRF)
  - Subscriber database: Home Subscriber Server (HSS)

• Implemented as hardware appliances, statically provisioned at a few central locations

User state in EPC

State Type                     | MME | S-GW | P-GW | Update Frequency
-------------------------------|-----|------|------|--------------------
Per-user QoS/policy state      | w+r | w+r  | w+r  | per signaling event
User ID                        | w+r | w+r  | w+r  | per signaling event
User location                  | w+r | w+r  | N/A  | per signaling event
Per-user control tunnel state  | w+r | w+r  | w+r  | per signaling event
Per-user data tunnel state     | w+r | w+r  | w+r  | per signaling event
Per-user bandwidth counters    | N/A | w+r  | w+r  | per packet

Distributed User State is Problematic!

• Performance overheads and high complexity
  - Frequent cross-component synchronization

• Migration is hard
  - User state is distributed

• Customization is hard
  - Both user state and computation are distributed

[Diagram: user signaling traffic enters at the MME and user data traffic at the S-GW and P-GW, with the components linked over GTP-C; each of the MME, S-GW, and P-GW keeps its own copy of state for user1 … usern.]

Rest of the Talk …

• Scalability challenges

• Design of PEPC

• Implementation and Evaluation


Existing EPC vs. PEPC

[Diagram: in existing EPCs, each of the MME, S-GW, and P-GW keeps its own copy of every user's state, and each user's signaling and data traffic crosses all three components. In PEPC, each slice holds all of one user's state in one place and handles that user's signaling and data traffic directly.]

PEPC Slice

Separation of control and data threads → avoids head-of-line (HOL) blocking

• Processing time for signaling messages is >10x higher than for data packets
• Control and data threads are assigned to separate cores

[Diagram: within a slice, user signaling traffic goes to the control thread and user data traffic to the data thread; the two threads communicate through shared state.]

PEPC Slice

Partition shared state at two levels → reduces contention
• By user
• Within per-user state, by whether the control or the data thread writes to it
• Fine-grained locks → up to 5x improvement over coarse-grained locks

[Diagram: for each user, the control thread has read/write access to that user's control state and read-only access to the counter state; the data thread has read/write access to the counter state and read-only access to the control state.]
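This locking discipline can be sketched in miniature. The following is a hypothetical Python model, not PEPC's actual (NetBricks-based, Rust) implementation; all class and field names are illustrative. The key ideas from the slide are there: each user's control state gets its own fine-grained lock, while the bandwidth counters are written only by the data thread, so the per-packet path takes no lock at all.

```python
import threading

class UserState:
    """Per-user state, split so the two threads rarely contend (sketch)."""
    def __init__(self):
        self.control_lock = threading.Lock()      # guards control state only
        self.control = {"teid": None, "qos": None}
        # Bandwidth counters: the data thread is the ONLY writer and the
        # control thread only reads, so no lock is taken on the hot path.
        self.bytes_up = 0
        self.bytes_down = 0

class Slice:
    def __init__(self):
        self.users = {}                           # user id -> UserState
        self.table_lock = threading.Lock()        # only for insert/delete

    def add_user(self, uid):
        with self.table_lock:
            self.users[uid] = UserState()

    def data_path(self, uid, pkt_len, uplink=True):
        # Hot path: lock-free; only the per-user counter record is written.
        st = self.users[uid]
        if uplink:
            st.bytes_up += pkt_len
        else:
            st.bytes_down += pkt_len

    def control_path(self, uid, teid, qos):
        # Cold path: take only this user's lock, never a giant table lock.
        st = self.users[uid]
        with st.control_lock:
            st.control["teid"] = teid
            st.control["qos"] = qos

slice_ = Slice()
slice_.add_user(1)
slice_.control_path(1, teid=0x1a2b, qos="default")
for _ in range(3):
    slice_.data_path(1, pkt_len=1500)
print(slice_.users[1].bytes_up)  # 4500
```

The contrast with the "Giant lock" baseline in the evaluation is that here a signaling burst for one user never blocks packet processing for another.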

PEPC Server

[Diagram: a demux steers incoming signaling and data traffic to slices (Slice1 … SliceN), each holding its own users' state; a scheduler manages slices and migration; a proxy interfaces with the backend servers.]

• Pause + snapshot user state → simplifies state migration

• Modify a slice's data/control flow → simplifies customization
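The pause-and-snapshot idea can be illustrated with a toy Python sketch; the slice representation (a dict of users guarded by one lock) and all names are invented for illustration and are not PEPC's implementation. Holding the source slice's lock "pauses" processing for that user while the state is copied out, after which the snapshot is installed in the destination slice.

```python
import copy
import threading

def migrate_user(uid, src_slice, dst_slice):
    """Pause-and-snapshot migration between two slices on one server (sketch)."""
    with src_slice["lock"]:
        # Pause: while the lock is held, no thread of the source slice
        # can touch this user's state; snapshot and remove it atomically.
        snapshot = copy.deepcopy(src_slice["users"].pop(uid))
    with dst_slice["lock"]:
        dst_slice["users"][uid] = snapshot

src = {"lock": threading.Lock(), "users": {"u1": {"teid": 9, "bytes": 100}}}
dst = {"lock": threading.Lock(), "users": {}}
migrate_user("u1", src, dst)
print(dst["users"]["u1"]["teid"])  # 9
```

Because a slice holds all of one user's state, the snapshot is complete by construction; there is no cross-component state to reconcile.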

Rest of the Talk …

• Scalability challenges

• Design of PEPC

• Implementation and Evaluation


Implementation

• Data-plane functions
  - GPRS Tunnelling Protocol (GTP)
  - Policy and charging enforcement function

• Signaling functions
  - Implements S1AP, the protocol used to interface with the base stations
  - Supports state updates for attach requests

• Support for efficient state migration across slices

• PEPC uses the NetBricks* programming framework

* Panda et al. NetBricks: Taking the V out of NFV. OSDI’16

PEPC Customization/Optimization Examples

• Two-level user state storage: a primary table for active devices and a secondary table for attached-but-inactive devices
  - Improves state-lookup performance for data packets

• Customization for a class of IoT devices (like smart meters)
  - Devices that run a single application
  - Reduce state and customize data processing
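The two-level storage above can be sketched as a pair of hash tables with promotion on access. This is a hypothetical Python model, not PEPC's data structure; the capacity bound and eviction policy are invented for illustration.

```python
class TwoLevelTable:
    """Primary table for active devices, secondary table for
    attached-but-inactive devices (sketch)."""
    def __init__(self, primary_capacity):
        self.primary = {}        # small and hot: checked first per packet
        self.secondary = {}      # large and cold
        self.capacity = primary_capacity

    def insert(self, uid, state, active=False):
        (self.primary if active else self.secondary)[uid] = state

    def lookup(self, uid):
        # Fast path: most data packets come from active devices, so the
        # lookup usually touches only the small primary table.
        st = self.primary.get(uid)
        if st is not None:
            return st
        # Slow path: the device woke up; promote it into the primary
        # table, evicting one entry (LIFO order) if the table is full.
        st = self.secondary.pop(uid)
        if len(self.primary) >= self.capacity:
            evicted_uid, evicted = self.primary.popitem()
            self.secondary[evicted_uid] = evicted
        self.primary[uid] = st
        return st

t = TwoLevelTable(primary_capacity=2)
t.insert("meter-1", {"teid": 1})                 # idle device -> secondary
t.insert("phone-1", {"teid": 2}, active=True)    # active device -> primary
t.lookup("meter-1")                              # promoted into primary
print("meter-1" in t.primary)  # True
```

Keeping the primary table small is what improves lookup performance for data packets, matching the slide's claim.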

Evaluation and Methodology

• How does PEPC compare with other vEPCs?

• How scalable is PEPC with increasing signaling traffic?

• How scalable is PEPC with increasing state migrations?

• What are the benefits of PEPC's customizations/optimizations?

• Methodology
  - DPDK-based traffic generator
  - Replays cellular data and signaling traffic traces
  - Traces from OpenAirInterface and ng4T

Baselines

• Industrial#1: an industrial software EPC implementation
  - DPDK-based
  - Ran as a process inside the host OS, not inside a VM/container
  - S-GW/P-GW on one server, MME on another server

• OpenAirInterface (OAI): an open-source software EPC
  - Ran as a process inside the host OS
  - All EPC functions ran on the same server

• OpenEPC on PhantomNet: a software EPC implementation
  - MME, S-GW, and P-GW on different servers


Data plane performance comparison

• Workload: 10K attach requests/s, 250K users

• PEPC sustains data-plane throughput (~6 Mpps) at a 1:10 signaling-to-data ratio

• In contrast, Industrial#1's throughput drops significantly (to 0.1 Mpps) beyond a 1:100 signaling-to-data ratio

[Bar chart: data-plane throughput (Mpps, 0-7) for OAI, OpenEPC, Industrial#1, and PEPC.]

PEPC Customization Benefits

• For smart-meter-like devices, customization achieves up to a 38% improvement in data-plane throughput

SIGCOMM ’17, August 21-25, 2017, Los Angeles, CA, USA Zafar Qazi et al.

Figure 12: Comparison of different shared state implementations.

Figure 13: The impact on data plane performance of batching updates.

Figure 14: Performance improvement with two-level state tables over a single state table.

(i) different shared state implementations, (ii) impact of batching, (iv) impact of two-level state tables, (v) impact of customization.

7.1 Shared state implementations

We consider three different implementations for the shared state in a PEPC slice. The "Giant lock" implementation uses a single giant lock to protect access to the entire state table, which holds the state of multiple users.

Figure 15: Benefits of customizing IoT devices in PEPC. [Chart: improvement in data-plane throughput (%, 0-40) vs. the percentage of stateless IoT devices (5, 25, 50, 75, 100).]

In the "Datapath writer" implementation, we have a fine-grained read/write lock associated with each user's state, but there is a single per-user shared state, and the data plane also has write access to this shared state. "PEPC" uses fine-grained per-user locks, but separates out the per-user charging record, for which the data plane is the only writer and the control plane only reads it. In Figure 12, we observe that with a "Giant lock", data-plane performance drops severely for large numbers of state updates: for 3M state updates the throughput drops to close to 1 Mpps, whereas with fine-grained locks both "Datapath writer" and "PEPC" maintain consistent data-plane throughput irrespective of the number of state updates. However, because both the data and control planes take write access, "Datapath writer" sees up to a 0.3 Mpps drop in data-plane throughput compared to "PEPC".

7.2 Impact of batching updates

PEPC batches updates to the data plane related to the insertion or deletion of a specific user's state. Figure 13 shows the benefits of batching updates at the data plane in the case of an attach event, which leads to a new user state being inserted. With batched updates, the data plane syncs updates from the control plane only every 32 packets. Figure 13 shows that with a signaling-to-data traffic ratio of 1:1, batched updates result in a performance improvement of more than 1 Mpps.
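The batching scheme can be mimicked in a few lines. Below is a toy Python model, not PEPC's NetBricks implementation: the 32-packet sync interval comes from the text, while the class and field names are invented. The control thread queues table changes, and the data thread drains the queue once per batch instead of synchronizing on every attach.

```python
from collections import deque

BATCH = 32  # the data thread syncs control-plane updates every 32 packets

class BatchedSlice:
    def __init__(self):
        self.users = {}            # data-plane view of per-user state
        self.pending = deque()     # insertions queued by the control thread
        self.pkts_since_sync = 0

    def control_attach(self, uid, state):
        # Control thread: queue the insertion rather than touching the
        # data-plane table directly, avoiding per-event synchronization.
        self.pending.append((uid, state))

    def data_packet(self, uid):
        self.pkts_since_sync += 1
        if self.pkts_since_sync >= BATCH:
            # Drain all queued control-plane updates once per batch.
            while self.pending:
                new_uid, state = self.pending.popleft()
                self.users[new_uid] = state
            self.pkts_since_sync = 0
        return self.users.get(uid)

s = BatchedSlice()
s.control_attach("u1", {"teid": 7})
for _ in range(31):
    s.data_packet("u0")        # u1 is not yet visible to the data plane
print(s.data_packet("u1"))     # {'teid': 7}, synced on the 32nd packet
```

The trade-off is a short window (here, up to 31 packets) during which a newly attached user is invisible to the data plane, in exchange for fewer cross-thread synchronizations on the hot path.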

7.3 Impact of two-level state tables

In Figure 14, we show the data-plane performance improvement with two-level state tables as compared to a single state table. Our goal is to (i) measure the data-plane performance improvement with two state tables as a function of the number of always-on devices and (ii) investigate how this is impacted by the churn between the primary and secondary state tables.

We consider a total of 1M devices and vary the fraction of always-on devices; the state for these devices is always maintained in the primary state table. The remaining devices are maintained in the secondary state table and, depending on the level of churn, we move the state of some of these devices into the primary state table and similarly evict the state of some devices from the primary table to the secondary table. "Low Churn" refers to 1% of all devices moving into the primary state table per second and 1% of all devices getting evicted from the primary state table. Similarly, "High Churn"

Scalability with State Migrations

• State migration across two slices within a server

• Less than 5% drop in data-plane throughput with 10K migrations per second

[Chart: drop in data-plane throughput (%, 0-100) vs. number of migrations per second (1-100K).]

Related work

• SDN-based cellular designs
  - SoftCell [CoNEXT'13], SoftMoW [CoNEXT'15]

• Virtual EPCs
  - KLEIN [SOSR'16], SCALE [CoNEXT'15]


Summary

• Existing EPC systems cannot keep up with cellular growth
  - Key reason: user state is distributed

• New system architecture: PEPC
  - Refactors EPC functions based on user state
  - Enables horizontal slicing of the EPC by users into independent and customizable slices

• PEPC performs 3-7x better and scales well with increasing user devices, signaling traffic, and state migrations