VIRTUALIZING YOUR LIVE-TV HEADEND: MULTICAST AND ZERO PACKET LOSS ON OPENSTACK

WHITEPAPER

WERNER GOLD, RED HAT, Munich, Germany, [email protected]

MARCO LÖTSCHER, HEWLETT PACKARD ENTERPRISE, Dübendorf, Switzerland, [email protected]


Swisscom TV migrated its first live-TV channel processing to an IP-based virtualized infrastructure on OpenStack. Linear TV processing is one of the most challenging workloads to run on a cloud due to carrier-grade network and service availability requirements combined with multicast traffic and complex brownfield environments.

In this paper we will explain why and how Swisscom, the market-leading TV provider in Switzerland, is utilizing OpenStack to increase business agility and drastically reduce cost by virtualizing and orchestrating the management and production of linear TV channels. We will go into detail about the technical challenges that were faced during the project and how they were solved:

• How are NFV principles and reference architectures applicable to media workloads?

• How can Swisscom save cost in the production of TV channels in a virtual headend?

• Why did multicast traffic not work on Open vSwitch when the project started, and how was this problem solved?

• How do containerized implementations differ, and what is needed to make multicast work there?

MOTIVATION TO VIRTUALIZE

The media industry is facing a general trend to move from its current discrete and proprietary appliances towards Ethernet and IP, and further towards standard hardware and virtual workloads.

A traditional headend infrastructure comprises all functions needed to transform an uncompressed raw video stream into a distribution format (e.g. DVB-T, DVB-S, IPTV). The main task is to generate the various codecs and qualities (encoding and transcoding), from UHD screens down to mobile consumption, and to multiplex the channels to fill the bandwidth of the transponder channels. Other tasks are encryption/DRM and probes for quality control.

All these functions reside in appliances today, connected on the ingress side via a proprietary but highly effective network (SDI).

Channel availability demands are met by doubling the devices in the local compute center and doubling the compute centers themselves. Because the devices are specialized, the introduction of new formats and codecs often requires costly hardware replacements. New channel deployments also involve new hardware purchasing processes.



Therefore, broadcasters and service providers seek to benefit from the trend towards virtualization and standardization in two ways. First and foremost, they want to achieve cost reduction by moving away from proprietary, hardware-based infrastructures. Second, they want to make their service deployment more agile and better optimized.

A virtualized headend brings the following benefits:

• Automated channel deployment

• Non-proprietary infrastructure

• Quick and automated extensions

• Simplicity: One interface for all

• Mix and match media functions (e.g. encoders, muxers, scramblers, probes) from various vendors

IMPLEMENTATION AT SWISSCOM

Swisscom started to provide IPTV services in 2006 and has since become the market leader in TV services in Switzerland. Today Swisscom is one of the first nationwide TV companies to run virtualized live linear IPTV services in production.

In Swisscom's legacy headend, almost all systems are based on proprietary hardware and multicast delivery. Several transcoder vendors are in use, each with its own control panel and complexity. There is very little or no automation, and provisioning is costly and slow.

In order to achieve low delays comparable to those of satellite deployments, the video streams are delivered via IP multicast within Swisscom's own network. Swisscom started the virtualization journey with the "heavy lifting" functions like live transcoding, which created the most benefit in accelerating business processes and reducing the demand for new hardware purchases. This way, economic benefits could be achieved very early in the project (Figure 1).

Figure 1: Functional view of the Swisscom headend. The diagram shows the acquisition stage (IP from satellite and studio, receiver, IPTV encoder, ingress probe for TS and packet errors), the processing stage (IPTV and OTT transcoders, multiplexer for full-screen/PiP separation and scrambling, DCM for no-signal picture ingest, egress probe, and the receiver, encoder, transcoder and MUX controllers), and the delivery stage (multicast delivery of live TS/CBR streams over the managed-IP network to set-top boxes with rewrapper, DRM and FCC/RET, plus unicast servers delivering ABS/ABR streams over the OTT network to OTT clients for OTT live TV, replay TV, nPVR and VOD), annotated with the 1st, 2nd and 3rd virtualization steps. Abbreviations: CBR = constant bit rate, ABR = adaptive bit rate, TS = transport stream, ABS = adaptive bit streaming, DCM = Digital Content Manager.


APPLYING NFV PRINCIPLES

In 2012, the European Telecommunications Standards Institute (ETSI) formed an Industry Specification Group (ISG) to develop standards for virtualizing classical telco functions and applications. Since then, the ETSI model has become a blueprint for managed virtual infrastructures as well as for the orchestration and management of virtual functions.

Due to the carrier-grade and real-time requirements, it made sense to apply the same model to the virtualization and management infrastructure of media functions within the virtual headend. This step will make it possible to break through the traditional barriers between headend and delivery networks at a later stage. It enables carriers to move headend functions closer to the network edge, which in turn reduces traffic and increases quality as well as customer satisfaction.

TECHNICAL REQUIREMENTS

The live experience of TV streaming is constrained by the delay until the stream reaches the consumer. Some lags are caused by digital processing in production; these are the same for all digital distribution channels and are not specific to IPTV. The headend adds further delays throughout processing (transcoding, multiplexing, packaging, DRM). These typically sum up to a few hundred milliseconds and need to be considered when moving from SDI and appliances towards IP and virtual signal processing.

Another source of delay is the jitter buffer on the client side. Internet packets can take different routes, and the order in which they are received may differ from the order in which they were sent. The jitter buffer gives the client the chance to collect and reorder packets before the data is decoded; otherwise picture quality suffers from missing data. This buffer is typically between about 100 ms and a few seconds.
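A minimal conceptual sketch of such a reordering buffer could look as follows (the buffer depth and sequence numbering are illustrative assumptions, not the behavior of any specific client):

```python
# Conceptual sketch of a client-side jitter buffer: hold incoming packets
# briefly and release them in sequence order so the decoder sees a
# contiguous stream despite reordering on the network.
import heapq


class JitterBuffer:
    def __init__(self, depth=32):
        self.depth = depth   # how many packets to hold back (illustrative)
        self._heap = []      # min-heap ordered by sequence number

    def push(self, seq, payload):
        heapq.heappush(self._heap, (seq, payload))

    def pop_ready(self):
        # Release the oldest packet only once enough packets are buffered,
        # giving late or out-of-order packets time to arrive first.
        if len(self._heap) >= self.depth:
            return heapq.heappop(self._heap)
        return None
```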

A pure OTT service needs to distribute data in point-to-point streams (unicast). Every viewer adds to the required total bandwidth, and distribution from origin servers does not scale this way. One solution to the problem is segment caching: the headend delivers the data in HLS and MPEG-DASH formats, packed into segments of 2-10 seconds. These segments can be cached and redistributed out of CDN caches. Each cache stage adds scalability to the playout infrastructure but also adds a delay of at least one segment. This typically results in total delays of 30 seconds and more for TCP-based unicast content delivery.
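A back-of-the-envelope sketch shows how these segment delays add up (segment duration, cache stages and client buffer depth are illustrative assumptions, not Swisscom measurements):

```python
# Rough model of end-to-end delay in segmented OTT delivery: every cache
# stage and the client buffer each hold back at least one full segment.
segment_seconds = 6          # typical HLS/DASH segment duration (2-10 s)
cache_stages = 3             # origin packager plus CDN tiers (assumed)
client_buffer_segments = 2   # players usually buffer a few segments

delay = segment_seconds * (cache_stages + client_buffer_segments)
print(f"approximate end-to-end delay: {delay} s")   # -> 30 s here
```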

Figure 2: Mapping of the ETSI/NFV model to the headend. On the ETSI side, the model comprises OSS/BSS, the NFV MANO block (NFV Orchestrator, VNF Manager(s) and Virtualized Infrastructure Manager(s) with the NS and VNF catalogs and the service, VNF and infrastructure descriptions), the VNFs with their element management systems (EMS1-3/VNF1-3), and the NFVI (virtual computing, storage and network on top of computing, storage and network hardware). In the headend implementation, these map to service orchestration, the vHE Manager, OpenStack as the VIM, transcoder, muxer and probe as VNFs, and KVM, OVS and Ceph on standard infrastructure.


This is where IP multicast comes into play. Multicast is a one-to-many protocol with which the receiver subscribes to a data stream. Using multicast over UDP for content distribution is a much better option for live TV: it scales very efficiently, because packets need to be distributed only once within a given network section, and no caches are needed for scaling. But it has two constraints. First, the broadcaster needs full control over the distribution network to benefit from multicast support. Second, there is no way to recover lost packets, and each lost packet has a negative impact on picture quality. Therefore, it is absolutely critical to deliver lossless streams with minimal jitter.
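To illustrate the subscription model, the following sketch joins a multicast group and reads the UDP datagrams carrying the transport stream (group address, port and packet counts are illustrative assumptions, not Swisscom's actual addressing):

```python
# Minimal multicast receiver: issue an IGMP join for a group and read the
# UDP datagrams carrying MPEG-TS packets. With UDP there is no
# retransmission, so every lost datagram directly degrades picture quality.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5000   # assumed multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq: multicast group address plus local interface (0.0.0.0 = any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

received = 0
for _ in range(10_000):                # read a bounded number of datagrams
    data, _addr = sock.recvfrom(2048)  # 7 x 188-byte TS packets fit one datagram
    received += len(data)
print(f"received {received} bytes")
```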

This architecture requires a tight integration of the headend and the distribution network, and it puts limits on public-cloud-hosted headends for large-scale live broadcast: public clouds generally do not support multicast at all, and there is no end-to-end multicast path from the headend to the set-top box with comprehensive quality control.

RED HAT OPENSTACK PLATFORM REQUIREMENTS

OpenStack provides networking as a service through its own standalone project. Its main process is the Neutron server, which exposes the networking API and uses a set of plugins for additional processing. The most important plugin for this use case is Open vSwitch (OVS), which provides a virtual network switch with VLAN capability.
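As an illustration of how such a VLAN-backed segment could be provisioned through the Neutron API, the following sketch uses the OpenStack SDK (cloud name, physical network label, VLAN ID and subnet are assumptions for illustration; creating provider networks requires admin rights):

```python
# Sketch: create a VLAN provider network and a DHCP-less subnet in Neutron,
# the kind of segment multicast contribution streams would travel on.
import openstack

conn = openstack.connect(cloud="vhe-cloud")   # assumed entry in clouds.yaml

net = conn.network.create_network(
    name="vhe-multicast",
    provider_network_type="vlan",            # provider:network_type
    provider_physical_network="datacentre",  # provider:physical_network
    provider_segmentation_id=101,            # provider:segmentation_id
)
conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="10.10.101.0/24",
    enable_dhcp=False,   # contribution streams typically use static addressing
)
```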

Neutron does not provide a special API for multicast networks, but newer releases of the OVS plugin are able to deal with multicast packets and IGMP messages over VLAN connections. When the first tests were conducted, the results showed a maximum throughput of 50 Mbit/s on standard OVS before packet loss occurred. This is far from the requirement to process around 10 high-definition channels per compute node, which totals approximately 600 Mbit/s of traffic. Native Open vSwitch forwards packets via the kernel-space data path, which includes the so-called switching "fastpath": a table of forwarding rules for packets.

Packets that don't match an existing entry in the kernel fastpath are sent to user space for processing. Afterwards, the fastpath flow table is updated so that subsequent packets can be processed without being sent to user space.
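A purely conceptual model of this cache-and-upcall split (not OVS source code; flow keys and actions are placeholders) could look like this:

```python
# Conceptual model of the OVS kernel fastpath: packets matching a cached
# flow are forwarded directly; misses are "upcalled" to user space, which
# installs a flow entry so later packets of the same flow stay on the
# fastpath.
flow_table = {}   # flow key -> action, analogous to the kernel flow cache


def userspace_lookup(flow_key):
    # Placeholder for the full pipeline evaluated by ovs-vswitchd in user space.
    return "output:vhost-user-port-2"


def forward(flow_key):
    action = flow_table.get(flow_key)
    if action is None:                    # fastpath miss -> upcall
        action = userspace_lookup(flow_key)
        flow_table[flow_key] = action     # install flow for subsequent packets
    return action


print(forward(("239.1.1.1", 5000)))  # first packet of a flow: slow path
print(forward(("239.1.1.1", 5000)))  # later packets: fastpath hit
```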

This approach, however, limits the forwarding bandwidth to what the Linux network stack can handle, which is not suitable for use cases requiring high packet processing rates or very low packet loss.

DPDK is a set of user-space libraries for building optimized, high-performance packet-processing applications. It offers a series of Poll Mode Drivers (PMDs) that enable the direct transfer of packets between user space and the physical interface.

Through the integration of OVS with DPDK (OVS-DPDK), the switching fastpath is moved to user space, generally eliminating the need for packet forwarding between kernel space and user space and hence boosting the performance of OVS.

However, OVS-DPDK has a set of prerequisites, such as separate NICs and a limited subset of supported NICs. Furthermore, it requires fine-tuning of several parameters for optimal performance.

DPDK packet processing needs dedicated cores assigned to it. The transcoding function in particular is CPU intensive, and not having enough CPU resources for packet handling will also result in unwanted packet loss. Mechanisms to minimize packet loss within OpenStack are (see the sketch after this list):

• Make use of CPU pinning to optimize core assignments for transcoding and networking, and keep tasks local within a NUMA node.

• Make use of huge pages in networking to reduce overhead

• Use OVS-DPDK (OVS 2.5+) with the Poll Mode Drivers (PMDs) assigned to dedicated cores.

• Carefully select NICs, especially because OVS-DPDK needs separate NICs and 4K SMPTE 2022-6 requires more than 10 Gbit/s per stream. At Swisscom, only HD transcoding with 3G 2022-6 has been virtualized at this time.
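The first two points translate into Nova flavor extra specs. The following sketch (flavor name, sizes, endpoint and credentials are assumptions for illustration) creates such a flavor requesting CPU pinning, huge pages and a single NUMA node; the PMD core assignment itself is configured on the compute host via OVS-DPDK rather than through the flavor:

```python
# Sketch: create a Nova flavor for a transcoder VM with pinned CPUs,
# huge-page-backed memory and single-NUMA-node placement.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="https://keystone.example.com:5000/v3",  # assumed endpoint
    username="admin",
    password="secret",
    project_name="vhe",
    user_domain_id="default",
    project_domain_id="default",
)
nova = client.Client("2.32", session=session.Session(auth=auth))

flavor = nova.flavors.create(name="vhe.transcoder", ram=16384, vcpus=8, disk=40)
flavor.set_keys({
    "hw:cpu_policy": "dedicated",   # pin each vCPU to a dedicated host core
    "hw:mem_page_size": "large",    # back guest memory with huge pages
    "hw:numa_nodes": "1",           # keep cores and memory on one NUMA node
})
```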


SIZING

In the virtualization of live media processing, especially transcoding workloads, the limiting factors are (1) the packet processing of the virtualized network and (2) the compute power required for transcoding.

As both of them are compute intensive, the CPU can be considered the limiting scaling factor. The exact density (channels per server) depends on the CPU model as well as on the transcoding application and codec of choice. With the newest Intel Skylake CPU architecture, it is possible to run 1-2 UHD channels or up to 16 SD channels on one dual-socket, single-height-unit (1U) server.
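A simple sizing sketch along these lines (all figures are illustrative assumptions, not vendor benchmarks) shows how the channel density per node follows from the cores left after reserving capacity for the host and the packet-processing PMDs:

```python
# Back-of-the-envelope channel density for one dual-socket compute node.
cores_per_socket = 18      # assumed CPU model
sockets = 2
host_reserved_cores = 4    # host OS plus OVS-DPDK PMD threads (assumed)
cores_per_hd_channel = 3   # transcoder vCPUs per HD channel (codec dependent)

usable_cores = cores_per_socket * sockets - host_reserved_cores
channels = usable_cores // cores_per_hd_channel
print(f"{channels} HD channels per node")   # -> 10 with these assumptions
```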

VIRTUAL HEADEND ARCHITECTURE

An IPTV headend is the part of a TV solution that is responsible for the acquisition, transcoding, metadata enrichment, scrambling and multiplexing of live IPTV and OTT streams or VOD assets. Traditionally, a physical headend is composed of multiple functional building blocks from various vendors. Those building blocks are usually deployed as farms of physical appliances that are managed by a specific controlling instance. Therefore, over time many operators have been challenged with the following issues:

• Many different controllers and managers

• Clusters per appliance type

• Costly Equipment Upgrades/Exchanges

• Operational complexity

• High effort for adding new channels

The solution for virtualized media functions that are part of a headend on an open cloud platform (OpenStack on standard x86 hardware) was a joint development of Swisscom, HPE and Red Hat. In order to overcome the complexity of managing the different vendor applications, HPE developed a vendor-agnostic virtual headend manager (vHM).

Figure 3: Initial situation, each transcoding ISV with its own (VNF) manager. In the active headend, channels 1 to N are each built from transcoders of vendors A, B and C and multiplexers X, Y and Z between the video signal ingress and egress, and every appliance type is driven by its own dedicated management system (transcoder A/B/C managers, multiplexer X/Y/Z managers).


ECONOMIC ASPECTS

The complexity of a video headend and the growth in video distribution imply high equipment costs. With a virtualized video-processing environment, capital expenses can be reduced incrementally by moving traditional equipment toward software-based solutions, which after an initial purchase can be further amortized using an on-demand model. The orchestration of virtualized functions optimizes the video-processing infrastructure, releases resources and lowers capital costs.

BUSINESS IMPACT RATIONALES

I. LOWER CAPEX

Virtualization decouples the application from the infrastructure and enables the available infrastructure to be used in the most efficient way. Functional upgrades or changes in standards and codecs can be achieved without the need for new hardware.

On top of that, virtualization and orchestration enable different concepts for handling disaster cases and backup channels. With a legacy appliance, the cost for a backup channel is more or less identical to that of an active channel, whereas with a virtualized and orchestrated approach, new models for disaster cases become available (e.g. pay-per-use licenses).

Depending on the specific configuration, the total capital expenses for a virtualized transcoder will be significantly lower than those of appliances.

II. MORE EFFICIENT OPEX

Standardized APIs and the consolidation of applications under a vendor-agnostic management system enable the flexibility to compose a live-TV channel out of a chain of functions from different best-of-breed applications. The unified management removes the complexity of separate, dedicated control systems and drastically increases the efficiency of headend operations.

The ability to easily upgrade software and services while removing vendor lock-in reduces liabilities over time and further improves operational efficiency.

Figure 4: Unified vHE management ("one interface to manage it all"). In the virtualized headend, a single HPE Virtual Headend Manager controls transcoders A to D and multiplexers W to Z between video signal ingress and egress, composing them into channels 1 to N plus backup channels.

The accompanying summary highlights the resulting benefits:

• Channels on the fly

• Fast time to launch new services

• One-click channel deployment

• Non-proprietary infrastructure

• Quick and automated lifecycles

• Simplicity: one interface for all

• CAPEX and OPEX reduction


VIRTUAL HEADEND MANAGEMENT

Service providers are using Network Function Virtualization (NFV) and Software-Defined Networking (SDN) in their networks. These technologies can be extended to video processing to bring it the same benefits already enjoyed by the IT and communications industries.

In a virtualized headend, the addition of new services or channels can be accomplished more quickly, as it becomes a matter of automated software configuration. It also enables the shift from a traditional model, in which appliances had to be purchased at project outset and amortized over multiple years, to a model where resources are consumed on demand. As part of the media cloud initiative at Swisscom, HPE contributed the development of a vendor-agnostic management system for media functions: the HPE Virtual Headend Manager (VHM).

HPE VHM runs on standard off-the-shelf x86 IT hardware and has been designed to run on an OpenStack cloud. It is the single tool to manage the lifecycle of a live TV channel.

Management of the integrated components includes the configuration, management and monitoring of the different transcoder and probing virtual machines that form the processing chain for an IPTV or OTT live channel, as well as monitoring the quality of the generated streams. HPE VHM provides both GUI and API modes of operation.

CONTAINERIZED HEADEND?

Container technology in general allows media functions to be implemented in a leaner environment. The promises are a higher density of functions and less overhead, because the technology does not make use of hypervisors. Furthermore, container technology opens paths to higher agility and greater flexibility, because it supports the introduction of microservices and the decomposition of complex functions into smaller and lighter parts.

Container technology will find much broader adoption in future headend, post-production and production functions, removing barriers between these steps of content creation and delivery. Many ISVs for transcoding and other headend functions already ship their software in a containerized format. But today these containers, as at Swisscom, are being run inside virtual machines.

Direct deployment of containers on non-virtualized systems looks very promising because of its much smaller overhead. But today, features like OVS-DPDK, which are officially supported in OpenStack, are still under development in Docker, Kubernetes and OpenShift.

Early PoCs for containerized transcoding are underway as we write this paper. We can expect first results at NAB 2018.

Figure 5: vHE Manager, functional view. The vHM portal and its configuration management API drive a chain of transcoders (Transcoder 1-3), each paired with a probe and running on OpenStack, with multicast input on one side and multicast output on the other.


Copyright © 2018 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, and JBoss are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.

ABOUT RED HAT

Red Hat is the world’s leading provider of open source software solutions, using a community-powered approach to provide reliable and high-performing cloud, Linux, middleware, storage, and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT.

NORTH AMERICA: 1 888 REDHAT1

EUROPE, MIDDLE EAST, AND AFRICA: 00800 7334 2835, [email protected]

ASIA PACIFIC: +65 6490 4200, [email protected]

LATIN AMERICA: +54 11 4329 7300, [email protected]