

Contextualized Monitoring and Root Cause Discovery in IPTV Systems Using Data Visualization

Urban Sedlar, Mojca Volk, Janez Sterle, and Andrej Kos, University of Ljubljana
Radovan Sernec, Telekom Slovenije, d.d.

Abstract

This article describes the architecture and design of an IPTV network monitoring system and some of the use cases it enables. The system is based on distributed agents within IPTV terminal equipment (set-top boxes), which collect and send data to a server where it is analyzed and visualized. In the article we explore how large amounts of collected data can be utilized for monitoring the quality of service and user experience in real time, as well as for discovering trends and anomalies over longer periods of time. Furthermore, the data can be enriched using external data sources, providing a deeper understanding of the system by discovering correlations with events outside of the monitored domain. Four supported use cases are described, among them using weather information for explaining away IPTV quality degradation. The system has been successfully deployed and is in operation at the Slovenian IPTV provider Telekom Slovenije.

Internet Protocol television (IPTV) has become a vital part of modern triple-play offerings and is being deployed worldwide [1]. However, the complexity of such systems is much greater than that of classical broadcast systems, where there was nothing but the air medium and an occasional relay node between the broadcaster and the subscriber. Modern IPTV solutions are a complex chain of systems where the path of the video stream from source to destination must cross multiple devices and multiple levels of network hierarchy. Thus, the potential for introduction of network- and application-level errors is much greater. In addition, due to the maturity of traditional broadcast systems, users have cultivated high expectations of video quality as well as of the overall service consumption experience; hence, it is imperative for any modern IPTV provider to focus on the assurance of quality of service (QoS) and quality of (user) experience (QoE) [2], as well as on the level of network and service monitoring required to achieve such a goal.

The state of the art in IPTV network and service monitoring has significantly advanced in the last decade [3] and encompasses solutions ranging from advanced network probes to client-side probes. The former capture and analyze network traffic at key points in the network and provide detailed reporting, but lack the granularity associated with hundreds of thousands of subscribers [4]. On the other hand, client-side solutions collect data at user premises and provide a high-resolution snapshot of an entire network, but are typically very rigid and limited to a set of predefined use cases [5]. The high cost of setting up and maintaining such systems plays an important role as well.

Attempting to break out of the constraints of commercially available systems and to address the issues of flexibility, ease of deployment, price, and the possibilities the collected data provides, we have designed and implemented a scalable system for monitoring the state of an IPTV network and deployed it in the network of a medium-sized provider. This article presents the lessons we have learned along the way as well as the use cases we have identified by working with the provider and its support personnel.

The proposed solution is by nature a highly distributed system of probes deployed at the end user's equipment (the set-top box, STB). As such it offers the possibility of 100 percent network coverage, but production coverage has been limited to approximately 5 percent of the terminal network nodes to limit the amount of collected data. By its nature, a video quality probe located as close to the user as possible can provide a wide variety of network- and application-level metrics that would be difficult to obtain by any other means. The software agent, implemented as part of the STB operating system, is attached to key subsystems of the STB, including the video decoding and network stacks, and can therefore provide application-level monitoring in addition to network-level monitoring. This makes it possible to also take into account any hypothetical decoder errors, which could, for example, arise due to a faulty decoder implementation. In that respect, the monitoring is end-to-end in the true sense of the term, and we strongly believe such an approach is applicable to many other systems as well, ranging from smart home automation [6] to Internet of Things (IoT) applications [7], as well as for increasing situational awareness by providing a network common operating picture (COP).

System Architecture

The presented end-to-end solution is designed as a loosely coupled system, comprising a server side capable of receiving, storing, and processing messages in real time, and a large number of dedicated software agents running on distributed IPTV STBs across the IPTV provider's network. Data collection was initially implemented using a simple batch processing model, but the message-oriented approach presented here was later chosen, driven mainly by the IPTV provider's need to monitor the health of the system in real time.

Figure 1 outlines the high-level system architecture. Each IPTV STB acts as an IPTV quality sensor node in a distributed network, capturing quality- and telemetry-related data and sending it to the server side of the system. The data source (STB) generates both periodic messages and messages triggered by user activity (i.e., every time the user changes the channel). At the server side, all generated messages are collected by an SNMP trap server and injected into a message queue, which broadcasts them to two subscribers.
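As an illustration of this fan-out pattern, the following Python sketch publishes one raw trap message to a fanout exchange so that both subscribers receive their own copy. It uses the pika client for RabbitMQ (the broker named later in the implementation section); the exchange and queue names here are our own illustrative choices, not the production configuration.

```python
import pika

# Connect to the local RabbitMQ broker (connection details are illustrative).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A fanout exchange copies every message to all bound queues, which is how one
# inbound raw-data stream can feed both the archival and parsing modules.
channel.exchange_declare(exchange="raw_traps", exchange_type="fanout")
for queue in ("raw_archive", "parse_filter"):
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(exchange="raw_traps", queue=queue)

# Publish one raw SNMP trap payload; each subscriber consumes it independently.
channel.basic_publish(exchange="raw_traps", routing_key="", body=b"<raw trap bytes>")
connection.close()
```

Because the exchange copies rather than load-balances messages, the archival and parsing modules can fail and recover independently of each other.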

The first subscribed process handles long-term archival storage of the data in raw form, which serves several purposes. First, it provides a reference point for observing the actual system input before the data has been modified in any way. From this archival database, messages can be replayed at any time, which simplifies the development and testing of different event-processing approaches. Additionally, the horizontally scalable key-value store used for data archival is well suited to map-reduce analytics, which represents a promising approach for analyzing large quantities of data.

The second subscribed process performs message preprocessing; it operates as a parsing engine that interprets the binary values of the originating messages and converts them to structured objects with numeric and textual data values. In the process it also filters the data and discards messages with invalid or erroneous combinations of values, which indicate that the message has been corrupted. Finally, the parsing module injects the message with structured data into the parsed message queue.

The parsed message queue also has two subscribers to which the messages are broadcast simultaneously. The first is a real-time event processing subsystem, which can perform complex event processing and displays key performance indicators (KPIs) and metrics in the form of a dashboard; the second subscriber stores the structured data in a relational database, where it can be queried by different analytics tools.

System Implementation

The described system has been developed and implemented in cooperation with the Slovenian IPTV provider Telekom Slovenije and an STB manufacturer. An agent was deployed on all STBs in the IPTV provider's network during an automated system-wide firmware upgrade. This allows us to achieve unprecedented user coverage for gathering telemetry information and different metrics of the video decoding process, which correlate tightly with QoE. To accommodate such a system and allow real-time collection, analytics, and visualization of data, an event-driven data analysis platform was assembled using readily available open source components, as described below.

Data Source

The source of the data is a distributed network of software agents implemented within the IPTV STBs. The agent is hooked into the video decoding and networking processes, which allows it to collect both network-level events (buffer underruns, Ethernet-level errors) and application-level events (MPEG transport stream discontinuities, channel change times, etc.). The agent can be controlled remotely by means of Simple Network Management Protocol (SNMP) messaging (enabling and disabling the reporting functionality, querying a limited set of parameters, and setting the reporting period). Once activated, the agent gathers information in real time and periodically reports a summary of video decoding and network-level events to the server side using standard SNMP trap messages. Reporting is performed either periodically or at each zapping event (i.e., each time the user changes the channel).

The following information is collected with each message, representing a set of IPTV telemetry metrics (a structured sketch of one such report follows the list):

• Originating IP
• Date and time of message reception (used as the timestamp of an event)
• Duration of the interval being reported (either equal to the predefined reporting period or smaller, indicating a channel change)
• Number of transport stream discontinuities in the reported time period
• Number of seconds with at least one transport stream discontinuity
• Number of buffer underruns
• Zapping time required to tune into the reported channel
• Multicast IP of the current channel
• Additional proprietary fields
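For concreteness, one such report can be pictured as the following record; this is a hypothetical Python representation with field names of our choosing, not the agent's proprietary wire format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TelemetryReport:
    """One periodic or zapping-triggered report from an STB agent."""
    source_ip: str           # originating IP of the STB
    received_at: datetime    # trap reception time, used as the event timestamp
    interval_s: int          # reported interval; shorter than the period implies a zap
    ts_discontinuities: int  # MPEG transport stream discontinuities in the interval
    errored_seconds: int     # seconds with at least one discontinuity
    buffer_underruns: int    # decoder buffer underruns
    zap_time_ms: int         # time needed to tune into the reported channel
    channel_ip: str          # multicast IP of the current channel
```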

Data Volume

All of the above information is transmitted either periodically or when a channel change occurs. The relevant information, together with UDP, IP, and Ethernet frame overhead, yields 180 bytes on the wire per SNMP message. Assuming a reporting period of 30 s and messages evenly distributed throughout the day, a medium-sized provider with 100,000 subscribers would require a bandwidth of 4.8 Mb/s. An increase in peak hours has to be taken into account, raising the peak number of messages and the required bandwidth by 30 percent due to zapping and an increased audience. However, this is still a modest data rate, which presents no problems for the event-driven modules of the system and leaves some room for growth. Nonetheless, the number of messages per day under such conditions would reach 288 million and would require 48 Gbytes of net storage. Additionally, to get any reasonable historical trending and analysis, more than one day of data would have to be considered.
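These figures follow directly from the per-message size and can be checked with a few lines of arithmetic (the 48-Gbyte figure matches when read as binary gigabytes):

```python
MSG_BYTES = 180          # SNMP message incl. UDP/IP/Ethernet overhead
SUBSCRIBERS = 100_000
PERIOD_S = 30

msgs_per_s = SUBSCRIBERS / PERIOD_S                  # ~3333 messages/s
bandwidth_mbps = msgs_per_s * MSG_BYTES * 8 / 1e6    # 4.8 Mb/s
msgs_per_day = SUBSCRIBERS * 86_400 // PERIOD_S      # 288,000,000 messages
storage_gib = msgs_per_day * MSG_BYTES / 2**30       # ~48 GiB/day

print(f"{bandwidth_mbps:.1f} Mb/s, {msgs_per_day:,} msgs/day, {storage_gib:.0f} GiB/day")
```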

Figure 1. System architecture. (Components shown: STB monitoring agents in the IPTV provider's network; SNMP trap servers with trap handlers; an inbound raw data queue feeding raw data archival to a NoSQL database and message parsing and filtering; a parsed message queue feeding complex event processing with a dashboard and parsed data storage in a relational database for offline data analysis.)
To keep down the volume of data, we increased the reporting period to 300 s, which reduces the numbers by a factor of 10; additionally, only 5 percent network coverage was chosen, further reducing the storage requirements to 370 Mbytes/day. It is important to note that data summarization cannot be used to reduce the storage requirements, because some of the use cases below require a temporal resolution of less than an hour.

Server Side

The SNMP trap messages generated by the STB agents are collected by a standard SNMP trap server (Linux-based snmptrapd), extended by a lightweight trap handler script that pushes the messages in raw format into an inbound AMQP message queue (RabbitMQ). The trap handler also prepends some necessary metadata to the message: the originating IP address and the SNMP trap reception timestamp.
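snmptrapd can hand each received trap to an external program through its traphandle directive, which writes the sender's hostname, the transport address, and the varbinds to the program's standard input. A minimal handler in that spirit might look like the sketch below; the metadata framing and queue naming are our assumptions, not the production script.

```python
#!/usr/bin/env python3
# Minimal traphandle script: snmptrapd pipes the sender host/address and the
# trap varbinds on stdin; we prepend reception metadata and enqueue the raw text.
import sys
from datetime import datetime, timezone
import pika

hostname = sys.stdin.readline().strip()   # line 1: sender hostname
transport = sys.stdin.readline().strip()  # line 2: transport address of the sender
varbinds = sys.stdin.read()               # remaining lines: "OID value" pairs

# Prepend the originating address and a reception timestamp, as the trap
# handler described above does (the exact framing here is illustrative).
message = f"{transport}\n{datetime.now(timezone.utc).isoformat()}\n{varbinds}"

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="raw_traps", exchange_type="fanout")
channel.basic_publish(exchange="raw_traps", routing_key="", body=message.encode())
connection.close()
```

Such a script would be registered in snmptrapd.conf with a line like `traphandle default /usr/local/bin/trap_to_amqp.py`.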

The inbound message queue delivers each message to the raw data archival module and the message parsing and filtering module. The raw data archival module stores the messages in a key-value (NoSQL) database (Apache Cassandra) for archival purposes. Such an implementation allows high horizontal scalability and provides a future upgrade path to big-data analytics systems.

The message parsing and filtering module first deserializes the binary payload of the message and interprets the values of individual data fields. Next, each message is inspected to ensure its values do not violate a predefined set of constraints (i.e., that it does not contain erroneous combinations of values or enumerated fields with unsupported values). If the originating IP is whitelisted in the customer support database, the preprocessing module also performs a lookup in a DHCP log database to resolve the originating IP address into the MAC address of the STB, which serves to identify the user later in the process.
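The actual payload layout is proprietary, so the following parsing-and-filtering sketch assumes a hypothetical fixed binary layout; what it is meant to show is the deserialize-then-validate pattern, where messages with impossible value combinations are dropped.

```python
import struct

# Hypothetical fixed binary layout for the agent payload (our assumption):
# interval (uint16), discontinuities (uint32), errored seconds (uint16),
# buffer underruns (uint16), zap time in ms (uint16), channel multicast IP (4 bytes).
_LAYOUT = struct.Struct("!HIHHH4s")

def parse_and_filter(payload: bytes) -> dict | None:
    """Deserialize one report; return None for corrupted messages."""
    if len(payload) != _LAYOUT.size:
        return None
    interval, disc, err_s, underruns, zap_ms, chan = _LAYOUT.unpack(payload)
    # Constraint checks: errored seconds cannot exceed the interval, and a
    # zero-length interval is an erroneous combination of values.
    if interval == 0 or err_s > interval:
        return None
    return {
        "interval_s": interval,
        "ts_discontinuities": disc,
        "errored_seconds": err_s,
        "buffer_underruns": underruns,
        "zap_time_ms": zap_ms,
        "channel_ip": ".".join(str(b) for b in chan),
    }
```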

Preprocessed events are pushed to the parsed message AMQP queue (RabbitMQ), which again has two subscribers: the event processing module and the parsed data storage module. The parsed data storage module stores the preprocessed data in a relational database (MySQL), where it is available for offline analysis by a set of external tools, while the event processing module performs a set of basic data summarizations in real time.

    Data Analysis

As mentioned, two types of analytics are employed, accommodating both real-time needs and long-term historical trending.

The traditional approach to long-term data analysis is based on performing queries against the data. The dataset is large and persistent, which allows drilling down and refining the queries until the desired hypothesis is confirmed or rejected. However, large amounts of data also require long processing times, and the user is required to define the exact steps to be performed during the analysis [8]. This limits the potential user base to data scientists, statisticians, or anyone sufficiently familiar with the domain-specific knowledge of the system being analyzed.

We have performed most of our offline analyses with the aim of extracting as much valuable information as possible from the dataset; some of our use cases are described in the next section. As tools, we have used the Matlab package and Tableau for high-level analyses. Additionally, we have created a set of predefined visualization templates available through a web-based user interface (Fig. 2). Such charts, as well as the Matlab/Tableau analyses, are currently static and generated on demand.

Figure 2. Web-based dashboard for interactive visualization of different error types.

The second approach to data analysis, on the other hand, is based on real-time event processing and is less common in traditional analytics tools; here, instead of running queries against the data, the data is sent through a set of standing queries. Once the data passes through, it is discarded; only the results remain, and the process cannot be repeated over the same data by simply readjusting the query. This allows us to execute complex event detection, analysis, and visualization in real time by using a predefined set of rules within the event-driven subsystem. We have created a simple real-time dashboard displaying system-wide metrics and use Esper for complex event processing.
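The production system expresses such rules in Esper; to make the flow-through model concrete, here is an equivalent standing query sketched in Python: a sliding one-minute discontinuity count per BNG, where each event is examined once and then discarded, leaving only the rolling result.

```python
import time
from collections import defaultdict, deque

class SlidingErrorCount:
    """Standing query: transport stream discontinuities per BNG over the
    last window_s seconds. Events flow through once; only the rolling
    totals are retained, mirroring the flow-through model described above."""

    def __init__(self, window_s: int = 60):
        self.window_s = window_s
        self.events = defaultdict(deque)   # bng -> deque of (timestamp, count)
        self.totals = defaultdict(int)     # bng -> current windowed sum

    def on_event(self, bng: str, discontinuities: int, now: float | None = None) -> int:
        """Feed one parsed report; returns the updated windowed total."""
        now = time.time() if now is None else now
        q = self.events[bng]
        q.append((now, discontinuities))
        self.totals[bng] += discontinuities
        # Expire events that slid out of the window; raw events are discarded.
        while q and q[0][0] <= now - self.window_s:
            _, old = q.popleft()
            self.totals[bng] -= old
        return self.totals[bng]

# Usage: call on_event() for every parsed message; a dashboard reads the totals.
counter = SlidingErrorCount(window_s=60)
print(counter.on_event("BNG-17", 3))   # 3
```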

Use Cases

In this section the scenario-specific aspects of IPTV metrics, data collection, analysis, and visualization are presented.

The described system has been deployed in cooperation with the Slovenian IPTV provider Telekom Slovenije. A consenting initial user base of 100 participants was added to the system in the testing phase; later, 6500 anonymous probes were activated in a manner that ensured all of the 65 broadband network gateways (BNGs) had a sample of N = 100 probes evenly distributed over all multiservice access nodes (MSANs). In addition, users with stream quality issues who give consent are added to the system when needed by the helpdesk personnel. In the time span of 10 months, we have thus collected 200 million events and are currently receiving about 1 million events/day.

Application-Level IPTV Quality Monitoring

First and foremost, our goal was to estimate the quality of user experience and the degradation thereof, which can be achieved by simply monitoring the described application-level metrics over time. By establishing a baseline level of application-level metrics (e.g., transport stream discontinuities) and network-related metrics (e.g., Ethernet errors, buffer underruns) per user, per channel, per MSAN, or per BNG, any significant increase in errors can be detected, suggesting a sub-par experience for the users. All metrics are displayed as time series on live charts (similar to Fig. 2), together with predefined thresholds, which serve both as a guide for the operator and for triggering alarms; a minimal sketch of such baseline-based alarming is shown below.
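In the sketch, the exponentially weighted baseline and the deviation factor k are our illustrative choices, not the system's actual thresholding rules.

```python
class BaselineAlarm:
    """Flag a metric when it exceeds k times its smoothed per-entity baseline.

    The entity can be a user, channel, MSAN, or BNG. The exponentially
    weighted baseline (alpha) and deviation factor (k) are illustrative."""

    def __init__(self, alpha: float = 0.05, k: float = 4.0):
        self.alpha, self.k = alpha, k
        self.baseline: dict[str, float] = {}

    def update(self, entity: str, value: float) -> bool:
        base = self.baseline.get(entity, value)
        alarm = base > 0 and value > self.k * base
        # Update the baseline only after comparing, so that a sudden spike
        # does not immediately absorb itself into the reference level.
        self.baseline[entity] = (1 - self.alpha) * base + self.alpha * value
        return alarm
```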

The data collected and visualized in this way has also confirmed large spikes of errors during different network maintenance procedures, and can in the future help detect previously unknown causes of degradation of the end user's experience.

Integration with Customer Support

There is a variety of other use cases for such data. By integrating a comprehensive reporting system with the operations support system (OSS)/business support system (BSS), the provider's helpdesk personnel can streamline user complaint management: instead of opening a ticket and forwarding the problem to the field team, data collection can be enabled on the fly if the user opts in over the phone. The data collected in this way allows more than just confirmation of a sub-par experience and can guide further decisions that need to be undertaken to remedy the problem.

Working with the helpdesk personnel, we have created a special group of users that is not anonymized but instead undergoes an additional address resolution process; a web-based interface is then used to add new users or manage existing ones. Once reporting is enabled on the STB, the data starts flowing in and is received by the incoming message queue. It is stored in the raw message archival database in a similar (anonymized) way as described before; however, if the source IP of the message is matched in a helpdesk whitelist, it is sent to a medium access control (MAC) resolution process, which performs a lookup in the Dynamic Host Configuration Protocol (DHCP) logs to determine the MAC address of the user. This step is necessary since the terminals are assigned new IP addresses when their DHCP leases expire.
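The resolution step can be pictured as a lookup of the most recent lease for the message's source IP. The sketch below assumes a hypothetical `leases(ip, mac, granted_at)` table, since the actual DHCP log schema is not described in the article.

```python
import sqlite3

def resolve_mac(dhcp_db: sqlite3.Connection, ip: str, whitelist: set[str]) -> str | None:
    """Map a whitelisted source IP to the STB's MAC via the newest DHCP lease.

    Leases are looked up by recency because the IP is reassigned whenever
    a lease expires; anonymous probes are never resolved."""
    if ip not in whitelist:
        return None  # anonymous probe: skip resolution entirely
    row = dhcp_db.execute(
        "SELECT mac FROM leases WHERE ip = ? ORDER BY granted_at DESC LIMIT 1",
        (ip,),
    ).fetchone()
    return row[0] if row else None
```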

The data is then displayed in near real time in a web interface, where the status of network- and application-level errors can be monitored for a specific user; this shortens the delay between a decision and its results from days down to hours and allows errors to be caught as they happen.

Figure 3. IPTV network topology map obtained by working back from the end nodes (tree leaves) and matching them to common ancestors. The first graph shows the entire network from the source (SRC) to MSANs, with the BNGs numbered from 1 to 65; the second picture shows a zoomed-in section of a larger graph with an additional hierarchical level: the end users. Node size in both graphs is a function of logical distance from the source. Node color indicates the percentage of transport stream errors (green is low, red is high; the scale runs from 0.00% to 3.44%). BNGs are marked with numbers, while MSANs in the second graph are marked with a circle. End-node labels in both graphs were omitted for clarity. Created using the open source Gephi software.


    Network Topology Mapping

Since a precise network topology map was unavailable to us due to the significant administrative investment required, we recreated the network graph by exploiting our knowledge of the IP addressing hierarchy.

We obtained the network masks and names of the BNGs, which allowed us to map clusters of users to geography. We integrated the map-based view into our dashboards, as seen in Fig. 2.

Additionally, since the network is hierarchically subdivided, knowing the MSAN and BNG netmasks allowed us to reverse engineer and visually represent the entire network topology in the form of a graph (Fig. 3), where the source is in the center and each leaf node is one of the 6729 active users. Such a representation allows visual exploration of the network hierarchy and quick identification of problematic nodes by their color.
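The reconstruction reduces to prefix matching of each STB address against the known MSAN and BNG netmasks; a small sketch with invented prefixes (the real netmasks are operator data) is shown below.

```python
import ipaddress

# Hypothetical prefix tables; the real MSAN/BNG netmasks are operator data.
BNG_PREFIXES = {"BNG-1": ipaddress.ip_network("10.1.0.0/16"),
                "BNG-2": ipaddress.ip_network("10.2.0.0/16")}
MSAN_PREFIXES = {"MSAN-1a": ipaddress.ip_network("10.1.4.0/22"),
                 "MSAN-2a": ipaddress.ip_network("10.2.8.0/22")}

def place_in_topology(stb_ip: str) -> tuple[str | None, str | None]:
    """Work back from a leaf (STB) to its MSAN and BNG by prefix match."""
    addr = ipaddress.ip_address(stb_ip)
    msan = next((n for n, p in MSAN_PREFIXES.items() if addr in p), None)
    bng = next((n for n, p in BNG_PREFIXES.items() if addr in p), None)
    return msan, bng

# Each STB -> MSAN -> BNG -> SRC chain is one root-to-leaf path of the graph.
print(place_in_topology("10.1.4.17"))   # ('MSAN-1a', 'BNG-1')
```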

Error Localization

With a network topology map available, it is further possible to localize errors in the network hierarchy. In Fig. 4 we have visualized the percentage of errors (normalized to the entire monitoring duration) per BNG over time. We chose a time resolution of one day and a topology resolution at the BNG level; by stacking all 65 BNG time series, we created a long-term visualization of transport stream discontinuities (application-level errors on the STB) over a time span of 115 days, as shown in Fig. 4. This representation is well suited to visual analytics [8, 9] and allows patterns to be discovered quickly.

Figure 4. A heatmap of error severity (ratio of errored seconds over the monitored duration) for all channels; each pixel represents a single day on a single BNG. The minimum relative error in the picture is 0.005% (dark blue); the maximum relative error is 8.20% (dark red). The vertical axis represents broadband network gateways numbered from 1 to 65; the horizontal axis represents days from Dec 1, 2011 to Mar 23, 2012. Bright horizontal streaks can be observed that indicate the BNGs with the worst performance (see example: BNG-17). Additionally, some vertical streaks are clearly observable (see example: day 103), which imply a channel-wide degradation felt over a large part of the network.

Figure 5. Error localization within the multicast tree: a multicast stream flows from the content provider through a core router, BRAS/BNG, and MSAN to the CPE and STBs; an overload in an output queue propagates errors downstream, where they are detected at multiple STBs in the monitored domain.


The first thing that becomes obvious is the horizontal streaks, which indicate long-running underperformance of an individual BNG.

Additionally, some clustering is observable in the form of vertical streaks; this implies either that there was a connection between geographically independent BNGs, which means the errors must have originated upstream, or that there was a similar usage pattern and the errors are a manifestation of similarly underprovisioned resources. The resolution of such visualizations can be increased both on the temporal axis (from days to hours or minutes) and on the network topology axis, by expanding each BNG to MSANs or even individual users.

Additionally, since errors in the multicast video distribution architecture propagate from the root of the hierarchical multicast tree toward the STBs, it is possible to pinpoint the source of an error by correlating the reports from the terminal nodes and localizing the affected user sites in the network topology map. The concept is presented in Fig. 5, showing that a synchronous error occurrence in the monitored domain (STBs) can be used to pinpoint the common ancestor in the multicast chain (BNG) where the error likely originated; a minimal sketch of this common-ancestor search follows. Such detection can be further contextualized by using additional data sources (e.g., network management system logs), which can provide a deeper understanding of the nature of detected events.
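The common-ancestor search itself is simple once each affected STB is mapped to its root-to-leaf path in the multicast tree; the following sketch (with invented node names) pinpoints the deepest node shared by all affected paths.

```python
def lowest_common_ancestor(paths: list[list[str]]) -> str:
    """Pinpoint the likely error origin: the deepest multicast-tree node
    shared by every root-to-leaf path of an affected STB.

    Each path runs from the source toward the STB."""
    shared = paths[0]
    for path in paths[1:]:
        common = 0
        for a, b in zip(shared, path):
            if a != b:
                break
            common += 1
        shared = shared[:common]
    return shared[-1]  # deepest common node, e.g. a BNG or MSAN

# Three STBs report synchronous discontinuities; two share an MSAN, all
# three share a BNG, so the error likely originated at or above the BNG.
affected = [["SRC", "BNG-17", "MSAN-3", "STB-a"],
            ["SRC", "BNG-17", "MSAN-3", "STB-b"],
            ["SRC", "BNG-17", "MSAN-9", "STB-c"]]
print(lowest_common_ancestor(affected))  # BNG-17
```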

Correlation with Weather Phenomena

It is well known that lightning strikes create a large amount of impulse noise, which manifests itself as an observable and/or audible disturbance in various communication systems. The effect is felt in analog wireless and wired communications as well as in digital systems (e.g., xDSL). IPTV systems without forward error correction (FEC) are especially susceptible to such disturbances, since a data corruption as small as a single bit flip happening at exactly the right time (i.e., inside an intra-frame, which represents the starting point for decoding multiple seconds of subsequent video) can have significant effects. Such errors are highly localized, and little can be done to eliminate them altogether. Therefore, recognizing the errors that happen due to natural causes is a vital step in the process of explaining away the unavoidable and focusing attention on the preventable. A visualization of such an occurrence is shown in Fig. 6.

Other Use Cases and Further Work

Since many of the described use cases rely on visual analytics, automation would present an important improvement to such a system. For that, the signature characteristics of individual events would have to be captured and suitable algorithms developed to enable automated event discovery.

In addition to the described cases, the collected data also conveys information about how subscribers use and interact with the IPTV system. Such information can be used anonymously in concert with EPG data to provide ratings for individual TV shows. Furthermore, since every channel zapping is reported as well, a large sample of users can be used to detect near-synchronous channel changes, which would imply that undesirable content (e.g., ads) was being broadcast; a minimal sketch of such detection follows.
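The sketch below flags moments when the number of users zapping away from the same channel within a short window exceeds a threshold. The window length and threshold are illustrative, and we assume each zap report can be attributed to the channel being left; a production detector would also normalize by the channel's current audience size.

```python
from collections import Counter, deque

def synchronous_zaps(events, window_s: float = 5.0, threshold: int = 50):
    """Yield (time, channel, count) whenever `threshold` or more users zap
    away from the same channel within `window_s` seconds.

    `events` is an iterable of (timestamp, channel_ip) zap reports sorted
    by time; thresholds here are illustrative choices."""
    window: deque = deque()   # recent (timestamp, channel) reports
    counts: Counter = Counter()
    for ts, channel in events:
        window.append((ts, channel))
        counts[channel] += 1
        # Expire reports older than the sliding window.
        while window and window[0][0] <= ts - window_s:
            _, old = window.popleft()
            counts[old] -= 1
        if counts[channel] >= threshold:
            yield ts, channel, counts[channel]
```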

Lastly, subject to users opting in, personal TV activity data could in the future be stored without anonymization and mined, which would pave the way for different context-aware services. However, this scenario, although commonly employed on the web to serve users better content, is commonly shunned due to the sensitive nature of the data and raises red flags with providers and regulators alike.

    Conclusions

IPTV systems have been widely deployed around the world for years, but have yet to live up to their true potential. The return communication channel in such systems already serves as a basis for many innovative services, but it is often neglected as a means of sensing the state of the entire system from the end user's point of view.

In this article, we have presented the architecture and implementation of a scalable real-time IPTV monitoring system, which has been deployed in cooperation with the Slovenian IPTV provider Telekom Slovenije. The system uses existing STB terminals, upgraded with a data reporting agent, to collect a variety of application- and network-level metrics. To process the large volume of messages generated by the terminals, we employed a message-based, event-driven model. Furthermore, we explored some of the use cases that can be supported by analyzing and visualizing large amounts of collected data. However, the use cases described in this article represent just the tip of the iceberg compared to the true potential of combining such data with a variety of external data sources.

Acknowledgments

The authors would like to thank Telekom Slovenije for excellent cooperation on the research and development project "Automated system for triple-play QoE measurements."

Part of the work was supported by the Ministry of Higher Education, Science and Technology of Slovenia, the Slovenian Research Agency, and the Open Communication Platform Competence Center (OpComm).

Figure 6. Explaining away video quality degradation by comparing IPTV decoding errors with a weather radar map; red areas indicate a high rainfall rate, which often coincides with lightning strikes. Blue dots represent BNGs with low error rates; the magenta dot (indicated by the arrow) represents a BNG with a high error rate. The visualization is sparse due to the small number of volunteers at the time of the pilot deployment and the hour of the report (noon). The errors were also reported by the user, confirming that they coincided with lightning. Weather radar map courtesy of the Slovenian Environment Agency (ARSO).


References

[1] M. N. O. Sadiku and S. R. Nelatury, "IPTV: An Alternative to Traditional Cable and Satellite Television," IEEE Potentials, vol. 30, no. 4, July-Aug. 2011, pp. 44-46.
[2] M. Volk et al., "An Approach to Modeling and Control of QoE in Next Generation Networks [Next Generation Telco IT Architectures]," IEEE Commun. Mag., vol. 48, no. 8, Aug. 2010, pp. 126-35.
[3] P. Gupta, P. Londhe, and A. Bhosale, "IPTV End-to-End Performance Monitoring," Advances in Computing and Communication, Communications in Computer and Information Science, vol. 193, part 5, 2011, pp. 512-23.
[4] J. R. Goodall et al., "Preserving the Big Picture: Visual Network Traffic Analysis with TNV," IEEE Wksp. Visualization for Computer Security, 26 Oct. 2005, pp. 47-54.
[5] J. Valerdi, A. Gonzalez, and F. J. Garrido, "Automatic Testing and Measurement of QoE in IPTV Using Image and Video Comparison," 4th Int'l. Conf. Digital Telecommunications, 20-25 July 2009, pp. 75-81.
[6] M. Umberger, S. Lumbar, and I. Humar, "Modeling the Influence of Network Delay on the User Experience in Distributed Home-Automation Networks," Information Systems Frontiers, vol. 14, no. 3, July 2012, pp. 571-84.
[7] G. M. Lee and N. Crespi, "Shaping Future Service Environments with the Cloud and Internet of Things: Networking Challenges and Service Evolution," Leveraging Applications of Formal Methods, Verification, and Validation, LNCS, vol. 6415, 2010, pp. 399-410.
[8] D. A. Keim et al., "Visual Analytics: Scope and Challenges," Visual Data Mining: Theory, Techniques and Tools for Visual Analytics, LNCS, Springer, 2008.
[9] L. Xiao, J. Gerth, and P. Hanrahan, "Enhancing Visual Analysis of Network Traffic Using a Knowledge Representation," 2006 IEEE Symp. Visual Analytics Science and Technology, Oct. 31-Nov. 2, 2006, pp. 107-14.

Biographies

URBAN SEDLAR ([email protected]) was awarded his Ph.D. by the Faculty of Electrical Engineering, University of Ljubljana, in 2010. He is currently working as a researcher at the Laboratory for Telecommunications of the Faculty of Electrical Engineering. His research focuses on Internet systems and web technologies, QoE in fixed and wireless networks, converged multimedia service architectures, and applications in IoT systems.

MOJCA VOLK ([email protected]) was awarded her Ph.D. by the Faculty of Electrical Engineering, University of Ljubljana, in 2010. She is currently with the Laboratory for Telecommunications as a researcher. Her main research interests include advanced fixed-mobile communications systems and services; converged, contextualized, and IoT solutions; and analysis, visualization, admission control, and quality assurance in next-generation multimedia systems and services.

JANEZ STERLE ([email protected]) graduated in 2003 from the Faculty of Electrical Engineering, University of Ljubljana, where he is currently pursuing a postgraduate degree. His educational, research, and development work is oriented toward the design and development of next-generation networks and services. Current research areas include next-generation Internet protocols, network security, traffic analysis, QoE modeling, QoS measurement, and the development and deployment of new integrated services in fixed and mobile networks.

RADOVAN SERNEC ([email protected]) was awarded his Ph.D. by the Faculty of Electrical Engineering, University of Ljubljana, in 2000. He works in Telekom Slovenije's R&D department as a senior researcher and strategist. His research interests include network architectures and topologies of interconnection networks (including for data centers), sustainable renewable energy models for telco operators, and innovation management within enterprises.

ANDREJ KOS ([email protected]) is an assistant professor at the Faculty of Electrical Engineering, University of Ljubljana. He has extensive research and industrial experience in the analysis, modeling, and design of advanced telecommunications elements, systems, and services. His current work focuses on managed broadband packet switching and next-generation intelligent converged services.