Final Paper/Tutorial Assuring Superior Performance in a Fluid Environment


Page 1: qaistc.comqaistc.com/.../2017/08/...superior-performance-in-a-fluid-e…  · Web viewEnterprises look to bring data much closer to the source to bring in enhanced efficiencies and


Vikas Chakravarthy Srinivas, Associate
Kunal Karnani, Lead Architect

Cognizant Technology Solutions


Abstract

Moving applications and data to the Cloud has become the norm today; enterprises are in fact moving from a ‘Cloud first’ to a ‘Cloud only’ philosophy. While the Cloud brings many advantages, enterprises are beginning to realize some of its limitations in the wake of an all-pervasive computing world. Decentralization techniques are rapidly gaining popularity with the advent of technologies like IoT (Internet of Things) and IoE (Internet of Everything). Optimizing server performance while catering to billions of interconnected devices is a mammoth task, especially if it has to be performed from a centrally located cloud infrastructure.

Many IoT implementations require quick responses to perform real-time actions, so the latency introduced by passing data back and forth between the device and the cloud becomes an overhead. The number of different network conditions adds complexity as well. These challenges have led to disseminated, granular computing infrastructure that sits closer to the edge devices. With this paradigm shift, traditional testing approaches are no longer relevant, and test strategies have to adapt to the changing architecture.

Enterprises look to bring data much closer to the source to gain efficiencies and improve end-user experiences. This has led to the newer, evolved computing models of Edge, Fog and Mist. Each of these models promises greater scalability; the key intent is to reduce the back-and-forth communication between the IoT sensors and the Cloud, which can negatively impact performance.

Here we discuss the strategies to be adopted and the parameters to be monitored to appropriately determine application performance in this new computing era.

In this paper, we will explore the following:

a. Why are enterprises adopting edge, fog and mist computing?

b. How can enterprises be prepared to embrace these evolved computing models?

c. What are the various parameters to be monitored to deliver high performance?

Executive Summary

Decentralization techniques are rapidly gaining popularity with the advent of technologies like IoT (Internet of Things) and IoE (Internet of Everything). Optimizing server performance while catering to billions of interconnected devices is a mammoth task, especially if it has to be performed from a centrally located cloud infrastructure. Since technologies like IoT require quick responses to perform real-time actions, the latency introduced by passing data back and forth between the device and the cloud became an overhead. This led to disseminated, granular computing infrastructure closer to the edge devices. With this paradigm shift, traditional testing approaches are no longer relevant, and test strategies have to adapt to the changing architecture. Here we discuss the strategies to be adopted and the parameters to be monitored to appropriately determine application performance in this new computing era.


Evolution of Fluid Computing: From Cloud to Edge to Fog to Mist

With the revolution caused by the internet (the World Wide Web), storing data on hard disks or other physical devices is no longer the preferred approach, whether for a small business or a large firm. It became easier and cheaper to store data on a virtualized data store and access it via the internet. This led to the origin of the cloud. While the cloud provides benefits such as reliability, security and speed, it becomes functionally infeasible under the sheer volume of data sent back and forth by millions of interconnected devices. The network becomes a bottleneck and the servers themselves become overloaded, as they have to process all the data. This led to the idea of pushing the intelligence (computation and storage) to the edge: the edge devices perform the computational activities, and only the results are sent back to the server for further analytics. This is termed Edge Computing.

This works well when the amount of processing to be performed on the edge is limited. But because edge devices are not powerful enough, it does not scale: it would require every edge device to be capable of performing the computations, which also increases cost. The solution then evolved to offload work from the edge to intermediary intelligent routers/gateways placed between the cloud and the edge devices. These intelligent routers/gateways are termed the Fog, because the Fog is similar to the cloud but closer to the ground/edge.

While this is a valuable evolution, bringing computing down from the cloud to a set of devices (gateways/routers), there are still time lags between the real-time data sent from the edge and the live instructions sent back to it. In case of a network disconnection, the data from the sensors and the actions to be performed by the corresponding devices are lost. So, a set of nodes is systematically placed between the Fog and the Edge. This is termed the Mist, as it is an extension of the fog that is even closer to the edge. Mist devices are adjustable and dynamic: they hold high-level rules offloaded from the cloud/Fog and use them to control the edge devices.

Note: This is similar to edge computing, but differs in that it is not limited to a specific device or communication protocol. Instead of the devices sending data at periodic intervals, the Mist controls the sensors and the actions to be performed by the edge, over a network, requesting data from the sensors as and when needed.
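The pull-based control described above can be sketched as follows. This is an illustrative model only: the class names (`MistNode`, `Sensor`), the rule format and the polling loop are our assumptions, not an API from any mist platform.

```python
class Sensor:
    """An edge sensor that reports a reading only when asked (pull model)."""
    def __init__(self, sensor_id, reading_fn):
        self.sensor_id = sensor_id
        self._read = reading_fn

    def read(self):
        return self._read()


class MistNode:
    """Holds high-level rules offloaded from the fog/cloud and drives the edge."""
    def __init__(self):
        self.rules = []          # (sensor_id, predicate, action_name)
        self.actions_fired = []

    def offload_rule(self, sensor_id, predicate, action_name):
        self.rules.append((sensor_id, predicate, action_name))

    def poll_and_act(self, sensors):
        # The mist node requests data as and when needed, instead of the
        # sensors pushing readings on a fixed interval.
        for sensor in sensors:
            reading = sensor.read()
            for sid, predicate, action in self.rules:
                if sid == sensor.sensor_id and predicate(reading):
                    self.actions_fired.append((action, reading))
        return self.actions_fired


node = MistNode()
# Hypothetical rule offloaded from the fog: start a fan when it gets hot.
node.offload_rule("temp-1", lambda t: t > 30.0, "start_fan")
fired = node.poll_and_act([Sensor("temp-1", lambda: 32.5)])
print(fired)  # [('start_fan', 32.5)]
```

The point of the sketch is the direction of control: the mist polls and acts locally, so the edge device needs no intelligence of its own and no round trip to the cloud occurs.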


Figure 1: Evolution of Fluid Computing

Why Adopt Fog/Mist?

The two factors that predominantly affect the performance of a system are the amount of processing to be done and the amount of data communicated over the network. In an IoT environment, reducing these two performance-impacting aspects is crucial to providing a good user experience. This led to the development of the Fog and Mist architectures.
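A back-of-the-envelope model makes the two cost factors above concrete. All numbers below (payload sizes, bandwidth, round-trip times, processing times) are illustrative assumptions, not measurements.

```python
def response_time_ms(payload_kb, bandwidth_kbps, rtt_ms, processing_ms):
    """Rough end-to-end time for one request/response cycle:
    transfer time + network round trip + server-side processing."""
    transfer_ms = (payload_kb * 8 / bandwidth_kbps) * 1000
    return transfer_ms + rtt_ms + processing_ms

# Raw sensor stream shipped to a distant cloud: large payload, long round trip.
cloud = response_time_ms(payload_kb=500, bandwidth_kbps=1000, rtt_ms=120, processing_ms=40)

# Pre-aggregated result handled by a nearby mist node: tiny payload, short hop.
mist = response_time_ms(payload_kb=5, bandwidth_kbps=1000, rtt_ms=5, processing_ms=10)

print(round(cloud), round(mist))  # 4160 55
```

Even with crude numbers, moving less data over a shorter hop dominates the result, which is exactly the rationale for pushing computation toward the edge.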

This architectural pattern was conceived to achieve 3 simple goals:

1. Bring the data closer to the edge/user.
2. Provide better security measures through compartmentalization and decentralization.
3. Improve performance by reducing data movement over the network.

To overcome the problems of traditional cloud computing framework and limitations posed by smart edge devices, the fluid computing framework was adopted.


Figure 2: Adopting the Fluid Way

Consolidated Architecture for a Fluid Environment

The high-level architecture of a Fluid environment comprises a centrally located cloud infrastructure, which talks to geographically distributed cloudlets; these in turn talk to the smart routers and gateways of the Fog environment, which communicate with systematically placed Mist nodes, which finally communicate with the sensors and other edge devices.

The centrally located cloud setup takes care of the major processing. Geo-distributed cloudlets have become predominant as cloud vendors realized the performance glitches caused by long-distance data travel. These cloudlets offload some of the processing work from the central cloud and cater to the needs of spatially proximate devices and users.

The fog layer, comprising smart gateways and switches, extends the services of the cloud and cloudlets, such as data storage, computing capabilities and specific application services. The next layer is the Mist, which comprises nodes/rule engines located in close proximity to the edge; it interconnects the edge devices and sensors to perform actions based on real-time data.


Figure 3: Fluid Computing Overall Architecture

Challenges and Focus Points

This novel architecture, although ideal on paper for improving efficiency, comes with its own set of challenges. The four key challenges/considerations are:

Figure 4: Challenges and Focus Points


Key Differences and Considerations

The N-tier fluid architecture has its functionality distributed across the layers. Performance testing is no longer as straightforward as the traditional approaches previously followed. The key considerations and touch points are listed below:

Figure 5: Differentiating Factors of Fluid Model

Based on the shift in NFT touch points, the NFRs also vary. A high-level overview of activities, tools and touch points is listed below.

Figure 6: NFT Touch Points and Associated NFRs

| Touch Points | General 3-Tier Architecture Performance Testing | Fluid Architecture Performance Testing |
| --- | --- | --- |
| Volume Range | Traditional applications: a few thousand users | IoT applications: a few billion users and devices |
| Data Per Request | Comparatively larger (standard HTTP/S or any web protocol) | Minimal and lightweight (MQTT, AMQP, CoAP, ZigBee, Z-Wave) |
| Spatial Distribution | Possibly geo-distributed, with servers hosted across the globe (may also run on a global cloud) | Multiple layers reaching beyond the cloud, down to locations across streets, cities, states and countries worldwide |
| Communication Channel | Traditional networks: wired, Wi-Fi, cellular (3G/4G) etc. | Varied: Bluetooth, Z-Wave, Thread, NFC, Wi-Fi, cellular, Neul, LoRaWAN etc. |
| Server-Side Processing | Could range from a general static response to complex calculation and analytics | Layer-specific processing, generally analytics-based (BI); comprises data collation and processing |
| PT Activity | | Simulating real-time scenarios/devices (concurrently); injecting millions of events using performance testing tools; test types: load, stress, endurance, scalability and volume |
| Testing Tools | | Device simulation: IBM Bluemix, ThingWorx etc. Connectivity emulation: Charles Proxy, Fiddler, Tmeter, TrafficReg etc. Load generation: LoadRunner, NeoLoad, JMeter, VSTS etc. Monitoring: Splunk, AppDynamics, JConsole, New Relic, Dynatrace etc. |
| Metrics and Validation | | Edge-side metrics: concurrent users, response times (including latency), throughput, hits per second, volumes. Layer-level (server-side) metrics: CPU utilization, memory utilization, threads, cache, connection pools, wait events, long-running queries, locks |
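The edge-side metrics above (response times, throughput, hits per second) are derived from raw per-request samples, much as a load-generation tool reports them. Here is a minimal sketch of that derivation; the sample values and the nearest-rank percentile choice are ours, for illustration.

```python
def edge_metrics(samples_ms, window_s):
    """Summarize response-time samples (ms) collected over a window of window_s seconds."""
    ordered = sorted(samples_ms)
    n = len(ordered)
    p90_index = max(0, int(n * 0.9) - 1)  # simple nearest-rank 90th percentile
    return {
        "avg_ms": sum(ordered) / n,
        "p90_ms": ordered[p90_index],
        "throughput_rps": n / window_s,
    }

# Ten made-up request timings completed within a 5-second window.
m = edge_metrics([80, 95, 100, 110, 120, 150, 200, 210, 250, 900], window_s=5)
print(m)  # {'avg_ms': 221.5, 'p90_ms': 250, 'throughput_rps': 2.0}
```

Note how the single 900 ms outlier pulls the average far above the 90th percentile, which is why percentile targets rather than averages are usually written into NFRs.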


A Customer-Centric Approach to High Performance

With varying technologies and platforms, humongous volumes of data from different devices, and interconnections with third-party software, assuring quality amidst these challenges is by no means an easy task. Below is a multi-pronged approach for testing the fluid environment.

Figure 7: Customer Centric Approach

Edge Layer

The Edge layer comprises the various sensors, mobile devices and other edge devices prevalent in the IoT ecosystem. For performance testing purposes, we use a mix of emulators, simulators and real-world devices to validate device capabilities. Here we assess device capability and performance against the processing power, battery, memory etc. available on each device. The various APIs and their performance are also tested on the edge devices themselves.
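One way to frame the on-device assessment above is as a budget check: run the computation the edge device would perform locally and verify it stays within assumed time and memory budgets. The handler (`smooth_readings`) and the budget numbers are hypothetical; a real test would run on the target device, not a workstation.

```python
import time
import tracemalloc

def smooth_readings(raw):
    """Stand-in for local edge computation: a 3-point moving average."""
    return [sum(raw[i:i + 3]) / 3 for i in range(len(raw) - 2)]

def within_budget(fn, arg, max_ms, max_kb):
    """Run fn(arg) once, measuring elapsed wall time and peak Python allocations."""
    tracemalloc.start()
    start = time.perf_counter()
    fn(arg)
    elapsed_ms = (time.perf_counter() - start) * 1000
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed_ms <= max_ms and peak_bytes / 1024 <= max_kb

# Assumed budgets: 500 ms of processing and 4 MB of working memory.
ok = within_budget(smooth_readings, list(range(10_000)), max_ms=500, max_kb=4096)
print(ok)
```

The same harness can be pointed at each local API in turn, turning the informal "does it fit on the device?" question into a repeatable pass/fail check.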

Communication Layer

The communication layer consists of various network providers and the varying bandwidths available for communication. For performance testing purposes, we simulate various network conditions and devices from various locations. Communication protocols like Wi-Fi, NFC, Bluetooth, ZigBee, Z-Wave etc. are emulated for data communication purposes. Here we assess the impact of low bandwidth, network latency and network disconnections on the performance of the system.
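The effect of degraded network conditions can be sketched with a toy simulation. The network profiles, the retry-on-drop model and all the numbers below are our assumptions; in practice a network conditioner such as Charles Proxy or Fiddler, as noted above, would shape real traffic instead.

```python
import random

def simulate_request(base_ms, added_latency_ms, loss_rate, rng, timeout_retry_ms=2000):
    """One request under an emulated network; each dropped packet costs a retry."""
    elapsed = base_ms + added_latency_ms
    while rng.random() < loss_rate:      # each drop forces a timeout and a resend
        elapsed += timeout_retry_ms + added_latency_ms
    return elapsed

rng = random.Random(42)                  # fixed seed so the run is repeatable
good_wifi = [simulate_request(50, 5, 0.00, rng) for _ in range(1000)]
weak_cell = [simulate_request(50, 120, 0.05, rng) for _ in range(1000)]

avg_wifi = sum(good_wifi) / len(good_wifi)
avg_cell = sum(weak_cell) / len(weak_cell)
print(avg_wifi < avg_cell)  # True: latency plus drops inflate response times
```

Even a 5% loss rate dominates the averages once retries are counted, which is why disconnection scenarios deserve their own test cases rather than being folded into a latency figure.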

Mist Layer

The mist layer consists of nodes/servers that execute algorithms to process data from the various sensors and either directly instruct the edge devices to perform some action or send the data upwards to the cloud for further processing and analytics. For performance testing purposes, we generate geo-distributed load over varying data types and networks for a limited set of sensors and connected devices, which talk to the Mist server in closest proximity. Here we assess the capacity of the Mist servers to perform computations on data from various edge devices.
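Geo-distributed load generation against mist nodes can be sketched as below: each simulated sensor routes its readings to the nearest mist server, and we count the per-server load. The server names, coordinates and sensor positions are made-up assumptions standing in for real topology data.

```python
import threading

# Hypothetical mist servers with planar coordinates standing in for geography.
MIST_SERVERS = {"mist-east": (0, 0), "mist-west": (100, 0)}

def nearest_server(sensor_pos):
    # Squared Euclidean distance suffices for picking the closest node.
    return min(MIST_SERVERS, key=lambda s: (MIST_SERVERS[s][0] - sensor_pos[0]) ** 2
                                         + (MIST_SERVERS[s][1] - sensor_pos[1]) ** 2)

load = {name: 0 for name in MIST_SERVERS}
lock = threading.Lock()

def sensor_worker(pos, readings):
    target = nearest_server(pos)
    for _ in range(readings):
        with lock:                 # the shared counter stands in for a real send
            load[target] += 1

threads = [threading.Thread(target=sensor_worker, args=(pos, 100))
           for pos in [(5, 2), (8, 1), (90, 3)]]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(load)  # {'mist-east': 200, 'mist-west': 100}
```

The resulting per-server counts show why mist capacity has to be assessed per node: proximity routing concentrates load unevenly, so the busiest mist server, not the average, sets the capacity requirement.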


Fog Layer

The fog layer comprises smart routers and switches that offload some of the processing capabilities from the cloud. These intelligent routers and switches perform computations alongside routine data routing. For performance testing purposes, we emulate load from the Mist servers and, depending on the use case, even from edge devices, to validate the capacity of the smart routers and switches. This generally follows the standard communication channel used by the Mist servers to talk to the routers, so network emulation may not involve channels like Z-Wave or ZigBee; a more traditional wireless/wired network emulation may suffice (depending on the use case). If the architecture allows edge devices to communicate directly with the Fog layer, then varied communication channel emulation might be required. Here we assess the number of Mist nodes/servers, and in some cases the number of edge devices, that the router can connect to while processing the data and either responding to the layers below or sending the processed data upwards to the cloud.

Cloud Layer

The cloud layer is where the major computations happen. It receives data from the devices/lower layers, performs analytics and responds with the necessary instructions or analysis. For performance testing, the regular cloud testing approach can be adopted, as communication happens only via the Mist/Fog layer, which standardizes the data format and uses a proper wireless or wired network to communicate with the cloud. Geo-distribution/location emulation would still be required for testing a spatially distributed cloud server. Here we assess the various properties of the cloud such as elasticity, scalability, self-healing, disaster recovery, persistence etc.

Figure 8: Solution Overview

Lastly, once each of the layers is tested in isolation, an end-to-end test is carried out to evaluate how the layers work in tandem and to validate performance in real time. Monitoring and analytical tools can be leveraged to analyse the breakup of response time at each layer, to drill down to layer-level bottlenecks when testing the end-to-end system.
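The layer-level breakup described above amounts to aggregating per-request timings tagged by layer. A minimal sketch, with layer names and timings as illustrative assumptions:

```python
def breakdown(traces):
    """traces: one dict per request, mapping layer name -> time spent (ms)."""
    totals = {}
    for trace in traces:
        for layer, ms in trace.items():
            totals[layer] = totals.get(layer, 0) + ms
    n = len(traces)
    return {layer: total / n for layer, total in totals.items()}

# Two made-up end-to-end request traces with per-layer timings.
avg = breakdown([
    {"edge": 10, "mist": 20, "fog": 35, "cloud": 140},
    {"edge": 12, "mist": 18, "fog": 45, "cloud": 160},
])
bottleneck = max(avg, key=avg.get)
print(avg, bottleneck)  # the cloud hop dominates in this made-up run
```

In a real setup the traces would come from a monitoring tool (the Splunk/Dynatrace class of products mentioned earlier), but the aggregation logic is the same: average per layer, then rank to find the bottleneck.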

Benefits and Implications

Performance assurance directly translates to usability and customer retention, which in turn translate to business growth and monetary profits.

The following categories best describe the implications of appropriately testing the performance of the fluid environment.

Speed: assess the application's responsiveness to large user/device loads and data volumes under varying network conditions.

Scalability: assess whether the application can cater to the expected volumes of the future.

Stability: assess the ability to withstand adverse conditions.


Sustainability and the Future

IoT and IoE are revolutionizing the world, and the current model of a centralized cloud environment is no longer sufficient. Fluid architectures have to be adopted to meet user expectations of usability, performance and responsiveness. The benefits, such as lower latency, geographic distribution, reduced jitter, improved mobility, cost efficiency and flexibility, are obvious, but assuring best-in-class quality of service in a fluid environment can be challenging. We recommend the multi-pronged approach: test each layer in isolation, then run an end-to-end test validating all the components together, to ensure uninterrupted, accelerated and undistorted services to customers. While fluid ecosystems are becoming more and more granular, they are still the way forward, and assuring quality is essential to remain in business.

References & Appendix

Sinclair, B. (2016, August 24). Mist Computing. Retrieved from IoT Inc: http://www.iot-inc.com/mist-computing-internet-of-things-on-the-edge-video/

Author Biography

Kunal Karnani is a Lead Architect with Cognizant's Quality Engineering and Assurance business unit. For the last 12 years, Kunal has played a variety of roles in the performance testing and engineering space with a gamut of customers from different geographies. His current responsibilities include creating and evangelizing emerging NFT service lines in the areas of IoT, DevOps and Cloud. He can be reached at [email protected]


Vikas Chakravarthy Srinivas has over 7 years of IT experience in roles ranging from performance test engineer to project lead for a variety of BFSI clients. Currently, Vikas is a core member of Cognizant's Non-Functional Testing Center of Excellence, working on cutting-edge technologies such as IoT, AI, micro-services and chat-bots. Vikas has a bachelor's degree in Computer Science engineering from SASTRA University (India) and an MS in Software Engineering from Cranfield University (UK). He can be reached at [email protected]

THANK YOU!