

A Survey on Edge Computing Systems and Tools

Fang Liu, Guoming Tang, Youhuizi Li, Zhiping Cai, Xingzhou Zhang, Tongqing Zhou

Abstract—Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Nowadays, edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues for analyzing and designing an edge computing system are also studied in this survey.

I. INTRODUCTION

In the post-Cloud era, the proliferation of the Internet of Things (IoT) and the popularization of 4G/5G are gradually changing the public's habits of accessing and processing data, and challenging the linearly increasing capability of cloud computing. Edge computing is a new computing paradigm in which data is processed at the edge of the network. Promoted by the fast-growing demand and interest in this area, edge computing systems and tools are blooming, even though some of them may not be widely used yet.

There are many classification perspectives for distinguishing different edge computing systems. To explain why edge computing appeared as well as its necessity, we pay more attention to the basic motivations. Specifically, based on different design demands, existing edge computing systems can roughly be classified into three categories, together yielding innovations in system architecture, programming models, and various applications, as shown in Fig. 1.

• Push from cloud. In this category, cloud providers push services and computation to the edge in order to leverage locality, reduce response time, and improve user experience. Representative systems include Cloudlet, Cachier, AirBox, and CloudPath. Many traditional cloud computing service providers are actively pushing cloud services closer to users, shortening the distance between customers and cloud computing, so as not to lose the market to mobile edge computing. For example, Microsoft launched

F. Liu is with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou, Guangdong, China (e-mail: [email protected]). G. Tang is with the Key Laboratory of Science and Technology on Information System Engineering, National University of Defense Technology, Changsha, Hunan, China (e-mail: [email protected]). Y. Li is with the School of Computer Science and Technology, Hangzhou Dianzi University, China. Z. Cai and T. Zhou are with the College of Computer, National University of Defense Technology, Changsha, Hunan, China (e-mail: [email protected], [email protected]). X. Zhang is with the State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, China.

Fig. 1. Categorization of edge computing systems. (Figure: systems such as Cloudlet, Cachier, CloudPath, AirBox, PCloud, ParaDrop, FocusStack, SpanEdge, Firework, Cloud-sea computing, and others, e.g., Vigilia, HomePad, LAVEA, MUVR, OpenVDAP, SafeShareRide, VideoEdge, are grouped under the "Push from cloud", "Pull from IoT", and "Hybrid cloud-edge analytics" demands, spanning architecture, programming models, and applications; open-source systems such as CORD, EdgeX Foundry, Apache Edgent, Azure IoT Edge, and Akraino Edge Stack support them.)

AzureStack in 2017, which allows cloud computing capabilities to be integrated into the terminal so that data can be processed and analyzed on the terminal device.

• Pull from IoT. Internet of Things (IoT) applications pull services and computation from the faraway cloud to the near edge to handle the huge amount of data generated by IoT devices. Representative systems include PCloud, ParaDrop, FocusStack, and SpanEdge. Advances in embedded Systems-on-a-Chip (SoCs) have given rise to many IoT devices that are powerful enough to run embedded operating systems and complex algorithms. Many manufacturers integrate machine learning and even deep learning capabilities into IoT devices. Utilizing edge computing systems and tools, IoT devices can effectively share computing, storage, and network resources while maintaining a certain degree of independence.

• Hybrid cloud-edge analytics. The integration of the advantages of cloud and edge provides a solution that facilitates both globally optimal results and minimum response time in modern advanced services and applications. Representative systems include Firework and Cloud-Sea Computing Systems. Such edge computing systems utilize the processing power of IoT devices to filter, pre-process, and aggregate IoT data, while employing the power and flexibility of cloud services to run complex analytics on those data. For example, Alibaba Cloud launched its first IoT edge computing product, LinkEdge, in 2018, which expands its advantages in cloud computing, big data, and artificial intelligence to the edge to build a cloud/edge integrated collaborative computing system; Amazon released AWS Greengrass in 2017, which can extend AWS seamlessly to devices so that devices can

arXiv:1911.02794v1 [cs.DC] 7 Nov 2019


Fig. 2. Major building blocks and organization of this survey paper. (Figure panels: Edge Computing Systems & Tools (Sec. II), Open Source Edge Computing Projects (Sec. III), Energy Efficiency (Sec. IV), and Deep Learning Optimization at the Edge (Sec. V), organized along a System View and an Application View.)

perform local operations on the data they generate, while data is transferred to the cloud for management, analysis, and storage.
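The hybrid cloud-edge pattern described above can be sketched as follows: the edge filters and aggregates raw IoT readings so that only a compact summary is shipped to the cloud for heavier analytics. All names (`edge_summarize`, `cloud_analyze`) and the threshold rule are illustrative assumptions, not part of any system surveyed here.

```python
def edge_summarize(readings, threshold=50.0):
    """Edge side: filter out noise locally and aggregate what remains."""
    significant = [r for r in readings if r >= threshold]
    if not significant:
        return None  # nothing worth uploading
    return {
        "count": len(significant),
        "mean": sum(significant) / len(significant),
        "max": max(significant),
    }

def cloud_analyze(summaries):
    """Cloud side: global analytics over many edge summaries."""
    means = [s["mean"] for s in summaries if s is not None]
    return sum(means) / len(means) if means else 0.0

# Each edge node uploads a few bytes instead of the raw stream.
edge1 = edge_summarize([10.0, 60.0, 80.0])
edge2 = edge_summarize([55.0, 70.0])
print(cloud_analyze([edge1, edge2]))  # global mean across edges: 66.25
```

The bandwidth saving comes from the edge-side reduction: the cloud sees one small record per node per window rather than the full sensor stream.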

From a research point of view, this paper gives a detailed introduction to the distinctive ideas and model abstractions of the aforementioned edge computing systems. Note that the three categories are presented to clearly explain the necessity of edge computing; the classification is not the main line of this paper. Specifically, we first review systems designed for architecture innovation, then introduce those for programming models and applications (in Sec. II). Besides, some recent efforts for specific application scenarios are also studied.

While we can find a lot of systems using edge computing as the building block, such a paradigm still lacks standardization. Therefore, a comprehensive and coordinated set of foundational open-source systems/tools is also needed to accelerate the deployment of IoT and edge computing solutions. Some open source edge computing projects have been launched recently (e.g., CORD). As shown in Fig. 1, these systems can support the design of both architecture and programming models with useful APIs. We review these open source systems with a comparative study of their characteristics (in Sec. III).

When designing the edge computing systems described above, energy efficiency is always considered one of the major concerns, as the edge hardware is energy-restricted. Meanwhile, the increasing number of IoT devices is bringing growth in energy-hungry services. Therefore, we also review the energy efficiency enhancing mechanisms adopted by state-of-the-art edge computing systems from the three-layer paradigm of edge computing (in Sec. IV).

In addition to investigations from the system view, we also look into the emerging techniques in edge computing systems from the application view. Recently, deep learning based Artificial Intelligence (AI) applications have been widely used, and offloading AI functions from the cloud to the edge is becoming a trend. However, deep learning models are known for being large and computationally expensive. Traditionally, many systems and tools are designed to run deep learning models efficiently on the cloud. Due to the multi-layer structure of deep learning, it fits the edge computing paradigm well, and more of its functions can be offloaded to the edge. Accordingly, this paper also studies the new techniques recently

Fig. 3. Cloudlet component overview and functions that support application mobility. A: Cloudlet Discovery, B: VM Provisioning, C: VM Handoff.

proposed to support deep learning models at the edge (in Sec. V).

Our main contributions in this work are as follows:

• Reviewing existing systems and open source projects for edge computing by categorizing them by their design demands and innovations. We study the targets, architecture, characteristics, and limitations of the systems in a comparative way.

• Investigating the energy efficiency enhancing mechanisms for edge computing from the view of the cloud, the edge servers, and the battery-powered devices.

• Studying the technological innovations dedicated to deploying deep learning models on the edge, including systems and toolkits, packages, and hardware.

• Identifying challenges and open research issues of edge computing systems, such as mobility support, multi-user fairness, and privacy protection.

We hope this effort will inspire further research on edge computing systems. The contents and their organization in this paper are shown in Fig. 2. Besides the four major building blocks (Sec. II∼Sec. V), we also give a list of open issues for analyzing and designing an edge computing system in Sec. VI. The paper is concluded in Sec. VII.

II. EDGE COMPUTING SYSTEMS AND TOOLS

In this section, we review edge computing systems and tools presenting architecture innovations, programming models, and applications, respectively. For each part, we introduce work under the "push", "pull", and "hybrid" demands in turn.

A. Cloudlet

In 2009, Carnegie Mellon University proposed the concept of Cloudlet [1], and the Open Edge Computing initiative also evolved from the Cloudlet project [2]. A Cloudlet is a trusted, resource-rich computer or cluster of computers that is well-connected to the Internet and available to nearby mobile devices. It upgrades the original two-tier "Mobile Device-Cloud" architecture of mobile cloud computing to a three-tier "Mobile Device-Cloudlet-Cloud" architecture. Meanwhile, a Cloudlet can also serve users like an independent cloud, making it a "small cloud" or "data center in a box". Although


the Cloudlet project was not proposed and launched in the name of edge computing, its architecture and ideas fit those of edge computing, and thus it can be regarded as an edge computing system.

The Cloudlet is in the middle layer of the three-tier edge computing architecture and can be implemented on a personal computer, a low-cost server, or a small cluster. It can be composed of a single machine or a small cluster consisting of multiple machines. Like WiFi service access points, a Cloudlet can be deployed at a convenient location (such as a restaurant, a cafe, or a library). Multiple Cloudlets may form a distributed computing platform, which can further extend the available resources for mobile devices [3]. As the Cloudlet is just one hop away from users' mobile devices, it improves QoS with low communication delay and high bandwidth utilization.

In detail, Cloudlet has three main features, as follows.

1) Soft State: A Cloudlet can be regarded as a small cloud computing center located at the edge of the network. Therefore, as the server end of the application, the Cloudlet generally needs to maintain state information for interacting with clients. However, unlike the cloud, a Cloudlet does not maintain long-term state information for interactions, but only temporarily caches some state information. This relieves much of the burden on the Cloudlet as a lightweight cloud.

2) Rich Resources: A Cloudlet has sufficient computing resources to enable multiple mobile users to offload computing tasks to it. Besides, a Cloudlet also has a stable power supply, so it does not need to worry about energy exhaustion.

3) Close to Users: Cloudlets are deployed at places where both the network distance and the physical distance to the end user are short, making it easy to control network bandwidth, delay, and jitter. Besides, the physical proximity ensures that the Cloudlet and the user are in the same context (e.g., the same location), based on which customized services (e.g., location-based services) could be provided.
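The soft-state feature above can be illustrated with a small time-to-live cache: the Cloudlet keeps interaction state only temporarily and lets it expire, rather than persisting it long-term as the cloud would. This is a minimal sketch for illustration, not Cloudlet's actual implementation; the class name and TTL policy are assumptions.

```python
import time

class SoftStateCache:
    """Illustrative soft-state store: entries expire after a TTL,
    mimicking a cloudlet that only temporarily caches client state
    rather than maintaining it long-term like the cloud."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() > expiry:
            del self._store[key]  # state silently expires
            return None
        return value

cache = SoftStateCache(ttl_seconds=0.05)
cache.put("session-42", {"user": "alice"})
print(cache.get("session-42"))  # fresh: returns the cached state
time.sleep(0.1)
print(cache.get("session-42"))  # expired: None, as soft state allows
```

Because expired state can always be reconstructed from the client or the cloud, losing it costs only latency, which is what makes the Cloudlet "lightweight".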

To further promote Cloudlet, CMU established an open edge computing alliance with Intel, Huawei, and other companies [2] to develop standardized APIs for Cloudlet-based edge computing platforms. Currently, the alliance has ported OpenStack to the edge computing platform, which enables distributed Cloudlet control and management via the standard OpenStack APIs [4]. With the recent development of edge computing, the Cloudlet paradigm has been widely adopted in various applications, e.g., cognitive assistance systems [5], [6], Internet of Things data analysis [7], and hostile environments [8].

Unlike the cloud, Cloudlets are deployed at the edge of the network and serve only nearby users. Cloudlet supports application mobility, allowing devices to switch service requests to the nearest Cloudlet while moving. As shown in Fig. 3, Cloudlet support for application mobility relies on three key steps.

1) Cloudlet Discovery: Mobile devices can quickly discover the available Cloudlets around them and choose the most suitable one to offload tasks to.

2) VM Provisioning: Configuring and deploying the service VM that contains the server code on the Cloudlet so that it is ready to be used by the client.

Fig. 4. CloudPath Architecture [9].

3) VM Handoff: Migrating the VM running the application to another Cloudlet.
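Step 1 above can be sketched as a simple selection problem: the device probes nearby Cloudlets and offloads to the lowest-latency one that still has capacity. The `Cloudlet` record and the selection rule here are illustrative assumptions, not the actual discovery protocol.

```python
from dataclasses import dataclass

@dataclass
class Cloudlet:
    name: str
    rtt_ms: float    # round-trip time measured from the device
    free_cores: int  # advertised spare capacity

def discover_best(cloudlets, min_cores=1):
    """Pick the reachable Cloudlet with the lowest RTT and spare capacity."""
    candidates = [c for c in cloudlets if c.free_cores >= min_cores]
    if not candidates:
        return None  # fall back to the distant cloud
    return min(candidates, key=lambda c: c.rtt_ms)

nearby = [
    Cloudlet("cafe-cloudlet", rtt_ms=4.0, free_cores=2),
    Cloudlet("library-cloudlet", rtt_ms=2.5, free_cores=0),  # busy
    Cloudlet("campus-cloudlet", rtt_ms=9.0, free_cores=8),
]
best = discover_best(nearby)
print(best.name)  # cafe-cloudlet: lowest latency with free capacity
```

A real discovery step would measure RTT and query capacity over the network; the point is that proximity (one hop, low RTT) dominates the choice, which is exactly the "close to users" property.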

B. CloudPath

CloudPath [9] is an edge computing system proposed by the University of Toronto. In such a system, diverse resources like computing and storage are provided along the path from the user device to the cloud data center. It supports on-demand allocation and dynamic deployment over the multi-level architecture. The main idea of CloudPath is to implement so-called "path computing", such that it can reduce response time and improve bandwidth utilization compared with conventional cloud computing.

As illustrated in Fig. 4, the bottom layer of CloudPath is user devices, and the top layer is the cloud computing data center. The system reassigns the tasks of the data centers along the path (for path computing) to support different types of applications, such as IoT data aggregation, data caching services, and data processing services. Developers can select an optimal hierarchical deployment plan for their services by considering factors such as cost, delay, resource availability, and geographic coverage. Path computing builds a multi-tiered architecture, and from the top (traditional data center) to the bottom (user terminal equipment), the device capability becomes weaker while the number of devices gets larger. On the premise of a clear separation of computing and state, CloudPath extends the abstract shared storage layer to all data center nodes along the path, which reduces the complexity of third-party application development and deployment, and meanwhile keeps the RESTful development style.

The CloudPath application consists of a set of short-cycle, stateless functions that can be quickly instantiated at any level of the CloudPath framework. Developers either tag functions to specify where (such as the edge, the core, or the cloud) their code runs, or tag performance requirements (such as


Fig. 5. PCloud Architecture [10]. (Figure: applications APP1-APP3 map to instances composed by a Personal Cloud Runtime on top of the STRATUS virtual platform, which pools networked local, edge, and cloud resources; services are intent-initiated or application-created.)

response latency) so that the system can estimate the running location. CloudPath does not migrate a running function/module, but supports service mobility by stopping the current instance and re-starting a new one at the expected location. Each CloudPath node usually consists of the following six modules. 1) PathExecute: implements a serverless cloud container architecture that supports lightweight stateless application functions/modules. 2) PathStore: provides a distributed, eventually consistent storage system that transparently manages application data across nodes. 3) PathRoute: transmits each request to the most appropriate CloudPath node, according to information such as the user's location in the network, application preferences, or system status. 4) PathDeploy: dynamically deploys and removes applications on CloudPath nodes based on application preferences and system policies. 5) PathMonitor: provides real-time monitoring and historical data analysis functions to applications and CloudPath nodes. It collects the metrics of other CloudPath modules on each node through PathStore and presents the data to users via web pages. 6) PathInit: an initialization module in the top-level data center node that can be used to upload applications/services to CloudPath.
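The tagging scheme above can be sketched as follows: a stateless function is annotated with either a preferred tier or a latency requirement, and a PathDeploy-like step picks a node for it. The decorator, the tier table, and the selection rule are assumptions for illustration, not CloudPath's real API.

```python
PLACEMENT = {}  # function name -> deployment hints

def cloudpath_function(tier=None, max_latency_ms=None):
    """Illustrative tag: record where a stateless function should run."""
    def wrap(fn):
        PLACEMENT[fn.__name__] = {"tier": tier, "max_latency_ms": max_latency_ms}
        return fn
    return wrap

@cloudpath_function(tier="edge")
def aggregate_readings(values):
    return sum(values) / len(values)

@cloudpath_function(max_latency_ms=20)
def detect_anomaly(value, limit=100.0):
    return value > limit

# Assumed tier latencies, rising from the edge toward the data center.
NODE_LATENCY_MS = {"edge": 5, "core": 15, "cloud": 60}

def choose_node(fn_name):
    """PathDeploy-like step: honor an explicit tier, else pick the
    highest (most capable) tier that meets the latency bound."""
    hints = PLACEMENT[fn_name]
    if hints["tier"]:
        return hints["tier"]
    ok = [t for t, l in NODE_LATENCY_MS.items() if l <= hints["max_latency_ms"]]
    return max(ok, key=lambda t: NODE_LATENCY_MS[t]) if ok else "edge"

print(choose_node("aggregate_readings"))  # tagged explicitly: edge
print(choose_node("detect_anomaly"))      # 20 ms bound rules out cloud: core
```

Because the functions are stateless, "mobility" is just re-running `choose_node` and re-instantiating at the new tier, matching the stop-and-restart semantics described above.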

C. PCloud

PCloud [10] integrates the edge computing and storage resources with those at the cloud to support seamless mobile services. The architecture of PCloud is shown in Fig. 5. Specifically, these resources are virtualized through a special virtualization layer named STRATUS [11] and form a distributed resource pool that can discover new resources and monitor resource changes. With the resource pool, the runtime mechanism is responsible for resource application and allocation. Through a resource description interface, the runtime mechanism selects and combines appropriate resources based on the requirements of specified applications. After the resources are combined, it generates a new instance to provide the corresponding services for external applications, according to the resource access control policy. (Note that

Fig. 6. ParaDrop System [12].

the computing resources of the newly generated instance may come from multiple devices, which is equivalent to one integrated computing device for the external applications.) Thus, an application program is actually a combination of services running on the PCloud instance. For example, a media player application can be a combination of storage, decoding, and playing services. These services could be local or remote, but are transparent to the applications. Furthermore, the PCloud system also provides basic system services, such as permission management and user data aggregation, to control the resource access of other users.

In actual operation, the mobile application describes the required resources to PCloud through interfaces. PCloud then finds the optimal resource configuration by analyzing the description and the currently available resources, and generates an instance to provide the corresponding services for the application. PCloud integrates edge resources with cloud resources so that they can complement each other. The abundant resources of the cloud can make up for the lack of computing and storage capabilities at the edge; meanwhile, due to physical proximity, edge devices can provide low-latency services to the user that the cloud cannot offer. In addition, PCloud also enhances the availability of the entire system and can choose alternative resources when encountering network and equipment failures.
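The describe-then-compose flow above can be sketched as a matching problem: the application states the services it needs, and a runtime assembles an instance from pool entries that may live on different devices. The pool layout, the preference for earlier (nearer) entries, and all names are illustrative assumptions, not PCloud's interfaces.

```python
# A distributed resource pool, nearer resources listed first.
pool = [
    {"device": "phone",   "type": "decode",  "capacity": 1},
    {"device": "edge-tv", "type": "display", "capacity": 1},
    {"device": "nas",     "type": "storage", "capacity": 500},     # GB
    {"device": "cloud",   "type": "storage", "capacity": 10_000},  # GB
]

def allocate(requirements):
    """Match each required service to the first adequate resource,
    preferring earlier (nearer) entries in the pool."""
    instance = {}
    for need, amount in requirements.items():
        match = next(
            (r for r in pool if r["type"] == need and r["capacity"] >= amount),
            None,
        )
        if match is None:
            return None  # cannot compose an instance for this request
        instance[need] = match["device"]
    return instance

# A media player composed of storage + decoding + display services,
# drawn from different physical devices but acting as one instance.
print(allocate({"storage": 100, "decode": 1, "display": 1}))
```

Note how a large storage request naturally falls through to the cloud entry, which is the complementarity the text describes: edge resources for proximity, cloud resources for abundance.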

D. ParaDrop

ParaDrop [12] is developed by the WiNGS Laboratory at the University of Wisconsin-Madison. It is an edge computing framework that makes the computing/storage resources close to mobile devices and data sources available to third-party developers. Its goal is to bring intelligence to the network edge in a friendly way.

ParaDrop upgrades the existing access point to an edge computing system that supports applications and services like a normal server. To isolate applications in multi-tenancy scenarios, ParaDrop leverages the lightweight container virtualization technique. As Fig. 6 shows, the ParaDrop server (in the cloud) controls the deployment, starting, and deletion


Fig. 7. SpanEdge Architecture [13]. (Figure: a manager coordinates first-tier hub-workers and second-tier spoke-workers.)

of the applications. It provides a group of APIs, via which developers can monitor and manage system resources and configure the running environment. A web UI is also provided, through which users can directly interact with the applications.

The design goals of ParaDrop include three aspects: multi-tenancy, efficient resource utilization, and dynamic application management. To achieve these goals, container technology is applied to manage multi-tenant resources separately. As the resources of edge devices are very limited, compared with the virtual machine, the container consumes fewer resources and is more suitable for delay-sensitive and high-I/O applications. Moreover, as applications run in containers, ParaDrop can easily control their startup and revocation.

ParaDrop is mainly used for IoT applications, especially IoT data analysis. Its advantages over traditional cloud systems can be summarized as follows: a) since sensitive data can be processed locally, it protects users' privacy; b) the WiFi access point is only one hop away from the data source, leading to low network delay and stable connections; c) only user-requested data are transmitted to the equipment through the Internet, thus cutting down the total traffic and saving the bandwidth of the backbone network; d) the gateway can obtain the location information of edge devices through radio signals (e.g., the distance between devices, and the location of a specific device), which facilitates location-aware services; e) when edge devices cannot connect to the Internet, the edge service can still work.
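The cloud-driven lifecycle described above (deploy, start, delete) can be sketched with a toy control plane for an access point. This is purely illustrative: the class and method names are assumptions, and a real deployment would create an isolated container for each tenant's application rather than tracking a state string.

```python
class EdgeAccessPoint:
    """Toy model of a ParaDrop-style access point whose application
    lifecycle is driven by a backend in the cloud."""

    def __init__(self):
        self.apps = {}  # app name -> state ("deployed" | "running")

    def deploy(self, name):
        # Real system: pull the app and create an isolated container.
        self.apps[name] = "deployed"

    def start(self, name):
        if self.apps.get(name) != "deployed":
            raise ValueError(f"{name} must be deployed before starting")
        self.apps[name] = "running"

    def delete(self, name):
        self.apps.pop(name, None)

ap = EdgeAccessPoint()
ap.deploy("camera-analytics")  # backend pushes the app to the AP
ap.start("camera-analytics")   # app now serves nearby devices
print(ap.apps)                 # {'camera-analytics': 'running'}
ap.delete("camera-analytics")
print(ap.apps)                 # {}
```

Keeping the lifecycle explicit like this is what lets the backend enforce multi-tenancy: each tenant's app can be started or revoked independently of the others.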

E. SpanEdge

Stream processing is one important type of application in edge computing, where data is generated by various data sources in different geographical locations and continuously transmitted in the form of streams. Traditionally, all raw data is transmitted over the WAN to the data center server, and stream

processing systems, such as Apache Spark and Flink, are also designed and optimized for one centralized data center. However, this approach cannot effectively handle the huge amount of data generated by the many devices at the edge of the network, and the situation is even worse when applications require low latency and predictability. SpanEdge [13] is a research project of the Royal Institute of Technology in Sweden. It unifies the cloud central node and the near-edge central nodes, reduces network latency over WAN connections, and provides a programming environment that allows programs to run near the data sources. Developers can focus on developing streaming applications without considering where the data sources are located and distributed.

The data centers in SpanEdge are organized into two levels: the cloud data center is the first level, and the edge data centers (such as an operator's cloud, a Cloudlet, or Fog) are the second level. Partial stream processing tasks run on the edge central nodes to reduce latency and boost performance. SpanEdge uses a master-worker architecture (as shown in Fig. 7) with one manager and multiple workers. The manager collects the stream processing requests and assigns tasks to the workers. Workers mainly consist of cluster nodes whose primary responsibility is to execute tasks. There are two types of workers: hub-workers (first level) and spoke-workers (second level). The network transmission overhead and latency are related to the geographical location and the network connection status between workers. Communication in SpanEdge is also divided into system management communication (worker-manager) and data transmission communication (worker-worker). System management communication aims to schedule tasks between managers and workers, and data transmission communication takes care of the data flow in each task. Each worker has an agent that handles system management operations, such as sending and receiving management information, monitoring compute nodes to ensure that they are running normally, and periodically sending heartbeat messages to the manager to ensure immediate recovery when a task fails.

SpanEdge allows developers to divide tasks into local ones and global ones. Local tasks should run on nodes near the data sources and provide only part of the required data; global tasks are responsible for further processing the results of local tasks and aggregating all results. SpanEdge creates a copy of the local task on each spoke-worker that has all the corresponding data sources. If the data is insufficient, the task is dispatched to a hub-worker. The global task runs on a separate hub-worker, and the scheduler selects the optimal hub-worker based on network delay (i.e., the distance from the spoke-workers in the network topology).
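The local/global split above can be sketched as a two-phase computation: each spoke-worker runs the local task over its own partition of the stream, and a hub-worker runs the global task over the partial results. The function names are illustrative, not SpanEdge's API; the example computes a global mean from per-site partial sums.

```python
def local_task(partition):
    """Runs near a data source: emit a partial count and sum."""
    return {"count": len(partition), "sum": sum(partition)}

def global_task(partials):
    """Runs on a hub-worker: aggregate all partial results."""
    total = sum(p["sum"] for p in partials)
    count = sum(p["count"] for p in partials)
    return total / count if count else 0.0

# Streams observed at three geographically separate spoke-workers.
partitions = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
partials = [local_task(p) for p in partitions]  # stays near each source
print(global_task(partials))                    # global mean: 3.5
```

Only the tiny `{count, sum}` records cross the WAN to the hub-worker, which is precisely how the split reduces latency and WAN traffic relative to shipping the raw streams.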

F. Cloud-Sea Computing Systems

The Cloud-Sea Computing Systems project [14] is a main research thrust of the Next Generation Information and Communication Technology initiative (the NICT initiative), a 10-year strategic priority research initiative launched by the Chinese Academy of Sciences in 2012. The NICT initiative aims to address three major technology challenges in the coming Zettabyte era: i) improving the performance per

Page 6: A Survey on Edge Computing Systems and ToolsIn the post-Cloud era, the proliferation of Internet of Things (IoT) and the popularization of 4G/5G, gradually changes the public’s habit

6

Sea Zone

Seaport

SeaHTTP

Sea Zone

SeaportSeaHTTP

PB-ScaleSevers

EB-ScaleBillion-Thread

Severs

TP~PB-ScaleCDN/CGN

1000s

Millions

100Ks

HTTP 2.0+

HTTP 1.1HTTP 2.0

Sea-Side FunctionsSensing, Interaction, Local Processing

Cloud-Side FunctionsAggregation, Request-Response,Big Data

Trillions,KB~GB

Billions,TB

Fig. 8. Cloud-Sea Computing Model.

watt by 1000 times, ii) supporting more applications fromthe human-cyber-physical ternary computing, and iii) enablingtransformative innovations in devices, systems and applica-tions, while without polluting beneficial IT ecosystems.

In the cloud-sea computing system, "cloud" refers to the datacenters and "sea" refers to the terminal side (the client devices, e.g., human-facing and physical-world-facing subsystems). The design of the project can be depicted at three levels: the overall systems architecture level, the datacenter server and storage system level, and the processor chip level. The project contains four research components: a computing model called REST 2.0, which extends the representational state transfer (REST) [15] architectural style of Web computing to cloud-sea computing; a three-tier storage system architecture capable of managing ZBs of data; a billion-thread datacenter server with high energy efficiency; and an elastic processor aiming at an energy efficiency of one trillion operations per second per watt.

As shown in Fig. 8, the cloud-sea computing model includes sea-side functions and cloud-side functions. The sea zone is expanded from the traditional cloud client, e.g., a home, an office, or a factory manufacturing pipeline. There can be multiple client devices inside a sea zone, and each device can be human-facing or physical-world-facing. In a sea zone, there is a special device (like a home datacenter or a smart TV set) designated as the seaport, which serves three purposes: i) a gateway interfacing the sea zone to the cloud, ii) a gathering point of information and functionalities inside the sea zone, and iii) a shield protecting the security and privacy of the sea zone. A device inside a sea zone does not communicate with the cloud directly, but through the seaport, either implicitly or explicitly. SeaHTTP, a variant based on HTTP 2.0, is the widely used protocol in sea zones of the cloud-sea system to connect with the cloud.
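The seaport's mediating role (gateway, gathering point, shield) can be illustrated with a minimal conceptual sketch. The `Seaport` class and its methods are invented for illustration; the real system communicates over SeaHTTP, which is not modeled here.

```python
# Conceptual sketch only: in the cloud-sea model a device never talks
# to the cloud directly; the seaport forwards on its behalf, gathers
# zone-local information, and shields device identity from the cloud.

class Seaport:
    def __init__(self, zone):
        self.zone = zone
        self.log = []   # gathering point for zone-local information

    def forward(self, device_id, payload):
        # Record locally (gathering point), then strip the device
        # identity before the message leaves the zone (shield).
        self.log.append((device_id, payload))
        return {"zone": self.zone, "payload": payload}  # sent to the cloud

port = Seaport("home-42")
msg = port.forward("thermostat", {"temp": 21})
```

The cloud only ever sees the zone identifier, while the per-device history stays inside the sea zone.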

The cloud-sea computing model has four distinct features.

1) Ternary computing via sea devices: Human and physical world entities interface and collaborate with the cyberspace through the sea side. For example, users can leverage a smartphone application to read and control a sensor device in a home through an Internet application service.

2) Cooperation with locality: A specific network computing system will partition its functions between the sea side and the cloud side. Sea-side functions include sensing, interaction, and local processing, while cloud-side functions include aggregation, request-response, and big data processing.

Fig. 9. Cachier System [16].

3) Scalability to ZB and trillion devices: This future Net will collectively need to support trillions of sea devices and to handle ZBs of data.

4) Minimal extension to existing ecosystems: The REST 2.0 cloud-sea computing architecture attempts to utilize existing Web computing ecosystems as much as possible.

Overall, the cloud-sea computing model is proposed to migrate cloud computing functions to the sea side, and it focuses more on the devices at the "sea" side and the data at the "cloud" side. Typical edge computing is more generalized and may care about any intermediate computing and network resources between the "sea" and the "cloud". The research in cloud-sea computing (e.g., energy-efficient computing and elastic processor design) is instructive for edge computing.

G. Cachier and Precog

Cachier [16] and Precog [17] are two edge caching systems proposed by researchers from Carnegie Mellon University for image recognition. Recognition applications have strict response time requirements, while the computation is heavy due to the model complexity and dataset size. With edge computing, the computing resources of edge nodes can be leveraged to process the matching, so that the network delay can be reduced. Considering that edge nodes mainly provide services to nearby users, the spatio-temporal characteristics of service requests can be leveraged. From the perspective of caching, the Cachier system proposes that edge nodes can be used as "computable cache devices", which cache recognition results, reduce the matching dataset size, and improve response time. By analyzing the response delay model and the requests' characteristics, the system can dynamically adjust the size of the dataset on the edge nodes according to environmental factors, thus ensuring optimal response time.

As Fig. 9 shows, Cachier consists of the Recognizer Module, the Optimizer Module, and the Offline Analysis Module. The Recognizer Module is responsible for analyzing and matching the received images according to the cached training data and model. If there is a match, the result is directly returned to the user; otherwise the image is transmitted to the cloud for recognition. The distribution estimator in the Optimizer Module uses maximum a posteriori estimation to predict the request distribution. Given the classification algorithm and training dataset, the cache searching delay and the cache accuracy can also be calculated by the Offline Analysis Module. Finally, the Profiler sub-module is responsible for estimating the network delay and cloud latency incurred by cache misses. It measures and records the delay under the corresponding distance in real time, and uses a moving average filter to remove noisy data. By feeding such information into the delay expectation model, Cachier is able to calculate the optimal cache size, and then adjusts the cache on the edge nodes accordingly.

Fig. 10. Precog System [17].
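The cache-sizing idea can be sketched with a toy delay-expectation model: the edge lookup delay grows with the cached dataset size, while the miss penalty shrinks as more popular classes are cached, so there is a sweet spot in between. The function name, the linear lookup-cost model, and all numbers below are assumptions for illustration, not Cachier's actual model.

```python
# Illustrative only (not Cachier's code): pick the cache size that
# minimizes expected response time, trading lookup delay against
# the cloud round trip paid on a miss. `popularity` is the request
# probability of each class, sorted in descending order.

def optimal_cache_size(popularity, lookup_delay_per_item, cloud_delay):
    best_k, best_cost = 0, cloud_delay  # k = 0: every request misses
    hit_rate = 0.0
    for k, p in enumerate(popularity, start=1):
        hit_rate += p
        expected = k * lookup_delay_per_item + (1 - hit_rate) * cloud_delay
        if expected < best_cost:
            best_k, best_cost = k, expected
    return best_k, best_cost

k, cost = optimal_cache_size(
    popularity=[0.5, 0.3, 0.1, 0.05, 0.05],
    lookup_delay_per_item=5.0,   # ms per cached class (assumed)
    cloud_delay=40.0,            # ms round trip on a miss (assumed)
)
```

With these assumed numbers, caching only the two most popular classes minimizes expected latency: beyond that, the growing lookup cost outweighs the shrinking miss penalty.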

Precog is an extension of Cachier. The cached data reside not only on the edge nodes but also on the end devices, where they are used for selection calculation in order to reduce the image migration between the edge nodes and the cloud. Based on the prediction model, Precog prefetches some of the trained classifiers, uses them to recognize images, and cooperates with the edge nodes to complete the tasks efficiently. Precog pushes computing capability to the end device and leverages the locality and selectivity of user requests to reduce the last-mile delay. For the end device, if the cache system used the same cache replacement policy as the edge node applies, it would lead to a large number of forced misses: a single device only has the request information of its own user, and usually the user will not identify the same object multiple times in the near future. Therefore, Precog constructs a Markov model using the request information from the edge node to describe the relationship among the identified objects, and then predicts potential future requests. Precog improves the delay expectation model by considering the network state, device capabilities, and the prediction information from edge nodes. The system architecture of Precog is illustrated in Fig. 10. It is mainly composed of the feature extraction module, the offline analysis module, the optimizer module, and the profiler module. The edge node is mainly responsible for the construction of the Markov model. Based on the offline analysis results, the Markov model prediction results, and the network information provided by the profiler module, the optimizer module determines the number of features to be extracted as well as the data to be cached on the end device.

Fig. 11. FocusStack System [18].
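The Markov-model prefetching idea can be sketched as follows: count transitions between consecutively recognized objects in the edge node's request history, then prefetch the classifiers for the most likely successors of the current object. The API here (`build_markov`, `prefetch`) is hypothetical, a first-order sketch of the idea rather than Precog's implementation.

```python
# Illustrative sketch: a first-order Markov model over recognized
# objects, built from edge-node request history, used to prefetch
# the classifiers most likely to be needed next.
from collections import Counter, defaultdict

def build_markov(history):
    # counts[prev][next] = number of observed prev -> next transitions
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def prefetch(counts, current, top_n=2):
    """Return the top_n most likely objects to be requested next."""
    return [obj for obj, _ in counts[current].most_common(top_n)]

# Request history gathered at the edge node (toy data).
history = ["door", "painting", "door", "statue", "door", "painting"]
model = build_markov(history)
```

After seeing "door", the model would prefetch the classifiers for "painting" (seen twice after "door") and then "statue".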

H. FocusStack

FocusStack [18] is developed by AT&T Labs, and it supports the deployment of complex applications on a variety of potential IoT edge devices. Although the resources of edge devices, such as computing, power, and connectivity, are limited, these devices have the nature of mobility. Hence, it is very important to have a system that can discover and organize available edge resources. FocusStack is such a system: it can discover a set of edge devices with sufficient resources, deploy applications, and run them accordingly. Thus, developers can focus more on the design of applications than on how to find and track edge resources.

FocusStack consists of two parts (as shown in Fig. 11): i) the Geocast system, which provides location-based situational awareness (LSA) information, and ii) the OpenStack extension (OSE), which is responsible for deploying, executing, and managing the containers on the edge devices. FocusStack builds a hybrid cloud of edge devices (containers) and datacenter servers (virtual machines). When a user initiates a cloud operation through the FocusStack API (such as instantiating a container), the LSA subsystem analyzes the scope of the request based on the Geocast route, sends a resource list of geographic locations to the target area, and waits for the responses from (online) edge devices that can satisfy the requirements. Then, the selected edge device runs the corresponding OpenStack operations with the help of the conductor module.
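FocusStack's two-step flow, first scoping the request geographically (LSA/geocast), then filtering candidates by capability before deployment (conductor), can be sketched as below. The device fields, distance model, and function names are invented for illustration; real geocast uses geographic routing rather than planar distance.

```python
# Hypothetical sketch of FocusStack-style device selection:
# step 1 scopes the request to an area of interest (LSA subsystem),
# step 2 checks capability/policy constraints (conductor).
import math

def in_area(device, center, radius_km):
    # Simplified planar distance; a stand-in for geocast scoping.
    dx, dy = device["x"] - center[0], device["y"] - center[1]
    return math.hypot(dx, dy) <= radius_km

def select_devices(devices, center, radius_km, min_cpu):
    # Step 1: only online devices inside the area of interest respond.
    candidates = [d for d in devices
                  if d["online"] and in_area(d, center, radius_km)]
    # Step 2: the conductor keeps devices capable of running the task.
    return [d["id"] for d in candidates if d["cpu_free"] >= min_cpu]

devices = [
    {"id": "edge-1", "x": 0.0, "y": 1.0, "online": True,  "cpu_free": 2},
    {"id": "edge-2", "x": 5.0, "y": 5.0, "online": True,  "cpu_free": 4},
    {"id": "edge-3", "x": 0.5, "y": 0.5, "online": False, "cpu_free": 8},
]
chosen = select_devices(devices, center=(0.0, 0.0), radius_km=2.0, min_cpu=2)
```

Only `edge-1` survives both filters: `edge-2` is outside the area of interest and `edge-3` is offline.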

The situational awareness subsystem enables a group of devices to monitor the survival status of each other, and each device can update the sensing information of other devices. The geo-processing projection service is primarily intended to pass requests and reply messages between areas of interest, send out device control information (e.g., for drones), and broadcast location-based information. The service is based on a two-layer network, and data can be transmitted over a dedicated WiFi network or the Internet. The sender transmits the data packet to the geo-router server, which tracks the location and metadata of each edge device, and then the data packet is sent to each device (including edge devices and a cloud device running a SAMonitor instance) in the specified area based on the address. Location and connection statuses are maintained by the geo-routing database (GRDB). GCLib is a software framework that provides data acquisition services, and it runs on edge devices and cloud applications that use SAMonitor. Any application or service that needs the state of the devices in an area requires a SAMonitor component to communicate with the LSA subsystem. The application server makes a request to SAMonitor through the FocusStack API, and then SAMonitor builds a real-time graph of the current area and returns a list of available edge devices. The regional graph is sent to the conductor in the OSE, which is responsible for checking whether the devices are capable of running the tasks, whether the pre-defined policy rules are met, and so on. After that, the available edge device list is submitted to the application server and the server selects the devices to be used. OSE manages and deploys the program through the OpenStack Nova API. The edge devices run a custom version of Nova Compute to interact with the local Docker to manage the containers. The containers on the edge devices support all OpenStack services, including access to virtual networks and application-based granularity configuration.

Fig. 12. AirBox Architecture [19].

I. AirBox

AirBox [19] is a secure, lightweight and flexible edge function system developed by the Georgia Institute of Technology. It supports fast and flexible edge function loading and provides service security and user privacy protection. An Edge Function (EF) in AirBox is defined as a service that can be loaded on an edge node, and the software stack that supports the edge functions is named the Edge Function Platform (EFP).

As shown in Fig. 12, AirBox consists of two parts: the AB console and the AB provisioner. The back-end service manager deploys and manages the EFs on the edge nodes through the AB console. The AB provisioner, which runs at the edge, is responsible for provisioning dynamic EFs seamlessly. The edge functions are implemented through system-level containers with minimal constraints on developers. Security is enhanced by using hardware security mechanisms like Intel SGX. AirBox provides centrally controlled backend services for discovering edge nodes and registering them. The AB console is a web-based management system, which activates the Docker start-up process on the edge node with the AB provisioner.

To ensure the security of AirBox, an EF consists of a trusted part and an untrusted part. The untrusted part is responsible for all network and storage interactions. Based on OpenSGX APIs, AirBox provides four extensible APIs to implement secure communication and storage: Remote Attestation, Remote Authentication, Sealed Storage, and EF Defined Interface. AirBox supports a variety of edge functions such as aggregation, buffering, and caching.
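The trusted/untrusted split behind Sealed Storage can be illustrated conceptually. The sketch below simulates sealing with an HMAC integrity tag; real AirBox relies on Intel SGX through OpenSGX APIs, so the key handling, function names, and "enclave" boundary here are all stand-ins for illustration.

```python
# Conceptual sketch only: an AirBox-style EF split into a trusted part
# (sealing/unsealing, which SGX would run inside an enclave) and an
# untrusted part (storage I/O, which could tamper with the data).
import hashlib
import hmac

SEAL_KEY = b"enclave-sealing-key"  # in SGX this key never leaves the enclave

def trusted_seal(data: bytes):
    # Trusted part: attach an integrity tag before handing the data
    # to untrusted storage.
    tag = hmac.new(SEAL_KEY, data, hashlib.sha256).digest()
    return data, tag

def untrusted_store_and_load(record):
    # Untrusted part: persists and returns bytes; it is not trusted.
    return record

def trusted_unseal(data: bytes, tag: bytes) -> bytes:
    # Trusted part: reject anything the untrusted side modified.
    expected = hmac.new(SEAL_KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("sealed data was tampered with")
    return data

sealed = trusted_seal(b"aggregated sensor batch")
restored = trusted_unseal(*untrusted_store_and_load(sealed))
```

Any modification by the untrusted side invalidates the tag, so tampering is detected when the trusted part unseals the data.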

J. Firework

Wayne State University's MIST Lab proposes a programming model for edge computing: Firework [20]. In the Firework model, all services/functions are represented as a data view of datasets and functions, which can be the results of processing a node's own data or of secondary processing of other services/datasets. A system consisting of nodes that apply the Firework model is called a Firework system. In a Firework system, instead of dividing nodes into edge nodes and cloud ones, all are considered Firework nodes. Through the mutual invocation of the Firework nodes, the data can be distributed and processed on each node. Along the path of data transmission, the edge nodes perform a series of computations upon the data, thus forming a "computational flow". The beginning of the computational flow could be user terminals, edge servers close to the user, edge servers close to the cloud, or cloud nodes.

In the Firework system, the data processing service is split into multiple sub-services, and the scheduling of the data processing is performed at the following two layers.

1) Same sub-service layer scheduling: A Firework node can cooperate with others providing the same sub-services in the surrounding area to achieve optimal response time. For idle Firework nodes, the system can schedule sub-service programs onto them, such that a cluster providing a specific sub-service is formed dynamically to complete the service faster.

2) Computational flow layer scheduling: Firework nodes along the computational flow can cooperate with each other and dynamically schedule execution nodes to achieve an optimal solution. For example, depending on the state of the network, the system can choose Firework nodes for service provisioning based on the nodes' locations (e.g., selecting those closest to the users).
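The two scheduling layers can be sketched together: recruiting idle nodes into a dynamic cluster for one sub-service, then picking the cluster member closest to the user for execution. This is illustrative only; the function names, node fields, and one-dimensional "position" are invented, not Firework's scheduler.

```python
# Illustrative sketch of Firework's two scheduling layers.

def recruit_cluster(nodes, sub_service, max_size):
    # Same sub-service layer: nodes already offering the sub-service
    # come first, then idle nodes are recruited and get the program.
    offering = [n for n in nodes if sub_service in n["services"]]
    idle = [n for n in nodes if not n["services"]]
    cluster = (offering + idle)[:max_size]
    for n in cluster:
        n["services"].add(sub_service)  # deploy the sub-service program
    return [n["id"] for n in cluster]

def pick_executor(nodes, cluster_ids, user_pos):
    # Computational flow layer: choose the cluster node closest to
    # the user (distance is 1-D here purely for illustration).
    members = [n for n in nodes if n["id"] in cluster_ids]
    return min(members, key=lambda n: abs(n["pos"] - user_pos))["id"]

nodes = [
    {"id": "n1", "services": {"ocr"}, "pos": 10},
    {"id": "n2", "services": set(),   "pos": 3},
    {"id": "n3", "services": {"asr"}, "pos": 1},
]
cluster = recruit_cluster(nodes, "ocr", max_size=2)
executor = pick_executor(nodes, cluster, user_pos=2)
```

The idle node `n2` is recruited into the "ocr" cluster and, being closest to the user, is chosen as the executor.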

As shown in Fig. 13, Firework divides the nodes into computing nodes and managers, depending on the type of service those nodes provide. In general, each node with the Firework model has three modules: the job management module, the actuator management module, and the service management module.

1) Service management module: This module is designed for the management of data views. It provides interfaces to update the data view, as well as relevant programs for data processing.

2) Job management module: This module is responsible for the scheduling, monitoring, and evaluation of task executions. When the local computing resources are insufficient, the module can look into the node list and the data view and perform resource re-scheduling at the same sub-service layer. When the sub-service is running, the module can also provide necessary monitoring information and give feedback to other upstream and downstream nodes for flow layer scheduling.

TABLE I
SUMMARY OF EDGE COMPUTING SYSTEMS

Application Scenario | Edge Computing System | End Devices | Edge Nodes | Computation Architecture | Features/Targets
General usage scenario | Cloudlet | Mobile devices | Cloudlet | Hybrid (3-tier) | Lightweight VM migration
General usage scenario | PCloud | Mobile devices | Mobile devices, local server, PC | Hybrid (3-tier) | Resource integration, dynamic allocation
General usage scenario | ParaDrop | IoT devices | Home gateway | Hybrid (3-tier) | Hardware, developer support
General usage scenario | Cachier & Precog | Mobile devices | Mobile devices, local server, PC | Hybrid (3-tier) | Figure recognition, identification
General usage scenario | FocusStack | IoT devices | Router, server | Hybrid (3-tier) | Location-based info, OpenStack extension
General usage scenario | SpanEdge | IoT devices | Local cluster, Cloudlet, Fog | Hybrid (2-tier) | Streaming processing, local/global task
General usage scenario | AirBox | IoT devices | Mobile devices, local server, PC | Hybrid (3-tier) | Security
General usage scenario | CloudPath | Mobile devices | Multi-level data centers | Hybrid (multi-tier) | Path computing
General usage scenario | Firework | Firework.Node | Firework.Node | Two-layer scheduling | Programming model
General usage scenario | Cloud-Sea | Sea | Seaport | Hybrid (3-tier) | Minimal extension, transparency
Vehicular data analytics | OpenVDAP | CAVs | XEdge | Hybrid (2-tier) | General platform
Vehicular data analytics | SafeShareRide | Smartphones and vehicles | Smartphones | Hybrid (2-tier) | In-vehicle security
Smart home | Vigilia | Smart home devices | Hubs | Edge only | Smart home security
Smart home | HomePad | Smart home devices | Routers | Edge only | Smart home security
Video stream analytics | LAVEA | ∼ | ∼ | Edge or cloud | Low latency response
Video stream analytics | VideoEdge | Cameras | Cameras and private clusters | Hybrid (3-tier) | Resource-accuracy tradeoff
Video stream analytics | Video on drones | Autonomous drones | Portable edge computers | Edge only | Bandwidth saving
Virtual reality | MUVR | Smartphones | Individual households | Edge only | Resource utilization efficiency optimization

Fig. 13. An example of a Firework instance that consists of heterogeneous computing platforms [20].

3) Actuator management module: This module is mainly responsible for managing all hardware resources and hosting the execution processes of different tasks. With the help of this module, the device, the running environment, and the upper layer functions can be decoupled, so that the nodes of a Firework system are not limited to a certain type of device, and the data processing environment is not limited to a certain type of computing platform.

K. Other Edge Computing Systems

The edge systems introduced above depict some typical and basic innovations in the exploitation of edge analytics for highly-responsive services. These systems target general cases and lay the foundation for further development. We highlight that, in addition to these efforts, there are many other edge systems tuned for a series of different application scenarios. In Table I, we briefly summarize the general and application-specific edge systems, which leverage different kinds of edge nodes to serve diverse end devices using either a hybrid or edge-only computation architecture.

Compared to both on-board computation and cloud-based computation, edge computing can provide more effective data analytics with lower latency for moving vehicles [21]. In [21], OpenVDAP, an open full-stack edge computing-based platform, is proposed for the data analytics of connected and autonomous vehicles (CAVs). OpenVDAP proposes systematic mechanisms, including varied wireless interfaces, to utilize the heterogeneous computation resources of nearby CAVs, edge nodes, and the cloud. For optimal utilization of the resources, a dynamic scheduling interface is also provided to sense the status of available resources and to offload divided tasks in a distributed way for computation efficiency. SafeShareRide is an edge-based attack detection system addressing in-vehicle security for ridesharing services [22]. Its three detection stages leverage the smartphones of both drivers and passengers as the edge computing platform to collect multimedia information in vehicles. Specifically, the speech recognition and driving behavior detection stages are first carried out independently to capture in-vehicle danger, and the video capture and uploading stage is activated when abnormal keywords or dangerous behaviors are detected, so as to collect videos for cloud-based analysis. By using such an edge-cloud collaborative architecture, SafeShareRide can accurately detect in-vehicle attacks with a low bandwidth demand.
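SafeShareRide's staged escalation, cheap detectors running continuously at the edge and expensive video upload triggered only on a local alarm, can be sketched as follows. The keyword list, threshold, and function names are invented for illustration; the real stages use speech recognition and driving-behavior models, not substring checks.

```python
# Hypothetical sketch of a SafeShareRide-style staged pipeline:
# two lightweight edge stages gate the bandwidth-heavy third stage.

DANGER_KEYWORDS = {"help", "stop the car"}  # illustrative keyword set

def speech_stage(transcript: str) -> bool:
    # Stage 1: keyword spotting on the recognized speech.
    return any(k in transcript for k in DANGER_KEYWORDS)

def driving_stage(accel_readings, threshold=3.0) -> bool:
    # Stage 2: flag abnormal driving from accelerometer magnitudes.
    return any(abs(a) > threshold for a in accel_readings)

def should_upload_video(transcript, accel_readings) -> bool:
    # Stage 3 (video capture and upload for cloud analysis) runs
    # only when an edge-side stage raises an alarm, saving bandwidth.
    return speech_stage(transcript) or driving_stage(accel_readings)

trigger = should_upload_video("please help me", [0.5, 1.2, 0.9])
calm = should_upload_video("nice weather today", [0.2, 0.4])
```

Only the first ride uploads video: its transcript contains an alarm keyword, while the second ride stays entirely at the edge.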

Another scenario where edge computing can play an important role is the management of IoT devices in the smart home environment, where the privacy of the wide range of home devices is a popular topic. In [23], the Vigilia system is proposed to harden smart home systems by restricting the network access of devices. A default access-deny policy and an API-granularity device access mechanism for applications are adopted to enforce access at the network level. Runtime checking implemented in the routers only permits declared communications, thus helping users secure their home devices. Similarly, the HomePad system in [24] also proposes to execute IoT applications at the edge and introduces a privacy-aware hub to mitigate security concerns. HomePad allows users to specify privacy policies to regulate how applications access and process their data. By forcing applications to use explicit information flow, HomePad can use Prolog rules to verify whether applications are able to violate the defined privacy policies at install time.

Edge computing has also been widely used in the analysis of video streams. LAVEA is an edge-based system built for latency-aware video analytics near the end users [25]. In order to minimize the response time, LAVEA formulates an optimization problem to determine which part of the tasks should be offloaded to the edge computer and uses a task queue prioritizer to minimize the makespan. It also proposes several task placement schemes to enable the collaboration of nearby edge nodes, which can further reduce the overall task completion time. VideoEdge is a system that provides the most promising video analytics implementation across a hierarchy of clusters in a city environment [26]. A 3-tier computation architecture is considered, with deployed cameras and private clusters as the edge and remote servers as the cloud. The hierarchical edge architecture is also adopted in [27] and is believed to be promising for processing live video streams at scale. Technically, VideoEdge searches thousands of combinations of computer vision component implementations, knobs, and placements, and finds a configuration that balances the accuracy and resource demands using an efficient heuristic. In [28], a video analytics system for autonomous drones is proposed, where edge computing is introduced to save bandwidth.

Portable edge computers are required here to support dynamic transportation during a mission. Four different video transmission strategies are presented to build an adaptive and efficient computer vision pipeline. In addition to the analytics work (e.g., object recognition), the edge nodes also train filters for the drones to avoid the uploading of uninteresting video frames.
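The core offloading trade-off that systems like LAVEA optimize, paying an upload cost to run on a faster machine versus computing locally, can be sketched as a per-task comparison. This is a deliberate simplification with assumed parameters and invented names, not LAVEA's actual optimization formulation.

```python
# Illustrative per-task offloading decision: offload when the upload
# time plus edge compute time beats local compute time.

def offload_decision(tasks, bandwidth_mbps, local_speed, edge_speed):
    """tasks: list of (name, input_mb, compute_units).
    Returns a dict mapping task name to 'local' or 'edge'."""
    plan = {}
    for name, input_mb, compute_units in tasks:
        t_local = compute_units / local_speed
        t_edge = input_mb * 8 / bandwidth_mbps + compute_units / edge_speed
        plan[name] = "edge" if t_edge < t_local else "local"
    return plan

plan = offload_decision(
    tasks=[("decode", 10.0, 0.5),   # large input, cheap compute
           ("detect", 5.0, 40.0)],  # small input, heavy compute
    bandwidth_mbps=80.0,  # assumed uplink
    local_speed=1.0,      # relative compute speeds (assumed)
    edge_speed=10.0,
)
```

Decoding stays local (shipping 10 MB costs more than the cheap computation), while detection is offloaded because its heavy compute amortizes the small upload.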

In order to provide flexible virtual reality (VR) on untethered smartphones, edge computing can be useful to move the heavy workload from smartphones to a nearby edge cloud [29]. However, the rendering task of the panoramic VR frames (i.e., 2 GB per second) would saturate individual households serving as common in-house edge nodes. In [29], the MUVR system is designed to support multi-user VR with efficient bandwidth and computation resource utilization. MUVR is built on a basic observation that the VR frames being rendered and transmitted to different users are highly redundant. For computation efficiency, MUVR maintains a two-level hierarchical cache for the invariant background at the edge and the user end to reuse frames whenever necessary. Meanwhile, MUVR transmits only a part of all frames in full and delivers the distinct portion of the remaining frames to further reduce the transmission costs.
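The frame-reuse observation can be illustrated with a minimal edge-side cache: a frame rendered for one (scene, viewpoint) pair is reused for identical requests from other users instead of being re-rendered. The class and counters are invented for the sketch and omit MUVR's second (user-end) cache level and delta encoding.

```python
# Conceptual sketch of MUVR-style frame reuse at the edge.

class EdgeFrameCache:
    def __init__(self):
        self.cache = {}      # (scene, viewpoint) -> rendered frame
        self.rendered = 0    # counts expensive full renders

    def render(self, scene, viewpoint):
        # Stand-in for the expensive panoramic rendering step.
        self.rendered += 1
        return f"frame({scene}@{viewpoint})"

    def get_frame(self, scene, viewpoint):
        key = (scene, viewpoint)
        if key not in self.cache:
            self.cache[key] = self.render(scene, viewpoint)
        return self.cache[key]

edge = EdgeFrameCache()
# Two users request the same background: only one render happens.
f1 = edge.get_frame("lobby", (0, 0))
f2 = edge.get_frame("lobby", (0, 0))
```

Because the second request hits the cache, the household edge node pays the rendering cost once for both users.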

III. OPEN SOURCE EDGE COMPUTING PROJECTS

Besides the edge computing systems designed for specific purposes, some open source edge computing projects have also been launched recently. The Linux Foundation published two projects, EdgeX Foundry in 2017 and Akraino Edge Stack [30] in 2018. The Open Network Foundation (ONF) launched a project named CORD (Central Office Re-architected as a Datacenter) [31]. The Apache Software Foundation published Apache Edgent. Microsoft published Azure IoT Edge in 2017 and open-sourced it in 2018.

Among them, CORD and Akraino Edge Stack focus on providing edge cloud services; EdgeX Foundry and Apache Edgent focus on IoT and aim to solve the problems that hinder practical applications of edge computing in IoT; Azure IoT Edge provides hybrid cloud-edge analytics, which helps to migrate cloud solutions to IoT devices.

A. CORD

CORD is an open source project of ONF initiated by AT&T and is designed for network operators. The current network infrastructure is built with closed proprietary integrated systems provided by network equipment providers. Due to this closed property, the network capability cannot scale up and down dynamically, and the lack of flexibility results in inefficient utilization of the computing and networking resources. CORD plans to reconstruct the edge network infrastructure to build datacenters with SDN [32], NFV [33] and cloud technologies. It attempts to slice the computing, storage and network resources so that these datacenters can act as clouds at the edge, providing agile services for end users.

CORD is an integrated system built from commodity hardware and open source software. Fig. 14 shows the hardware architecture of CORD [31]. It uses commodity servers that are interconnected by a fabric of white-box switches. A white-box switch [34] is a component of an SDN switch, which is responsible for regulating the flow of data according to the SDN controller. The commodity servers provide computing and storage resources, and the fabric of switches is used to build the network. This switching fabric is organized in a spine-leaf topology [35], a kind of flat network topology that adds a horizontal network structure parallel to the trunk longitudinal network structure, and then adds a corresponding switching network on the horizontal structure. Compared to the traditional three-tier network topology, it can provide scalable throughput for greater east-west network traffic, that is, traffic that network diagrams usually depict horizontally, such as local area network (LAN) traffic. In addition, specialized access hardware is required to connect subscribers. The subscribers can be divided into three categories for different use cases: mobile subscribers, enterprise subscribers and residential subscribers. Each category demands different access hardware due to different access technologies.

Fig. 14. Hardware Architecture of CORD.

Fig. 15. Software Architecture of CORD.

In terms of software, Fig. 15 shows the software architecture of CORD [31]. Based on the servers and the fabric of switches, OpenStack provides the IaaS capability for CORD: it manages the compute, storage and networking resources, as well as creating virtual machines and virtual networks. Docker is used to run services in containers for isolation. ONOS (Open Network Operating System) is a

network operating system which is used to manage network components like the switching fabric and provide communication services to end-users. XOS provides a control plane to assemble and compose services. Other software projects provide component capabilities; for example, vRouter (Virtual Router) provides virtual routing functionality.

The edge of the operator network is a sweet spot for edge computing because it connects customers with operators and is close to customers' applications as data sources. CORD takes edge computing into consideration and moves to support edge computing as a platform to provide edge cloud services (from the released version 4.1). CORD can be deployed in three solutions, M-CORD (Mobile CORD), R-CORD (Residential CORD) and E-CORD (Enterprise CORD), for different use cases. M-CORD focuses on the mobile network, especially the 5G network, and it plans to disaggregate and virtualize cellular network functions to enable services to be created and scaled dynamically. This agility helps to provide multi-access edge services for mobile applications. For use cases like driverless cars or drones, users can rent the edge service to run their edge applications. Similarly, R-CORD and E-CORD are designed to be agile service delivery platforms but for different users, residential and enterprise users respectively.

So far, the deployment of CORD is still under test among network operators, and more research is needed to combine CORD with various edge applications.

B. Akraino Edge Stack

Akraino Edge Stack, initiated by AT&T and now hosted by the Linux Foundation, is a project to develop a holistic solution for edge infrastructure so as to support high-availability edge cloud services [30]. An open source software stack, as the software part of this solution, is developed for network carriers to facilitate optimal networking and workload orchestration for the underlying infrastructure, in order to meet the needs of edge computing such as low latency, high performance, high availability and scalability.

Fig. 16. Akraino Edge Stack's Scope.

To provide a holistic solution, Akraino Edge Stack has a wide scope, from the infrastructure layer to the application layer. Fig. 16 [30] shows this scope with three layers. In the application layer, Akraino Edge Stack wants to create a virtual network function (VNF) ecosystem and calls for edge applications. The second layer consists of middleware that supports the applications in the top layer. In this layer, Akraino Edge Stack plans to develop an Edge API and framework for interoperability with third-party edge projects such as EdgeX Foundry. At the bottom layer, Akraino Edge Stack intends to develop an open source software stack for the edge infrastructure in collaboration with upstream communities. It interfaces with and maximizes the use of existing open source projects such as Kubernetes and OpenStack. Akraino Edge Stack provides different edge use cases with blueprints, which are declarative configurations of the entire stack, including hardware, software, point of delivery, etc. [30]. The application domains of these blueprints start from the telco industry and are expected to extend to more domains like enterprise and industrial IoT. Akraino Edge Stack has already put forward several blueprints such as Micro-MEC and Edge Media Processing. Micro-MEC intends to develop a new service infrastructure for smart cities, which enables the development of services for the smart city and provides high data capacity for citizens. Edge Media Processing intends to develop a network cloud to enable real-time media processing and edge media AI analytics with low latency.

As an emerging project, Akraino Edge Stack has been in execution since August 2018; more research needs to be done as the project develops.

C. EdgeX Foundry

EdgeX Foundry is a standardized interoperability framework for IoT edge computing, whose sweet spots are edge nodes such as gateways, hubs, and routers [36]. It can connect with various sensors and devices via different protocols, manage them and collect data from them, and export the data to a local application at the edge or to the cloud for further processing. EdgeX is designed to be agnostic to hardware, CPU, operating system, and application environment. It can run natively or in Docker containers.

Fig. 17 [36] shows the architecture of EdgeX Foundry. The "south side" at the bottom of the figure includes all IoT objects, and the edge of the network that communicates directly with those devices, sensors, actuators and other IoT objects to collect data from them. Correspondingly, the "north side" at the top of the figure includes the cloud (or enterprise system), where data are collected, stored, aggregated, analyzed, and turned into information, together with the part of the network that communicates with the cloud. EdgeX Foundry connects these two sides regardless of differences in hardware, software and network. EdgeX tries to unify the manipulation of the south-side IoT objects behind a common API, so that those objects can be manipulated in the same way by north-side applications.

EdgeX uses a Device Profile to describe a south-side object. A Device Profile defines the type of the object, the format of the data that the object provides, the format of the data to be stored in EdgeX, and the commands used to manipulate this object. Each Device Profile involves a Device Service, a service that converts the data format and translates the commands into instructions that the IoT objects know how to execute. EdgeX provides an SDK for developers to create Device Services, so that any combination of device interfaces and protocols can be supported by programming.
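As an illustration, a Device Profile is declarative metadata. The sketch below shows the general shape such a profile might take for a hypothetical temperature sensor; the field names are simplified and the device is invented, so this should be read as an approximation of the idea rather than a verbatim EdgeX schema:

```yaml
# Illustrative sketch only -- follows the general shape of an EdgeX
# device profile, with a hypothetical device and simplified fields.
name: "Room-Temperature-Sensor"
manufacturer: "ExampleCorp"
model: "TS-100"
labels:
  - "temperature"
deviceResources:
  - name: "Temperature"
    description: "Ambient temperature reading"
    properties:
      value: { type: "Float32", readWrite: "R" }
      units: { type: "String", defaultValue: "degrees Celsius" }
coreCommands:
  - name: "GetTemperature"
    get:
      path: "/api/v1/device/{deviceId}/Temperature"
      responses:
        - code: "200"
          expectedValues: ["Temperature"]
```

The matching Device Service would use this metadata to parse raw readings into the declared value type and to map the `GetTemperature` command onto the sensor's native protocol.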

EdgeX consists of a collection of microservices, which allows services to scale up and down based on device capability. These microservices can be grouped into four service layers and two underlying augmenting system services, as depicted in Fig. 17. The four service layers are the Device Services Layer, Core Services Layer, Supporting Services Layer and Export Services Layer; the two underlying augmenting system services are System Management and Security. Each of the six layers consists of several components, and all components use a common RESTful API for configuration.

1) Device Services Layer: This layer consists of Device Services. According to the Device Profiles, the Device Services Layer converts the format of the data and sends it to the Core Services Layer, and translates the command requests coming from the Core Services Layer.

2) Core Services Layer: This layer consists of four components: Core Data, Command, Metadata, and Registry & Configuration. Core Data is a persistence repository as well as a management service; it stores and manages the data collected from the south-side objects. Command is a service that offers the API for command requests from the north side to Device Services. Metadata is a repository and management service for metadata about IoT objects; for example, the Device Profiles are uploaded to and stored in Metadata. Registry & Configuration provides centralized management of configuration and operating parameters for the other microservices.

3) Supporting Services Layer: This layer is designed to provide edge analytics and intelligence [33]. Currently the Rules Engine, Alerting and Notifications, Scheduling and Logging microservices are implemented. A target range of data can be set as a rule to trigger a specific device actuation, and the Rules Engine realizes the rule by monitoring the incoming data. Alerting and Notifications can send notifications or alerts to another system or to a person by email, REST callback or other methods when an urgent actuation or a service malfunction happens. The Scheduling module can set up a timer to regularly clean up stale data. Logging is used to record the running information of EdgeX.
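The range-triggered actuation described above can be sketched as a toy rules engine. The class and method names below are our own illustration, not EdgeX's actual Rules Engine (which is configured through its API rather than programmed this way):

```python
class RulesEngine:
    """Toy range-rule engine: fires an action when a reading leaves its range."""

    def __init__(self):
        self.rules = []  # each rule: (device, low, high, action)

    def add_rule(self, device, low, high, action):
        self.rules.append((device, low, high, action))

    def on_data(self, device, value):
        """Check one incoming reading against all registered rules."""
        fired = []
        for dev, low, high, action in self.rules:
            if dev == device and not (low <= value <= high):
                action(device, value)
                fired.append(dev)
        return fired

# Example: alert when a thermostat reading leaves the 15-30 degrees C range.
alerts = []
engine = RulesEngine()
engine.add_rule("thermostat-1", 15.0, 30.0,
                lambda dev, v: alerts.append((dev, v)))
engine.on_data("thermostat-1", 22.0)   # in range: no alert
engine.on_data("thermostat-1", 41.5)   # out of range: alert fires
```

In EdgeX the alert action would typically be a REST callback into the Alerting and Notifications service rather than an in-process callback.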

4) Export Services Layer: This layer connects EdgeX with the north side and consists of Client Registration and Export Distribution. Client Registration enables clients, such as a specific cloud or a local application, to register as recipients of data from Core Data. Export Distribution distributes the data to the clients registered in Client Registration.

5) System Management and Security: System Management provides management operations including installation, upgrade, starting, stopping and monitoring, as EdgeX is scalable and can be deployed dynamically. Security is designed to protect the data and commands of the IoT objects connected to EdgeX Foundry.

EdgeX is designed for use cases dealing with multitudes of sensors or devices, such as automated factories, machinery systems and many other IoT scenarios. EdgeX Foundry is now in a rapid upgrading phase, and more features will be added in future releases. An EdgeX UI is in development as a web-based interface to add and manage the devices.

Fig. 17. Architecture of EdgeX Foundry.

Fig. 18. Model of the Edgent Applications.

D. Apache Edgent

Apache Edgent, previously known as Apache Quarks, is currently an Apache Incubator project. It is an open source programming model and lightweight runtime for data analytics, used on small devices such as routers and gateways at the edge. Apache Edgent focuses on data analytics at the edge, aiming to accelerate the development of data analysis.

As a programming model, Edgent provides APIs to build edge applications. Fig. 18 illustrates the model of an Edgent application. Edgent uses a topology as a graph to represent the processing transformations of streams of data, which are abstracted as a TStream class. A connector is used to get streams of data from external entities, such as sensors and devices in the physical world, or to send streams of data to back-end systems like a cloud. The primary API of Edgent is responsible for data analysis: the streams of data can be filtered, split, transformed or processed by other operations in a topology. Edgent uses a provider as a factory to create and execute topologies. To build an Edgent application, users first get a provider, then create a topology, add the processing flow that deals with the streams of data, and finally submit the topology. The supported deployment environments of Edgent are Java 8, Java 7 and Android.
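Edgent's actual API is Java, but the provider/topology/stream pattern just described can be sketched in a few lines of self-contained Python. Everything below is a conceptual mimic of the flow (get a provider, build a topology, submit it), not Edgent code:

```python
class Topology:
    """Toy stand-in for an Edgent topology: a named pipeline of stream stages."""
    def __init__(self, name):
        self.name = name
        self.stages = []  # ("filter"|"map", function) applied in order

    def source(self, iterable):
        self._source = iterable
        return self

    def filter(self, predicate):
        self.stages.append(("filter", predicate))
        return self

    def map(self, fn):
        self.stages.append(("map", fn))
        return self

class DirectProvider:
    """Toy provider: creates topologies and runs them synchronously."""
    def new_topology(self, name):
        return Topology(name)

    def submit(self, topology):
        out = []
        for item in topology._source:
            keep = True
            for kind, fn in topology.stages:
                if kind == "filter":
                    if not fn(item):
                        keep = False
                        break
                else:  # map
                    item = fn(item)
            if keep:
                out.append(item)
        return out

# Usage mirroring the Edgent flow described above.
provider = DirectProvider()
topo = provider.new_topology("sensor-analytics")
topo.source([18.0, 99.5, 21.3]) \
    .filter(lambda t: t < 50) \
    .map(lambda t: round(t * 9 / 5 + 32, 1))
readings = provider.submit(topo)  # plausible Celsius readings, converted to Fahrenheit
```

In real Edgent the submitted topology runs continuously over an unbounded stream and a connector would forward `readings` to a message hub, rather than returning a finite list.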

Edgent provides APIs for sending data to back-end systems, and currently supports MQTT, IBM Watson IoT Platform, Apache Kafka and custom message hubs. Edgent applications analyze the data from sensors and devices, and send only the essential data to the back-end system for further analysis. For IoT use cases, Edgent thus helps reduce the cost of transmitting data and provides local feedback.

Edgent is suitable for IoT use cases such as intelligent transportation and automated factories. In addition, the data in Edgent applications are not limited to sensor readings; they can also be files or logs. Therefore, Edgent can be applied to other use cases as well. For example, it can perform local data analysis when embedded in application servers, where it can analyze error logs without impacting network traffic [37].

E. Azure IoT Edge

Azure IoT Edge, provided by Microsoft Azure as a cloud service provider, tries to move cloud analytics to edge devices. These edge devices can be routers, gateways or other devices that provide computing resources. The programming model of Azure IoT Edge is the same as that of the other Azure IoT services [38] in the cloud, which enables users to move their existing applications from Azure to the edge devices for lower latency. This convenience simplifies the development of edge applications. In addition, Azure services like Azure Functions, Azure Machine Learning and Azure Stream Analytics can be used to deploy complex tasks on the edge devices, such as machine learning, image recognition and other artificial intelligence tasks.

Fig. 19. Diagram of Azure IoT Edge.

Azure IoT Edge consists of three components: IoT Edge modules, the IoT Edge runtime and a cloud-based interface, as depicted in Fig. 19. The first two components run on edge devices, while the last is an interface in the cloud. IoT Edge modules are containerized instances running the customer code or Azure services. The IoT Edge runtime manages these modules. The cloud-based interface is used to monitor and manage the former two components — in other words, to monitor and manage the edge devices.

IoT Edge modules are the units of execution that run specific applications. A module image is a Docker image containing the user code. A module instance, as a Docker container, is a unit of computation running the module image. If the resources on the edge devices are sufficient, these modules can run the same Azure services or custom applications as in the cloud, thanks to the shared programming model. In addition, these modules can be deployed dynamically, as Azure IoT Edge is scalable.

The IoT Edge runtime acts as a manager on the edge devices. It consists of two modules: IoT Edge hub and IoT Edge agent. IoT Edge hub acts as a local proxy for IoT Hub, which is a managed service and central message hub in the cloud. As a message broker, IoT Edge hub helps modules communicate with each other and transports data to IoT Hub. IoT Edge agent is used to deploy and monitor the IoT Edge modules. It receives the deployment information about modules from IoT Hub, instantiates these modules, and ensures they are running; for example, it restarts crashed modules. In addition, it reports the status of the modules to IoT Hub.

The IoT Edge cloud interface is provided for device management. Through this interface, users can create edge applications, send these applications to the devices, and monitor the running status of the devices. This monitoring function is useful for use cases with massive numbers of devices: users can deploy applications to devices on a large scale and monitor all of them.

A simple deployment procedure for an application is as follows: users choose an Azure service or write their own code as an application, build it as an IoT Edge module image, and deploy this module image to the edge device with the help of the IoT Edge cloud interface. The IoT Edge runtime then receives the deployment information, pulls the module image, and instantiates the module instance.
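The deployment information mentioned above is expressed as a JSON deployment manifest that the IoT Edge agent consumes. The trimmed fragment below follows the general shape of such a manifest; the module name and container image are hypothetical, and a real manifest carries additional required sections (e.g., runtime settings and $edgeHub routes):

```json
{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "modules": {
          "myAnalyticsModule": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "myregistry.azurecr.io/my-analytics:1.0",
              "createOptions": "{}"
            }
          }
        }
      }
    }
  }
}
```

The `status` and `restartPolicy` fields correspond to the agent behavior described above: the agent instantiates the module, keeps it running, and restarts it if it crashes.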

Azure IoT Edge has wide application areas. It already has application cases in intelligent manufacturing, irrigation systems, drone management systems and so on. It is worth noting that while Azure IoT Edge itself is open source, the Azure services such as Azure Functions, Azure Machine Learning and Azure Stream Analytics are charged.

F. Comparative Study

We summarize the features of the above open source edge computing systems and compare them from different aspects in Table II, including the main purpose of each system, application area, deployment, target user, virtualization technology, system characteristics, limitations, scalability and mobility. We believe such comparisons give a better understanding of current open source edge computing systems.

1) Main purpose: The main purpose shows the target problem that a system tries to fix, and it is a key factor in choosing a suitable system to run edge applications. As an interoperability framework, EdgeX Foundry aims to communicate with any sensor or device in IoT, an ability that is necessary for edge applications drawing data from various sensors and devices. Azure IoT Edge offers an efficient solution for moving existing applications from cloud to edge, and for developing edge applications in the same way as cloud applications. Apache Edgent helps accelerate the development of data analysis in IoT use cases. CORD aims to reconstruct the current edge network infrastructure into datacenters so as to provide agile network services for end-user customers; from the view of edge computing, CORD provides multi-access edge services. Akraino Edge Stack provides an open source software stack to support high-availability edge clouds.

2) Application area: EdgeX Foundry and Apache Edgent both focus on the IoT edge; EdgeX Foundry is geared toward communication with various sensors and devices, while Edgent is geared toward data analysis. They are suitable for intelligent manufacturing, intelligent transportation and smart cities, where various sensors and devices generate data all the time. Azure IoT Edge can be thought of as an extension of the Azure cloud. It has an extensive application area but depends on the compute resources of the edge devices. Besides, with the help of Azure services it is very convenient to deploy artificial intelligence edge applications, such as machine learning and image recognition, to Azure IoT Edge. CORD and Akraino Edge Stack support edge cloud services, which place no restriction on the application area. If users' edge devices do not have sufficient computing capability, these two systems are suitable for running resource-intensive and interactive applications in connection with the operator network.

3) Deployment: As for the deployment requirements, EdgeX Foundry, Apache Edgent and Azure IoT Edge are deployed on edge devices such as routers, gateways and switches. Users can deploy EdgeX Foundry by themselves, add or remove microservices dynamically, and run their own edge applications. By contrast, users need the cloud-based interface to deploy Azure IoT Edge and develop their edge applications. CORD and Akraino Edge Stack are designed for network operators, who need fabric switches, access devices, network cards and other related hardware in addition to compute machines. Customers do not need to consider the hardware requirements or the management of that hardware; they simply rent the services provided by the network operators, much like renting a cloud service instead of managing a physical server.

TABLE II
COMPARISON OF OPEN EDGE SYSTEM CHARACTERISTICS

Aspect                    | EdgeX Foundry                        | Azure IoT Edge                   | Apache Edgent                         | CORD                                                                      | Akraino Edge Stack
User access interface     | RESTful API or EdgeX UI              | Web service, command line        | API                                   | API or XOS-GUI                                                            | N/A
OS support                | Various OS                           | Various OS                       | Various OS                            | Ubuntu                                                                    | Linux
Programming framework     | Not provided                         | Java, .NET, C, Python, etc.      | Java                                  | Shell script, Python                                                      | N/A
Main purpose              | Provide interoperability for the IoT edge | Support hybrid cloud-edge analytics | Accelerate the development of data analysis | Transform the edge of the operator network into agile service delivery platforms | Support edge clouds with an open source software stack
Application area          | IoT                                  | Unrestricted                     | IoT                                   | Unrestricted                                                              | Unrestricted
Deployment                | Dynamic                              | Dynamic                          | Static                                | Dynamic                                                                   | Dynamic
Target user               | General users                        | General users                    | General users                         | Network operators                                                         | Network operators
Virtualization technology | Container                            | Container                        | JVM                                   | Virtual machine and container                                             | Virtual machine and container
System characteristics    | A common API for device management   | Powerful Azure services          | APIs for data analytics               | Widespread edge clouds                                                    | Widespread edge clouds
Limitation                | Lack of programmable interface       | Azure services are chargeable    | Limited to data analytics             | Unable to be offline                                                      | Unable to be offline
Scalability               | Scalable                             | Scalable                         | Not scalable                          | Scalable                                                                  | Scalable
Mobility                  | Not supported                        | Not supported                    | Not supported                         | Supported                                                                 | Supported

4) Target user: Though these open source systems all focus on edge computing, their target users are not the same. EdgeX Foundry, Azure IoT Edge and Apache Edgent place no restriction on target users; every developer can deploy them onto local edge devices like gateways, routers and hubs. By contrast, CORD and Akraino Edge Stack are created for network operators, because they focus on edge infrastructure.

5) Virtualization technology: Virtualization technologies are widely used nowadays. Virtual machine technology provides better management and higher utilization of resources, stability, scalability and other advantages. Container technology provides services with isolation and agility at negligible overhead, which makes it usable on edge devices [39]. Using OpenStack and Docker as software components, CORD and Akraino Edge Stack use both of these technologies to support edge clouds. Different edge devices may have different hardware and software environments; for edge systems deployed on edge devices, containers are a good way for services to stay independent of the environment. Therefore, EdgeX Foundry and Azure IoT Edge run as Docker containers, while Edgent applications run on a JVM.

6) System characteristic: System characteristics show the unique features of each system, which may help users develop, deploy or monitor their edge applications; making good use of these characteristics saves considerable workload and time. EdgeX Foundry provides a common API to manage devices, which brings great convenience for deploying and monitoring edge applications at large scale. Azure IoT Edge provides powerful Azure services to accelerate the development of edge applications. Apache Edgent provides a series of functional APIs for data analytics, which lowers the difficulty and reduces the time of developing edge analytic applications. CORD and Akraino Edge Stack provide multi-access edge services on edge clouds: users only need a connection to the operator network and can apply for these services without deploying an edge computing system on edge devices themselves.

7) Limitation: This subsection discusses the limitations of the latest versions of these systems for deploying edge applications. The latest version of EdgeX Foundry does not provide a programmable interface in its architecture for developers to write their own applications; although EdgeX allows custom implementations to be added, doing so demands more workload and time. As for Azure IoT Edge, though it is open source and free, the Azure services are chargeable as commercial software. Apache Edgent is lightweight but focuses only on data analytics. Finally, CORD and Akraino Edge Stack demand a stable network between the data sources and the operators, because the edge applications run at the edge of the operator network rather than on local devices.

8) Scalability: Increasing numbers of applications at the edge make the network architecture more complex and application management more difficult, so scalability is a major concern in edge computing. Among these edge computing systems, Azure IoT Edge, CORD and Akraino Edge Stack apply Docker or virtual machine technology to let users scale their applications up or down efficiently by adding or deleting module images. EdgeX Foundry is also a scalable platform that enables users to dynamically add or remove microservices to adapt to actual needs. Apache Edgent, however, is not scalable enough, because every Edgent application is a single Java application whose performance cannot be changed dynamically.

Fig. 20. The choice of open-source tools in different scenarios.

9) Mobility: For EdgeX Foundry, Apache Edgent and Azure IoT Edge, once the applications are executed on particular edge devices, they cannot be dynamically migrated to other devices. CORD and Akraino Edge Stack, deployed in the telecom infrastructure, support mobile edge services through mobile access networks like 4G/5G. The mobility of these systems meets the needs of cases such as unmanned cars and drones.

10) Scenarios: We discuss the choice of open-source tools from the perspective of the following three scenarios, as shown in Fig. 20.

In the first scenario, suppose IoT edge applications run on a local area network (LAN), and the local enterprise system is regarded as the back-end system, with no need for third-party clouds. In this case, EdgeX Foundry or Apache Edgent are favorable because they enable users to build and control their own back-end system without being bound to any specific cloud platform. Furthermore, for managing and controlling edge devices on a large scale, EdgeX Foundry is the better choice thanks to its good device management capability. If data analysis is the priority, Apache Edgent is preferable to EdgeX Foundry: it provides a programming model that accelerates the development of edge analytic applications, and a set of powerful APIs for data processing.

In the second scenario, suppose cloud services are pushed to the edge of the network to improve quality. In this case, Azure IoT Edge provides a convenient way for developers to migrate applications from the cloud to the edge devices, and to leverage third-party high-value services or functions when developing edge systems. Besides, AWS IoT Greengrass and Link IoT Edge, published by Amazon and Alibaba Cloud, are good choices as competing projects. More specifically, Azure IoT Edge provides Azure services such as Azure Functions, Azure Stream Analytics, and Azure Machine Learning; AWS IoT Greengrass can run AWS Lambda functions and machine learning models that are created, trained, and optimized in the cloud; and Link IoT Edge provides Function Compute and other functions. Based on the application requirements, a suitable system can be chosen among them by taking into account the functions they provide.

Fig. 21. The three-layer edge computing paradigm from a power source view: a top cloud layer, a middle edge server layer, and a bottom (battery-powered) device layer.

In the third scenario, suppose users expect to directly leverage third-party services to deploy edge applications, without any local hardware or software system. In this case, edge systems such as CORD and Akraino Edge Stack are suitable, and users can choose between them depending on their application requirements. These systems are deployed by telecom operators, who are also the providers of network services; thus, when edge applications have special network requirements, these systems can satisfy them. For example, edge computing applications on unmanned vehicles or drones need the support of wireless telecom networks (such as 4G/5G), in which case a mobile edge computing service provided by these systems is a good choice.

In addition to the systems described above, there are other emerging open source projects. Device Management Edge, as part of the Mbed IoT Platform published by ARM, is responsible for edge computing and provides the ability to access, manage and control edge devices. KubeEdge, released by Huawei, provides edge nodes with native containerized application orchestration capabilities.

IV. ENHANCING ENERGY EFFICIENCY OF EDGE COMPUTING SYSTEMS

In the design of an edge computing system, energy consumption is always considered one of the major concerns and evaluated as one of the key performance metrics. In this section, we review the energy-efficiency-enhancing mechanisms adopted by state-of-the-art edge computing systems, from the views of the top cloud layer, the middle edge server layer and the bottom device layer, respectively (the three-layer paradigm is illustrated in Fig. 21).

A. At the Top Cloud Layer

For cloud computing, a centralized DC can comprise thousands of servers and thus consume enormous energy [40].


As an alternative to cloud computing, does the edge/fog computing paradigm consume more or less energy? Different points of view have been given: some claim that the decentralized data storage and processing supported by the edge computing architecture are more energy efficient [41], [42], while others show that such distributed content delivery may consume more energy than the centralized way [43].

The authors in [44] give a thorough energy analysis for applications running over a centralized DC (i.e., under the cloud mode) and decentralized nano DCs (i.e., under the fog mode), respectively. The results indicate that the fog mode may have higher energy efficiency, depending on several system design factors (e.g., type of application, type of access network, and ratio of active time); applications that generate and distribute a large amount of data in end-user premises achieve the best energy savings under the fog mode.

Note that the decentralized nano DCs or fog nodes in our context are different from the traditional CDN datacenters [45], [46], [47]. The latter are also designed in a decentralized way, but usually with much more powerful computing/communication/storage capacities.

B. At the Middle Edge Server Layer

At the middle layer of the edge computing paradigm, energy is also regarded as an important aspect, as the edge servers can be deployed in a domestic environment or powered by battery (e.g., a desktop or a portable WiFi router, as shown in Fig. 21). Thus, to provide higher availability, many power management techniques have been applied to limit the energy consumption of edge servers while still ensuring their performance. We review two major strategies used at the edge server layer in recent edge computing systems.

1) Low-power system design and power management: In [48], the tactical cloudlet is presented and its energy consumption when performing VM synthesis is evaluated under different cloudlet provisioning mechanisms. The results show that the largest amounts of energy are consumed by i) VM synthesis, due to the large payload size, and ii) on-demand VM provisioning, due to the long application-ready time. These results lead to a high-energy-efficiency policy: combining cached VMs with cloudlet push for cloudlet provisioning.

A service-oriented architecture for fog/edge computing, Fog Data, is proposed and evaluated in [49]. It is implemented with an embedded computer system and performs data mining and data analytics on the raw data collected from wearable sensors (in telehealth applications). With Fog Data, the data to be transmitted are reduced by orders of magnitude, leading to enormous energy savings. Furthermore, Fog Data has a low-power architecture design and consumes much less energy than even a Raspberry Pi.

In [50], a performance-aware orchestrator for Docker containers, named DockerCap, is developed to meet the power consumption constraints of the edge server (fog node). Following an observe-decide-act loop structure, DockerCap is able to manage container resources at run-time and provide soft-level power capping strategies. The experiments demonstrate that the results obtained with DockerCap are comparable to those of the power capping solution provided by the hardware (Intel RAPL).

An energy-aware edge computer architecture, designed to be portable and usable in fieldwork scenarios, is presented in [51]. Based on this architecture, a high-density cluster prototype is built using compact general-purpose commodity hardware. Power management policies are implemented in the prototype to enable real-time energy awareness. Various experiments show that both the load balancing strategies and the cluster configurations have large impacts on the system's energy consumption and responsiveness.

2) Green-energy powered sustainable computing: Dual energy sources are employed to support a fog computing based system in [52], where solar power is utilized as the primary energy supply of the fog nodes. A comprehensive analytic framework is presented to minimize the long-term cost of energy consumption. Meanwhile, the framework also enables an energy-efficient data offloading mechanism (from fog nodes to the cloud) to help provide a high quality of service.

In [53], a rack-scale green-energy powered edge infrastructure, InSURE (in-situ server system using renewable energy), is implemented for data pre-processing at the edge. InSURE can be powered by standalone (solar/wind) power, with batteries as the energy backup. Meanwhile, an energy buffering mechanism and a joint spatio-temporal power management scheme are applied to enable efficient energy flow control from the power supply to the edge server.

C. At the Bottom Device Layer

As a well-recognized fact, the IoT devices in edge computing usually have strict energy constraints, e.g., limited battery life and energy storage. Thus, it remains a key challenge to power the great number (up to tens of billions) of IoT devices at the edge, especially for resource-intensive applications or services [54]. We review the energy saving strategies adopted at the device layer of the edge computing paradigm. Specifically, we go through three major approaches to achieving high energy efficiency in different edge/fog computing systems.

1) Computation offloading to edge servers or cloud: As a natural idea for solving the energy poverty problem, computation offloading from IoT devices to edge servers or the cloud has long been investigated [55], [56], [57]. It has also been demonstrated that, for some particular applications or services, offloading tasks from IoT devices to more powerful ends can reduce the total energy consumption of the system, since the task execution time on powerful servers or the cloud can be much shortened [58]. Although offloading increases the energy consumption of (wireless) data transmission, the tradeoff favors the offloading option as the computational demand increases [59].
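This tradeoff can be made concrete with a back-of-the-envelope model (our own illustration, not taken from [55]-[59], with invented constants): local energy grows with the task's cycle count, while offload energy is dominated by radio transmission of the input data.

```python
def local_energy(cycles, j_per_gigacycle=0.5):
    """Device-side energy (J): proportional to the task's CPU cycle count.
    The 0.5 J per gigacycle constant is illustrative, not measured."""
    return cycles / 1e9 * j_per_gigacycle

def offload_energy(tx_bits, radio_power_w=1.0, uplink_bps=10e6):
    """Radio energy (J) to upload the task input; remote execution energy,
    billed to the server, is ignored. Constants are illustrative."""
    return radio_power_w * tx_bits / uplink_bps

def should_offload(cycles, tx_bits):
    """Offload when shipping the input costs less energy than computing locally."""
    return offload_energy(tx_bits) < local_energy(cycles)

# A light task with a large input stays local: 8 J radio vs 0.5 J local.
assert should_offload(cycles=1e9, tx_bits=80e6) is False
# A compute-heavy task with a small input is offloaded: 0.8 J radio vs 50 J local.
assert should_offload(cycles=100e9, tx_bits=8e6) is True
```

The crossover matches the qualitative claim above: as computational demand (cycles) grows with the input size held fixed, the balance tips toward offloading.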

Having realized that battery life is the primary bottleneck of handheld mobile devices, the authors in [57] present MAUI, an architecture for mobile code offloading and remote execution. To reduce the energy consumption of smartphone programs, MAUI adopts a fine-grained program partitioning mechanism and minimizes the code changes required at the remote server or cloud. MAUI's ability to reduce energy consumption is validated by various experiments on macro-benchmarks and micro-benchmarks. The results show that MAUI's energy saving for a resource-intensive mobile application is up to one order of magnitude, together with a significant performance improvement.

Like MAUI [57], the authors in [59] design and implement CloneCloud, a system that helps partition mobile application programs and performs strategic offloading for fast and elastic execution at the cloud end. As the major difference from MAUI, CloneCloud involves less programmer help during the whole process and only offloads particular program partitions on demand of execution, which further speeds up program execution. Evaluation shows that CloneCloud can improve the energy efficiency of mobile applications (along with their execution efficiency) by 20 times. Similarly, in [60], by continuously updating software clones in the cloud with a reasonable overhead, the offloading service can lead to a considerable energy reduction at the mobile end. For computation-intensive applications on resource-constrained edge devices, execution usually needs to be offloaded to the cloud. To reduce the response latency of image recognition applications, Precog, introduced in Sec. II-G, is presented. With its on-device recognition caches, Precog greatly reduces the number of images offloaded to the edge server or cloud by predicting and prefetching the future images to be recognized.

2) Collaborative device control and resource management: For energy saving of the massive number of devices at the edge, besides offloading their computational tasks to more powerful ends, there is also great potential in sophisticated collaboration and cooperation among the devices themselves. Particularly, when the remote resources from the edge server or cloud are unavailable, it is critical and non-trivial to complete edge tasks without violating the energy constraint.

PCloud is presented in [10] to enhance the capability of individual mobile devices at the edge. By seamlessly using available resources from nearby devices, PCloud forms a personal cloud to serve end users whenever cloud resources are difficult to access, where device participation is guided in a privacy-preserving manner. The authors show that, by leveraging multiple nearby device resources, PCloud can greatly reduce task execution time as well as energy consumption. For example, in the case study of neighborhood watch with face recognition, the results show a 74% reduction in energy consumption on a PCloud vs. on a single edge device. Similar to PCloud, the concept of mobile device cloud (MDC) is proposed in [61], where computational offloading is also adopted among the mobile devices. It shows that the energy efficiency (gain) is increased by 26% via offloading in MDC. The authors of [62] propose an adaptive method to dynamically discover available nearby resources in heterogeneous networks, and perform automatic transformation between centralized and flooding strategies to save energy.

As the current LTE standard is not optimized to support large-scale simultaneous access of IoT devices, the authors in [63] propose an improved memory replication architecture and protocol, REPLISOM, for computation offloading of the massive IoT devices at the edge. REPLISOM improves the memory replication performance through an LTE-optimized protocol, where Device-to-Device (D2D) communication is applied as an important supporting technology (to pull memory replicas from IoT devices). The total energy consumption of REPLISOM is generally worse than the conventional LTE scenario, as it needs more active devices. However, the energy consumed per device during a single replica transmission is much less. Further evaluation results show that REPLISOM has an energy advantage over conventional LTE scenarios as long as the size of the replica is sufficiently small.

For IoT devices distributed at the edge, the authors in [64] leverage software agents running on the IoT devices to establish an integrated multi-agent system (MAS). By sharing data and information among the mobile agents, edge devices are able to collaborate with each other and improve the system energy efficiency in executing distributed opportunistic applications. Upon an experimental platform with 100 sensor nodes and 20 smartphones as edge devices, the authors show the great potential of data transmission reduction with MAS. This leads to a significant energy saving, from 15% to 66%, under different edge computing scenarios. As another work applying data reduction for energy saving, CAROMM [65] employs a change detection technique (the LWC algorithm) to control the data transmission of IoT devices while maintaining data accuracy.
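The data-reduction idea behind CAROMM can be sketched with a simple threshold-based change detector. The LWC algorithm itself is not reproduced here; the tolerance logic below is an illustrative stand-in: a reading is transmitted only when it deviates enough from the last transmitted value.

```python
# Illustrative change-detection filter in the spirit of CAROMM's data
# reduction (not the actual LWC algorithm): a sensor reading is sent
# upstream only if it differs from the last transmitted value by more
# than a tolerance, trading bounded accuracy loss for fewer radio sends.

def filter_readings(readings, tolerance):
    """Return the subset of readings that would actually be transmitted."""
    sent = []
    last_sent = None
    for r in readings:
        if last_sent is None or abs(r - last_sent) > tolerance:
            sent.append(r)       # significant change: transmit and remember it
            last_sent = r
    return sent

# A slowly drifting temperature trace: most samples are suppressed.
trace = [20.0, 20.1, 20.1, 20.2, 21.5, 21.6, 21.5, 23.0]
transmitted = filter_readings(trace, tolerance=1.0)
# transmitted == [20.0, 21.5, 23.0]; 5 of the 8 transmissions are avoided
```

The tolerance directly trades energy for fidelity: a larger tolerance suppresses more transmissions but lets the upstream view drift further from the true signal.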

V. DEEP LEARNING OPTIMIZATION AT THE EDGE

In the past decades, we have witnessed the burgeoning of machine learning, especially deep learning based applications, which have changed human beings' lives. With complex structures of hierarchical layers to capture features from raw data, deep learning models have shown outstanding performance in novel applications such as machine translation, object detection, and smart question answering systems.

Traditionally, most deep learning based applications are deployed on a remote cloud center, and many systems and tools are designed to run deep learning models efficiently on the cloud. Recently, with the rapid development of edge computing, deep learning functions are being offloaded to the edge, which calls for new techniques to support deep learning models at the edge. This section classifies these technologies into three categories: systems and toolkits, deep learning packages, and hardware.

A. Systems and Toolkits

Building systems to support deep learning at the edge is currently a hot topic for both industry and academia. There are several challenges when offloading state-of-the-art AI techniques to the edge directly, including computing power limitations, data sharing and collaboration, and the mismatch between edge platforms and AI algorithms. To address these challenges, OpenEI is proposed as an Open Framework for Edge Intelligence [66]. OpenEI is a lightweight software platform to equip edges with intelligent processing and data sharing capability. OpenEI consists of three components: a package manager to execute real-time deep learning tasks and train models locally, a model selector to select the most suitable model for different edge hardware, and a library including a RESTful API for data sharing. The goal of OpenEI is that any edge hardware will have intelligent capability after deploying it.
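OpenEI's model selector is described only at a high level in [66]. A minimal sketch of the underlying idea, with entirely hypothetical model profiles and device budgets, could look like the following: among the candidate models that fit the device's memory and latency limits, pick the most accurate one.

```python
# Hypothetical model-selector sketch in the spirit of OpenEI [66].
# All model names and profile numbers are illustrative assumptions.

MODEL_ZOO = [
    # (name, accuracy, memory_mb, latency_ms on a reference edge board)
    ("large_cnn",  0.92, 480, 300),
    ("medium_cnn", 0.88, 120,  90),
    ("tiny_cnn",   0.81,  20,  15),
]

def select_model(memory_budget_mb, latency_budget_ms):
    """Return the highest-accuracy model satisfying both budgets, or None."""
    feasible = [m for m in MODEL_ZOO
                if m[2] <= memory_budget_mb and m[3] <= latency_budget_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m[1])[0]

# A Raspberry-Pi-class device vs. a GPU-equipped edge box:
pi_choice  = select_model(memory_budget_mb=64,  latency_budget_ms=50)   # "tiny_cnn"
box_choice = select_model(memory_budget_mb=512, latency_budget_ms=400)  # "large_cnn"
```

A real selector would profile latency per device rather than reuse one reference number, but the constrained-maximization structure is the same.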

In industry, some top-leading tech giants have published several projects to move deep learning functions from the cloud to the edge. Besides Microsoft's Azure IoT Edge, which has been introduced in Sec. III-E, Amazon and Google also build their services to support deep learning on the edge. Table III summarizes the features of the systems discussed below.

Amazon Web Services (AWS) has published IoT Greengrass ML Inference [67] after IoT Greengrass. AWS IoT Greengrass ML Inference is software to support machine learning inference on local devices. With AWS IoT Greengrass ML Inference, connected IoT devices can run AWS Lambda functions and have the flexibility to execute predictions based on deep learning models created, trained, and optimized in the cloud. AWS IoT Greengrass consists of three software distributions: AWS IoT Greengrass Core, AWS IoT Device SDK, and AWS IoT Greengrass SDK. Greengrass is flexible for users as it includes pre-built TensorFlow, Apache MXNet, and Chainer packages, and it can also work with Caffe2 and Microsoft Cognitive Toolkit.

Cloud IoT Edge [68] extends Google Cloud's data processing and machine learning to edge devices by taking advantage of Google AI products, such as TensorFlow Lite and Edge TPU. Cloud IoT Edge can run on either Android or Linux-based operating systems. It is made up of three components: Edge Connect ensures the connection to the cloud and the updates of software and firmware, Edge ML runs ML inference with TensorFlow Lite, and Edge TPU is specifically designed to run TensorFlow Lite ML models. Cloud IoT Edge can satisfy the real-time requirements of mission-critical IoT applications, as it can take advantage of Google AI products (such as TensorFlow Lite and Edge TPU) and optimize performance collaboratively.

B. Deep Learning Packages

Many deep learning packages have been widely used to deliver deep learning algorithms and are deployed in cloud data centers, including TensorFlow [69], Caffe [70], PyTorch [71], and MXNet [72]. Due to the limitations of computing resources at the edge, the packages designed for the cloud are not suitable for edge devices. Thus, to support data processing with deep learning models at the edge, several edge-based deep learning frameworks and tools have been released. In this section, we introduce TensorFlow Lite, Caffe2, PyTorch, MXNet, CoreML [73], and TensorRT [74], whose features are summarized in Table IV.

TensorFlow Lite [75] is TensorFlow's lightweight solution designed for mobile and edge devices. TensorFlow was developed by Google in 2016 and has become one of the most widely used deep learning frameworks in cloud data centers. To enable low-latency inference of on-device deep learning models, TensorFlow Lite leverages many optimization techniques, including optimizing the kernels for mobile apps, pre-fused activations, and quantized kernels that allow smaller and faster (fixed-point math) models.
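The fixed-point quantization mentioned above can be illustrated generically. The sketch below is a textbook 8-bit affine quantization, not TensorFlow Lite's actual implementation: float weights are mapped to unsigned 8-bit integers via a scale and zero point, shrinking storage roughly 4x and enabling integer arithmetic at inference time.

```python
# Generic 8-bit affine quantization sketch (illustrative, not TensorFlow
# Lite's kernels): map float values onto uint8 with a scale and zero
# point; dequantization recovers each value to within one quantization step.

def quantize(values, num_bits=8):
    lo, hi = min(values), max(values)
    qmax = 2 ** num_bits - 1
    scale = (hi - lo) / qmax if hi != lo else 1.0
    zero_point = round(-lo / scale)          # integer that represents 0.0
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.5, -0.1, 0.0, 0.2, 0.4]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
# Each recovered weight is within one quantization step (scale) of the original.
assert all(abs(w - r) <= scale for w, r in zip(weights, recovered))
```

Production toolchains additionally calibrate per-channel scales and quantize activations, but the storage and speed benefit comes from this same integer mapping.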

Facebook published Caffe2 [76] as a lightweight, modular, and scalable framework for deep learning in 2017. Caffe2 is a new version of Caffe, which was first developed by UC Berkeley AI Research (BAIR) and community contributors. Caffe2 provides an easy and straightforward way to experiment with deep learning and leverage community contributions of new models and algorithms. Compared with the original Caffe framework, Caffe2 merges many new computation patterns, including distributed computation, mobile, reduced precision computation, and more non-vision use cases. Caffe2 supports multiple platforms, which enables developers to use the power of GPUs in the cloud or at the edge with cross-platform libraries.

PyTorch [71] is published by Facebook. It is a Python package that provides two high-level features: tensor computation with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. Maintained by the same company (Facebook), PyTorch and Caffe2 have their own advantages. PyTorch is geared toward research, experimentation, and trying out exotic neural networks, while Caffe2 supports more industrial-strength applications with a heavy focus on mobile. In 2018, the Caffe2 and PyTorch projects merged into a new one named PyTorch 1.0, which combines the user experience of the PyTorch frontend with the scaling, deployment, and embedding capabilities of the Caffe2 backend.

MXNet [72] is a flexible and efficient library for deep learning. It was initially developed by the University of Washington and Carnegie Mellon University to support CNNs and long short-term memory networks (LSTM). In 2017, Amazon announced MXNet as its choice of deep learning framework. MXNet places a special emphasis on speeding up the development and deployment of large-scale deep neural networks. It is designed to support multiple different platforms (either cloud platforms or edge ones) and can execute both training and inference tasks. Furthermore, besides devices based on the Windows, Linux, and OSX operating systems, it also supports the Ubuntu Arch64 and Raspbian ARM based operating systems.

CoreML [73] is a deep learning framework optimized for on-device performance in terms of memory footprint and power consumption. Published by Apple, it lets users integrate trained machine learning models into Apple products, such as Siri, Camera, and QuickType. CoreML supports not only deep learning models, but also some standard models such as tree ensembles, SVMs, and generalized linear models. Built on top of low-level technologies, CoreML aims to make full use of CPU and GPU capability and ensure the performance and efficiency of data processing.

The TensorRT [74] platform acts as a deep learning inference engine to run models trained by TensorFlow, Caffe, and other frameworks. Developed by NVIDIA, it is designed to reduce latency and increase throughput when executing inference tasks on NVIDIA GPUs. To achieve computing acceleration, TensorRT leverages several techniques, including weight and activation precision calibration, layer and tensor fusion, kernel auto-tuning, dynamic tensor memory, and multi-stream execution.


TABLE III
COMPARISON OF DEEP LEARNING SYSTEMS ON EDGE

Feature        | AWS IoT Greengrass                                      | Azure IoT Edge                                            | Cloud IoT Edge
Developer      | Amazon                                                  | Microsoft                                                 | Google
Components     | IoT Greengrass Core, IoT Device SDK, IoT Greengrass SDK | IoT Edge modules, IoT Edge runtime, Cloud-based interface | Edge Connect, Edge ML, Edge TPU
OS             | Linux, macOS, Windows                                   | Windows, Linux, macOS                                     | Linux, macOS, Windows, Android
Target device  | Multiple platforms (GPU-based, Raspberry Pi)            | Multiple platforms                                        | TPU
Characteristic | Flexible                                                | Windows friendly                                          | Real-time

Considering the different performance of the packages and the diversity of edge hardware, it is challenging to choose a suitable package to build edge computing systems. To evaluate the deep learning frameworks at the edge and provide a reference for selecting appropriate combinations of package and edge hardware, pCAMP [77] is proposed. It compares the packages' performance (w.r.t. latency, memory footprint, and energy) on five edge devices and observes that no framework wins over all the others in all aspects, which indicates that there is much room to improve the frameworks at the edge. Currently, developing a lightweight, efficient, and highly scalable framework to support diverse deep learning models at the edge could not be more important and urgent.

In addition to these single-device based frameworks, more researchers focus on distributing deep learning models over the cloud and edge. DDNN [78] is a distributed deep neural network architecture across the cloud, the edge, and edge devices. DDNN maps sections of a deep neural network onto different computing devices to minimize communication and resource usage for devices and maximize the usefulness of the features extracted and uploaded to the cloud.

Neurosurgeon [79] is a lightweight scheduler which can automatically partition DNN computation between mobile devices and datacenters at the granularity of neural network layers. By effectively leveraging the resources in the cloud and at the edge, Neurosurgeon achieves low computing latency, low energy consumption, and high traffic throughput.
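The layer-granularity partitioning idea can be sketched as a search over split points. The per-layer latencies and activation sizes below are hypothetical profiling numbers, not Neurosurgeon's measured models: layers before the split run on the device, the intermediate activation is uploaded, and the remaining layers run in the cloud.

```python
# Sketch of layer-granularity DNN partitioning in the spirit of
# Neurosurgeon [79]. The search minimizes device time + upload time +
# cloud time over all split points; all profile numbers are illustrative.

def best_split(device_ms, cloud_ms, activation_kb, uplink_kbps):
    """Return (split_index, total_ms): layers [0, split) run on-device."""
    n = len(device_ms)
    best = None
    for split in range(n + 1):
        upload_ms = 0.0
        if split < n:  # ship the input of the first cloud-side layer
            upload_ms = activation_kb[split] / uplink_kbps * 1000
        total = sum(device_ms[:split]) + upload_ms + sum(cloud_ms[split:])
        if best is None or total < best[1]:
            best = (split, total)
    return best

# Hypothetical 4-layer network: early layers produce large activations,
# so the best split waits until the data volume shrinks.
device_ms     = [20, 30, 40, 10]   # per-layer latency on the device
cloud_ms      = [2, 3, 4, 1]       # per-layer latency in the cloud
activation_kb = [500, 400, 50, 5]  # size of the data entering each layer
split, total = best_split(device_ms, cloud_ms, activation_kb, uplink_kbps=1000)
# split == 3: run three layers locally, upload the small 5 KB activation
```

The same enumeration can minimize energy instead of latency by swapping the per-layer cost model, which is how [79] supports both objectives.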

C. Hardware System

The hardware designed specifically for deep learning can strongly support edge computing. Thus, we further review relevant hardware systems and classify them into three categories: FPGA-based hardware, GPU-based hardware, and ASIC.

1) FPGA-based Hardware: A field-programmable gate array (FPGA) is an integrated circuit that can be configured by the customer or designer after manufacturing. FPGA based accelerators can achieve high performance computing with low energy, high parallelism, high flexibility, and high security [80].

The work in [81] implements a CNN accelerator on a VC707 FPGA board. The accelerator focuses on solving the problem that the computation throughput does not match the memory bandwidth well. By quantitatively analyzing the two factors using various optimization techniques, the authors provide a solution with better performance and lower FPGA resource requirements; their solution achieves a peak performance of 61.62 GOPS under a 100 MHz working frequency.

Following the above work, Qiu et al. [82] propose a CNN accelerator designed upon an embedded FPGA, the Xilinx Zynq ZC706, for large-scale image classification. It presents an in-depth analysis of state-of-the-art CNN models and shows that convolutional layers are computation-centric and fully-connected layers are memory-centric. The average performance of the CNN accelerator at convolutional layers and for the full CNN is 187.8 GOPS and 137.0 GOPS, respectively, under a 150 MHz working frequency, which outperforms previous approaches significantly.

An efficient speech recognition engine (ESE) is designed to speed up predictions and save energy when applying the LSTM deep learning model. ESE is implemented on a Xilinx XCKU060 FPGA operating at 200 MHz. For a sparse LSTM network, it can achieve 282 GOPS, corresponding to 2.52 TOPS on the dense LSTM network. Besides, energy efficiency improvements of 40x and 11.5x are achieved compared with CPU and GPU based solutions, respectively.

2) GPU-based Hardware: GPUs can execute parallel programs at a much higher speed than CPUs, which makes them fit the computational paradigm of deep learning algorithms. Thus, to run deep learning models at the edge, building the hardware platform with GPUs is a natural choice. Specifically, NVIDIA Jetson TX2 and DRIVE PX2 are two representative GPU-based hardware platforms for deep learning.

NVIDIA Jetson TX2 [83] is an embedded AI computing device designed to achieve low latency and high power efficiency. It is built upon an NVIDIA Pascal GPU with 256 CUDA cores, an HMP Dual Denver CPU, and an ARM Cortex-A57 CPU. It is loaded with 8 GB of memory with 59.7 GB/s of memory bandwidth, and its power is about 7.5 watts. The GPU is used to execute deep learning tasks while the CPUs are used to maintain general tasks. It also supports the NVIDIA JetPack SDK, which includes libraries for deep learning, computer vision, GPU computing, and multimedia processing.

NVIDIA DRIVE PX [84] is designed as an AI supercomputer for autonomous driving. The architecture is available in a variety of configurations, from a mobile processor operating at 10 watts to a multi-chip AI processor delivering 320 TOPS. It can fuse data from multiple cameras, as well as lidar, radar, and ultrasonic sensors.

3) ASIC: An Application-Specific Integrated Circuit (ASIC) is an integrated circuit that supports customized design for a particular application, rather than general-purpose use. ASICs are suitable for the edge scenario as they usually have a smaller size, lower power consumption, higher performance, and higher security than many other circuits. Researchers and developers design ASICs to meet the computing patterns of deep learning.

TABLE IV
COMPARISON OF DEEP LEARNING PACKAGES ON EDGE

Framework       | Developer    | License         | Task                | Target Device            | Characteristic
TensorFlow Lite | Google       | Apache-2.0      | Inference           | Mobile/embedded devices  | Low latency
Caffe2          | Facebook     | Apache-2.0      | Training, Inference | Multiple platforms       | Lightweight, modular, and scalable
PyTorch         | Facebook     | BSD             | Training, Inference | Multiple platforms       | Research-oriented
MXNet           | DMLC, Amazon | Apache-2.0      | Training, Inference | Multiple platforms       | Large-scale deep neural networks
CoreML          | Apple        | Not open source | Inference           | Apple devices            | Memory footprint and power consumption
TensorRT        | NVIDIA       | Not open source | Inference           | NVIDIA GPU               | Latency and throughput

The DianNao family [85] is a series of hardware accelerators designed for deep learning models, including DianNao, DaDianNao, ShiDianNao, and PuDianNao. These works investigate accelerator architectures to minimize memory transfers as efficiently as possible. Different from the other accelerators in the DianNao family, ShiDianNao [86] focuses on image applications in embedded systems, which are widely used in edge computing scenarios. For CNN-based deep learning models, it provides a computing speed 30x faster than that of the NVIDIA K20M GPU, with a body area of 4.86 mm2 and a power of 320 mW.

Edge TPU [87] is Google's purpose-built ASIC for edge computing. It augments Google's Cloud TPU and Cloud IoT to provide an end-to-end infrastructure and facilitates the deployment of customers' AI-based solutions. In addition, Edge TPU combines custom hardware, open software, and state-of-the-art AI algorithms to achieve high performance within a small physical footprint and at low power consumption.

VI. KEY DESIGN ISSUES

The edge computing system manages various resources along the path from the cloud center to end devices, shielding the complexity and diversity of hardware and helping developers quickly design and deploy novel applications. To fully leverage these advantages, we discuss the following key issues which need attention when analyzing and designing a new edge computing system.

1) Mobility Support: Mobility support has two aspects: user mobility and resource mobility. User mobility refers to how to automatically migrate the current program state and necessary data when a user moves from one edge node's coverage domain to another, so that the service handover is seamless. Currently, Cloudlet and CloudPath provide service migration by terminating/finishing the existing task and starting a new VM/instance on the target edge node; fine-grained migration and target edge node prediction are not supported. Resource mobility refers to i) how to dynamically discover and manage available resources, including both long-term and short-term ones, and ii) when some edge devices are damaged, how the system can be resumed as soon as possible with replacements. For example, PCloud and FocusStack support edge devices dynamically joining and leaving when no task is running. Intelligent dynamic resource management is still required for edge computing systems.

2) Multi-user Fairness: For edge devices with limited resources, a key question is how to ensure fairness among multiple users, especially for shared and scarce resources. For example, a smartphone made up of various sensors and computing resources can act as an edge node to serve multiple users. However, as the smartphone has limited battery life, it is a problem to fairly allocate resources when receiving many requests. The resource competition is more intense when the requests come from different unrelated tasks, and it is hard to decide who should use the resource with only local information. Besides, unfairness can be exploited as an attack strategy to starve a critical task of resources, which may lead to main task failure. Existing edge computing systems do not pay much attention to multi-user fairness; the basic (bottom-line) solution is to provide resource isolation, where users only get what the edge node promised when it accepted the request. More sophisticated fairness strategies require system support, such as related task status updates.
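One classical policy that goes beyond plain isolation is max-min fair allocation of a shared resource. The sketch below is not drawn from any specific system surveyed; it illustrates how an edge node could divide a scarce resource (e.g., battery-limited compute) among competing requests so that small demands are fully served and the remainder is split evenly.

```python
# Illustrative max-min fair allocation of a single shared resource among
# users with different demands. Not the policy of any surveyed system;
# shown as one classical fairness baseline an edge node could adopt.

def max_min_fair(demands, capacity):
    """Give every user an equal share, redistributing unused headroom."""
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        still_active = []
        for i in active:
            give = min(share, demands[i] - alloc[i])  # never exceed the demand
            alloc[i] += give
            remaining -= give
            if alloc[i] < demands[i]:
                still_active.append(i)
        if len(still_active) == len(active):
            break  # nobody was capped: capacity is exhausted at equal shares
        active = still_active
    return alloc

# Three requests against 10 units of capacity: the 2-unit request is fully
# served, and the two 8-unit requests split the remainder (about 4 each).
allocation = max_min_fair([2, 8, 8], 10)
```

The attack mentioned above (starving a critical task) is exactly what this policy bounds: no user can push another below the equal share it is entitled to.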

3) Privacy Protection: Unlike cloud computing, edge devices can be privately owned, such as gateway devices for smart home systems. When other users use such edge devices, obtain their data, or even take control of them, how to ensure the owner's privacy and the guest users' data privacy is important. AirBox is a lightweight, flexible edge function system which leverages hardware security mechanisms (e.g., Intel SGX) to enhance system security. Other systems have not paid much attention to privacy and security. Existing cloud security approaches can be applied with consideration of the resource limitations. Enhancing resource isolation and setting up privilege management and access control policies are potential directions for solving the privacy problem.

4) Developer Friendliness: The system ultimately provides hardware interaction and basic services for upper-level applications. How to design interactive APIs, the program deployment module, resource application and revocation, etc., are key factors for the system to be widely used. Therefore, to design an edge computing system, we should think from an application developer's perspective. Specifically, providing effective development and deployment services is a good way to help improve the ecosystem of an edge computing system. For instance, EdgeX Foundry and Apache Edgent provide APIs to manage devices and analyze data, respectively. ParaDrop supports monitoring device and application status through developer APIs, and provides a web UI for users to manage/configure the gateway as well as deploy applications to devices.


5) Multi-domain Management and Cooperation: Edge computing involves multiple types of resources, each of which may belong to a different owner: for example, the smart home gateways and sensors of the house owner, network resources and base stations of the Internet service providers (ISPs), and traffic cameras owned by the government. How to access these resources and organize them according to the requirements of applications and services, especially in emergency situations, is a problem that the edge computing system needs to consider. Existing research assumes we have permission to use various devices belonging to different owners. In this initial stage, systems focus on the functionality perspective rather than deployment issues such as the pricing model. With the development of edge computing, these real deployment issues should be solved before we can enjoy all the benefits.

6) Cost Model: In cloud computing, the corresponding virtual machine can be allocated based on the resources requested by the user, and the cost model can be given according to resource usage. In edge computing, an application may use resources from different owners. Thus, how to measure resource usage, calculate the overall overhead, and give an appropriate pricing model are crucial problems when deploying an edge computing system. Generally, edge computing systems focus on how to satisfy the resource requirements of various services and applications; some of them pay more attention to energy consumption due to the nature of mobile nodes, but more complex overheads are not considered.

7) Compatibility: Currently, specialized edge computing applications are still quite few. For example, ParaDrop applications require an additional XML configuration file to specify resource usage requirements, and SpanEdge needs developers to divide/configure tasks into local tasks and global tasks. Common applications are not directly supported to run in edge systems. How to automatically and transparently convert existing programs to edge versions and conveniently leverage the advantages of edge computing are still open problems. Compatibility should be considered in edge system design. Specifically, how to adapt traditional applications to the new architecture and realize the basic functions so that they can run successfully deserves further exploration and development.

VII. CONCLUSIONS

Edge computing is a new paradigm that jointly migrates networking, computation, and storage capabilities from the remote cloud to the user side. In the context of IoT and 5G, the vision of edge computing is promising in providing smarter services and applications with better user experiences. The recently proposed systems and tools for edge computing generally reduce the data processing and transmission overhead and improve the efficiency and efficacy of mobile data analytics. Additionally, the integration of edge computing and deep learning techniques further fosters research on edge-based intelligence services. This article introduced representative systems and open source projects for edge computing, presented several energy efficiency enhancing strategies and technologies for deploying deep learning models at the edge, and suggested a few research directions for system design and analysis.

REFERENCES

[1] M. Satyanarayanan, V. Bahl, R. Caceres, and N. Davies, “The case forvm-based cloudlets in mobile computing,” IEEE pervasive Computing,2009.

[2] (2018) Open edge computing. [Online]. Available: http://openedgecomputing.org/

[3] Z. Li, C. Wang, and R. Xu, “Computation offloading to save energyon handheld devices: a partition scheme,” in Proceedings of the 2001international conference on Compilers, architecture, and synthesis forembedded systems. ACM, 2001, pp. 238–246.

[4] K. Ha and M. Satyanarayanan, “Openstack++ for cloudlet deployment,”School of Computer Science Carnegie Mellon University Pittsburgh,2015.

[5] K. Ha, P. Pillai, W. Richter, Y. Abe, and M. Satyanarayanan, “Just-in-time provisioning for cyber foraging,” in Proceeding of the 11th annualinternational conference on Mobile systems, applications, and services.ACM, 2013, pp. 153–166.

[6] K. Ha, Z. Chen, W. Hu, W. Richter, P. Pillai, and M. Satyanarayanan,“Towards wearable cognitive assistance,” in Proceedings of the 12thannual international conference on Mobile systems, applications, andservices. ACM, 2014, pp. 68–81.

[7] M. Satyanarayanan, P. Simoens, Y. Xiao, P. Pillai, Z. Chen, K. Ha,W. Hu, and B. Amos, “Edge analytics in the internet of things,” IEEEPervasive Computing, no. 2, pp. 24–31, 2015.

[8] M. Satyanarayanan, G. Lewis, E. Morris, S. Simanta, J. Boleng, andK. Ha, “The role of cloudlets in hostile environments,” IEEE PervasiveComputing, vol. 12, no. 4, pp. 40–49, 2013.

[9] S. H. Mortazavi, M. Salehe, C. S. Gomes, C. Phillips, and E. de Lara,“Cloudpath: a multi-tier cloud computing framework,” in Proceedings ofthe Second ACM/IEEE Symposium on Edge Computing. ACM, 2017,p. 20.

[10] M. Jang, K. Schwan, K. Bhardwaj, A. Gavrilovska, and A. Avasthi, “Per-sonal clouds: Sharing and integrating networked resources to enhanceend user experiences,” in Proceedings of IEEE INFOCOM. IEEE, 2014,pp. 2220–2228.

[11] M. Jang and K. Schwan, “Stratus: Assembling virtual platforms fromdevice clouds,” in Proceedings of the IEEE International Conference onCloud Computing. IEEE, 2011, pp. 476–483.

[12] P. Liu, D. Willis, and S. Banerjee, “Paradrop: Enabling lightweightmulti-tenancy at the networks extreme edge,” in 2016 IEEE/ACMSymposium on Edge Computing (SEC), Oct 2016, pp. 1–13.

[13] H. P. Sajjad, K. Danniswara, A. Al-Shishtawy, and V. Vlassov,“Spanedge: Towards unifying stream processing over central and near-the-edge data centers,” in IEEE/ACM Symposium on Edge Computing(SEC). IEEE, 2016, pp. 168–178.

[14] Z.-W. Xu, “Cloud-sea computing systems: Towards thousand-fold im-provement in performance per watt for the coming zettabyte era,”Journal of Computer Science and Technology, vol. 29, no. 2, pp. 177–181, 2014.

[15] R. T. Fielding and R. N. Taylor, Architectural styles and the design ofnetwork-based software architectures. University of California, IrvineIrvine, USA, 2000, vol. 7.

[16] U. Drolia, K. Guo, J. Tan, R. Gandhi, and P. Narasimhan, “Cachier:Edge-caching for recognition applications,” in 2017 IEEE 37th Interna-tional Conference on Distributed Computing Systems (ICDCS). IEEE,2017, pp. 276–286.

[17] U. Drolia, K. Guo, and P. Narasimhan, “Precog: prefetching for imagerecognition applications at the edge,” in Proceedings of the SecondACM/IEEE Symposium on Edge Computing. ACM, 2017, p. 17.

[18] B. Amento, B. Balasubramanian, R. J. Hall, K. Joshi, G. Jung, andK. H. Purdy, “Focusstack: Orchestrating edge clouds using location-based focus of attention,” in IEEE/ACM Symposium on Edge Computing(SEC). IEEE, 2016, pp. 179–191.

[19] K. Bhardwaj, M.-W. Shih, P. Agarwal, A. Gavrilovska, T. Kim, andK. Schwan, “Fast, scalable and secure onloading of edge functions usingairbox,” in IEEE/ACM Symposium on Edge Computing (SEC). IEEE,2016, pp. 14–27.

[20] Q. Zhang, X. Zhang, Q. Zhang, W. Shi, and H. Zhong, “Firework: Big data sharing and processing in collaborative edge environment,” in Proceedings of the IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb). IEEE, 2016, pp. 20–25.

[21] Q. Zhang, Y. Wang, X. Zhang, L. Liu, X. Wu, W. Shi, and H. Zhong, “Openvdap: an open vehicular data analytics platform for cavs,” in Proceedings of the IEEE 38th International Conference on Distributed Computing Systems (ICDCS), 2018.

[22] L. Liu, X. Zhang, M. Qiao, and W. Shi, “Safeshareride: edge-based attack detection in ridesharing services,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018.

[23] R. Trimananda, A. Younis, B. Wang, B. Xu, B. Demsky, and G. Xu, “Vigilia: securing smart home edge computing,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 74–89.

[24] I. Zavalyshyn, N. O. Duarte, and N. Santos, “Homepad: a privacy-aware smart hub for home environments,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 58–73.

[25] S. Yi, Z. Hao, Q. Zhang, Q. Zhang, W. Shi, and Q. Li, “Lavea: latency-aware video analytics on edge computing platform,” in Proceedings of the Second ACM/IEEE Symposium on Edge Computing (SEC). ACM, 2017, p. 15.

[26] C.-C. Hung, G. Ananthanarayanan, P. Bodik, L. Golubchik, M. Yu, P. Bahl, and M. Philipose, “Videoedge: processing camera streams using hierarchical clusters,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 115–131.

[27] L. Tong, Y. Li, and W. Gao, “A hierarchical edge cloud architecture for mobile computing,” in Proceedings of the 35th Annual IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2016, pp. 1–9.

[28] J. Wang, Z. Feng, Z. Chen, S. George, M. Bala, P. Pillai, S.-W. Yang, and M. Satyanarayanan, “Bandwidth-efficient live video analytics for drones via edge computing,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 159–173.

[29] Y. Li and W. Gao, “Muvr: supporting multi-user mobile virtual reality with resource constrained edge cloud,” in Proceedings of the IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 1–16.

[30] “Akraino edge stack,” 2018, https://www.akraino.org.

[31] “Cord,” 2018, https://www.opennetworking.org/cord.

[32] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, “A survey of software-defined networking: Past, present, and future of programmable networks,” IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[33] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, “Nfv: state of the art, challenges, and implementation in next generation mobile networks (vepc),” IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[34] A. W. Manggala, Hendrawan, and A. Tanwidjaja, “Performance analysis of white box switch on software defined networking using open vswitch,” in International Conference on Wireless and Telematics, 2016.

[35] K. C. Okafor, I. E. Achumba, G. A. Chukwudebe, and G. C. Ononiwu, “Leveraging fog computing for scalable iot datacenter using spine-leaf network topology,” Journal of Electrical and Computer Engineering, vol. 2017, no. 2363240, pp. 1–11, 2017.

[36] “Edgex foundry,” 2018, https://www.edgexfoundry.org.

[37] “Apache edgent,” 2018, http://edgent.apache.org.

[38] “Azure iot,” 2018, https://azure.microsoft.com/en-us/overview/iot/.

[39] R. Morabito, “Virtualization on internet of things edge devices with container technologies: a performance evaluation,” IEEE Access, vol. 5, pp. 8835–8850, 2017.

[40] T. Mastelic, A. Oleksiak, H. Claussen, I. Brandic, J.-M. Pierson, and A. V. Vasilakos, “Cloud computing: Survey on energy efficiency,” ACM Computing Surveys (CSUR), vol. 47, no. 2, p. 33, 2015.

[41] V. Valancius, N. Laoutaris, L. Massoulie, C. Diot, and P. Rodriguez, “Greening the internet with nano data centers,” in Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies. ACM, 2009, pp. 37–48.

[42] S. Sarkar and S. Misra, “Theoretical modelling of fog computing: A green computing paradigm to support iot applications,” IET Networks, vol. 5, no. 2, pp. 23–29, 2016.

[43] A. Feldmann, A. Gladisch, M. Kind, C. Lange, G. Smaragdakis, and F.-J. Westphal, “Energy trade-offs among content delivery architectures,” in Telecommunications Internet and Media Techno Economics (CTTE), 2010 9th Conference on. IEEE, 2010, pp. 1–6.

[44] F. Jalali, K. Hinton, R. Ayre, T. Alpcan, and R. S. Tucker, “Fog computing may help to save energy in cloud computing,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 5, pp. 1728–1739, 2016.

[45] G. Tang, H. Wang, K. Wu, and D. Guo, “Tapping the knowledge of dynamic traffic demands for optimal cdn design,” IEEE/ACM Transactions on Networking (TON), vol. 27, no. 1, pp. 98–111, 2019.

[46] G. Tang, K. Wu, and R. Brunner, “Rethinking cdn design with distributed time-varying traffic demands,” in INFOCOM. IEEE, 2017, pp. 1–9.

[47] G. Tang, H. Wang, K. Wu, D. Guo, and C. Zhang, “When more may not be better: Toward cost-efficient cdn selection,” in INFOCOM WKSHPS. IEEE, 2018, pp. 1–2.

[48] G. Lewis, S. Echeverría, S. Simanta, B. Bradshaw, and J. Root, “Tactical cloudlets: Moving cloud computing to the edge,” in Military Communications Conference (MILCOM), 2014 IEEE. IEEE, 2014, pp. 1440–1446.

[49] H. Dubey, J. Yang, N. Constant, A. M. Amiri, Q. Yang, and K. Makodiya, “Fog data: Enhancing telehealth big data through fog computing,” in Proceedings of the ASE BigData & SocialInformatics 2015. ACM, 2015, p. 14.

[50] A. Asnaghi, M. Ferroni, and M. D. Santambrogio, “Dockercap: A software-level power capping orchestrator for docker containers,” in Computational Science and Engineering (CSE) and IEEE Intl Conference on Embedded and Ubiquitous Computing (EUC) and 15th Intl Symposium on Distributed Computing and Applications for Business Engineering (DCABES), 2016 IEEE Intl Conference on. IEEE, 2016, pp. 90–97.

[51] T. Rausch, C. Avasalcai, and S. Dustdar, “Portable energy-aware cluster-based edge computers,” in 2018 IEEE/ACM Symposium on Edge Computing (SEC). IEEE, 2018, pp. 260–272.

[52] Y. Nan, W. Li, W. Bao, F. C. Delicato, P. F. Pires, Y. Dou, and A. Y. Zomaya, “Adaptive energy-aware computation offloading for cloud of things systems,” IEEE Access, vol. 5, pp. 23947–23957, 2017.

[53] C. Li, Y. Hu, L. Liu, J. Gu, M. Song, X. Liang, J. Yuan, and T. Li, “Towards sustainable in-situ server systems in the big data era,” in ACM SIGARCH Computer Architecture News, vol. 43, no. 3. ACM, 2015, pp. 14–26.

[54] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, 2017.

[55] A. Rudenko, P. Reiher, G. J. Popek, and G. H. Kuenning, “Saving portable computer battery power through remote process execution,” ACM SIGMOBILE Mobile Computing and Communications Review, vol. 2, no. 1, pp. 19–26, 1998.

[56] R. Kemp, N. Palmer, T. Kielmann, F. Seinstra, N. Drost, J. Maassen, and H. Bal, “eyedentify: Multimedia cyber foraging from a smartphone,” in 2009 11th IEEE International Symposium on Multimedia. IEEE, 2009, pp. 392–399.

[57] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, and P. Bahl, “Maui: making smartphones last longer with code offload,” in Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services. ACM, 2010, pp. 49–62.

[58] K. Ha, “System infrastructure for mobile-cloud convergence,” Ph.D. dissertation, Carnegie Mellon University, 2016.

[59] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, “Clonecloud: elastic execution between mobile device and cloud,” in Proceedings of the Sixth Conference on Computer Systems. ACM, 2011, pp. 301–314.

[60] M. V. Barbera, S. Kosta, A. Mei, and J. Stefa, “To offload or not to offload? the bandwidth and energy costs of mobile cloud computing,” in INFOCOM, 2013 Proceedings IEEE. IEEE, 2013, pp. 1285–1293.

[61] A. Fahim, A. Mtibaa, and K. A. Harras, “Making the case for computational offloading in mobile device clouds,” in Proceedings of the 19th Annual International Conference on Mobile Computing & Networking. ACM, 2013, pp. 203–205.

[62] W. Liu, T. Nishio, R. Shinkuma, and T. Takahashi, “Adaptive resource discovery in mobile cloud computing,” Computer Communications, vol. 50, pp. 119–129, 2014.

[63] S. Abdelwahab, B. Hamdaoui, M. Guizani, and T. Znati, “Replisom: Disciplined tiny memory replication for massive iot devices in lte edge cloud,” IEEE Internet of Things Journal, vol. 3, no. 3, pp. 327–338, 2016.

[64] S. Abdelwahab, B. Hamdaoui, M. Guizani, T. Znati, T. Leppänen, and J. Riekki, “Energy efficient opportunistic edge computing for the internet of things,” Web Intelligence and Agent Systems, in press, 2018.

[65] P. P. Jayaraman, J. B. Gomes, H. L. Nguyen, Z. S. Abdallah, S. Krishnaswamy, and A. Zaslavsky, “Cardap: A scalable energy-efficient context aware distributed mobile data analytics platform for the fog,” in East European Conference on Advances in Databases and Information Systems. Springer, 2014, pp. 192–206.

[66] X. Zhang, Y. Wang, S. Lu, L. Liu, L. Xu, and W. Shi, “OpenEI: An Open Framework for Edge Intelligence,” in 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), July 2019.

[67] (2019) Aws iot greengrass. [Online]. Available: https://aws.amazon.com/greengrass/

[68] (2019) Cloud iot edge: Deliver google ai capabilities at the edge. [Online]. Available: https://cloud.google.com/iot-edge/

[69] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., “Tensorflow: A system for large-scale machine learning,” in OSDI, vol. 16, 2016, pp. 265–283.

[70] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 2014, pp. 675–678.

[71] (2019) From research to production. [Online]. Available: https://pytorch.org

[72] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang, “Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems,” in LearningSys at NIPS. ACM, 2015.

[73] (2019) Core ml: Integrate machine learning models into your app. [Online]. Available: https://developer.apple.com/documentation/coreml

[74] (2019) Nvidia tensorrt: Programmable inference accelerator. [Online]. Available: https://developer.nvidia.com/tensorrt

[75] (2019) Introduction to tensorflow lite. [Online]. Available: https://www.tensorflow.org/mobile/tflite/

[76] (2019) Caffe2: A new lightweight, modular, and scalable deep learning framework. [Online]. Available: https://caffe2.ai/

[77] X. Zhang, Y. Wang, and W. Shi, “pcamp: Performance comparison of machine learning packages on the edges,” in USENIX Workshop on Hot Topics in Edge Computing (HotEdge 18). Boston, MA: USENIX Association, 2018. [Online]. Available: https://www.usenix.org/conference/hotedge18/presentation/zhang

[78] S. Teerapittayanon, B. McDanel, and H. Kung, “Distributed deep neural networks over the cloud, the edge and end devices,” in Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on. IEEE, 2017, pp. 328–339.

[79] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, “Neurosurgeon: Collaborative intelligence between the cloud and mobile edge,” ACM SIGPLAN Notices, vol. 52, no. 4, pp. 615–629, 2017.

[80] T. Wang, C. Wang, X. Zhou, and H. Chen, “A survey of fpga based deep learning accelerators: Challenges and opportunities,” arXiv preprint arXiv:1901.04988, 2018.

[81] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks,” in Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2015, pp. 161–170.

[82] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song et al., “Going deeper with embedded fpga platform for convolutional neural network,” in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2016, pp. 26–35.

[83] (2019) Nvidia jetson tx2 module. [Online]. Available: https://developer.nvidia.com/embedded/buy/jetson-tx2

[84] (2019) Nvidia drive px: Scalable ai supercomputer for autonomous driving. [Online]. Available: https://www.nvidia.com/en-au/self-driving-cars/drive-px/

[85] Y. Chen, T. Chen, Z. Xu, N. Sun, and O. Temam, “Diannao family: energy-efficient hardware accelerators for machine learning,” Communications of the ACM, vol. 59, no. 11, pp. 105–112, 2016.

[86] Z. Du, R. Fasthuber, T. Chen, P. Ienne, L. Li, T. Luo, X. Feng, Y. Chen, and O. Temam, “Shidiannao: Shifting vision processing closer to the sensor,” in ACM SIGARCH Computer Architecture News, vol. 43, no. 3. ACM, 2015, pp. 92–104.

[87] (2019) Edge tpu: Google’s purpose-built asic designed to run inference at the edge. [Online]. Available: https://cloud.google.com/edge-tpu/