
Uncertainty Analysis of Middleware Architectures for Streaming Smart Grid Applications

Ilge Akkaya, Student Member, IEEE, Yan Liu, Member, IEEE, and Edward A. Lee, Fellow, IEEE

Abstract—Accuracy and responsiveness are two key properties of emerging cyber-physical energy systems (CPES) that need to incorporate high-throughput sensor streams for distributed monitoring and control purposes. The electric power grid, which is a prominent example of such systems, is being integrated with high-throughput sensors in order to support stable system dynamics that are provisioned to be utilized in real-time supervisory control applications. The end-to-end performance and overall scalability of cyber-physical energy applications depend on robust middleware that is able to operate with variable resources and multi-source sensor data. This leads to uncertain behavior under highly variable sensor and middleware topologies. We present a parametric approach to modeling middleware for distributed power applications and account for temporal satisfiability of system properties under network resource and data volume uncertainty. We present a heterogeneous modeling framework that combines Monte Carlo simulations of uncertainty parameters within an executable Discrete-Event (DE) middleware model. By employing Monte Carlo simulations followed by regression analysis, we determine system parameters that significantly affect limiting middleware behavior and the achievability of temporal requirements.

I. INTRODUCTION

The smart grid is the effort to transform the existing electrical power grid infrastructure to be more intelligent, accurate, and adaptive by jointly employing technologies from the areas of communications, control, and computing. Today, power grid systems are incorporating high-throughput sensor devices into power distribution networks. As Figure 1 demonstrates, power grid operators rely on streaming sensor data to make real-time control decisions. With high-fidelity and trustworthy phasor data from hundreds of thousands of measurement points within the Wide Area Measurement System (WAMS), the future electric power grid will enable multiple advanced distributed control techniques. Phasor Measurement Units (PMUs) provide real-time and precisely time-stamped measurements, a feature that enables the measurements to be time-aligned and aggregated for accurate real-time evaluation of grid state. The challenge of combining phasor measurements for Wide Area Monitoring and Control (WAMC) applications has been extensively investigated by power engineers and researchers [1]–[3]. From a distributed cyber-physical system (CPS) perspective, the challenge is to autonomously coordinate the data flow and access within distributed power systems [4], [5].

Middleware plays an important role in linking applications and data that may reside in different domains, enabling them to work together. Surveys presented in [7], [8] highlight that middleware should be developed as a bridge between systems and operators. The authors of [7] propose a user-centric middleware architecture design that spans three logical layers,

Fig. 1: National Institute of Standards and Technology (NIST) Smart Grid Conceptual Model: Operations [6]

namely, transmission, control, and the user. The transmission layer encompasses generation, distribution, and communication, which provides data transfer to the Advanced Metering Infrastructure (AMI), allowing users to configure energy consumption through smart devices. Even within the transmission layer, monitoring and control applications based on intensive analysis of measurements require certain middleware components to coordinate the data communication and analysis process.

In this middleware-centric architecture, a number of factors that either depend on environmental conditions or cannot be predetermined at the time of deployment affect the end-to-end response times of power applications, creating a challenge for realizing applications with strict timing requirements. Such unpredictable factors include:

– iterative and approximate distributed applications that run until global convergence of state has been achieved,

– sensor data quality, which is highly affected by sensor failures and environmental conditions,

– sensor placement, and
– smart grid partitioning decisions for distributed sensing and actuation.

These variations in smart grid design lead to considerable jitter in reliability, bandwidth, and delay, which are characterized as Quality of Service (QoS) attributes of integrated power applications. A systematic quantification of these factors gains importance in evaluating the typical and worst-case performance of smart grid applications, and these factors become metrics used in comparing alternative middleware architectures [9].

In this paper, we present a model-based approach to specifying uncertainty parameters in middleware architectures. We


consider the smart-grid communication fabric that entails the entire data flow from sensor devices, through network and middleware components, to distributed application nodes. We pursue a model-based design approach to (i) build executable Discrete-Event (DE) models of PMU networks, computation nodes, and the network fabric that interconnects the components, along with the middleware-layer functionality to process and aggregate data streams, and (ii) carry out a Monte Carlo (MC) simulation based approach to evaluate a parameter space that characterizes available sensor, middleware, and network resources in the distributed model.

This paper is structured as follows: Section II presents an overview of middleware architecture principles and infrastructure in the context of smart grid applications. Section III studies the modeling approach for domain-specific components of distributed power applications, followed by a detailed specification of the end-to-end DE model, as well as model parameterization and sampling methods. Section IV presents the methodology and results of parametric uncertainty analysis for middleware latency and a comparative performance analysis of data volume and latency requirements on two middleware architectures. Finally, related work is presented in Section V, followed by Section VI, which concludes the article.

II. MIDDLEWARE ARCHITECTURE OVERVIEW

To utilize sensor streams at power grid nodes from a large-scale transmission grid in a distributed application, the entire electric power grid topology is partitioned into non-overlapping subsystems that are connected via tie lines (buses). Each node of the distributed application, usually delegated to a balancing authority (BA) or a control center of a power utility, is able to run a local sub-application and consequently exchange data with neighboring substations for coordination. Middleware that mediates data exchange between partitions of the grid is integrated into the communication infrastructure. In this paper, we consider two middleware architecture scenarios that respectively perform (i) aggregation and dispatching, and (ii) mediation of data exchange only (without aggregation).

In the data-exchange-only middleware architecture, each area is assumed to share its local data stream with all other participating areas, as shown in Figure 2. No coordination or mediation is performed at the middleware level; the middleware only relays messages from the source to the destination. The main components of the middleware consist of the service endpoints defined by an Enterprise Service Bus (ESB) and a message broker that relays data streams.

For applications in which data streams collected from each local area are not fully accessible to other areas, or when only intermediate data need be shared between designated nodes, a middleware architecture with aggregation and dispatch functionality comes into play. Figure 3 demonstrates the conceptual system architecture for an aggregate-and-dispatch middleware architecture. In this scenario, local streams from each area are transferred into the middleware to be aggregated and time-aligned. Additionally, the middleware fabric detects time-aligned faulty readings from the sensor network, then merges and broadcasts this information to all remote participants for

Fig. 2: Data Exchange Only Middleware Architecture

Fig. 3: Aggregation and Dispatch Middleware Architecture

situational awareness [4]. An additional decision component, which is also part of the middleware, receives these intermediate results and notifies each area when a consensus on the power system state has been reached. Centralized aggregation also becomes desirable for applications in which broadcasting high-intensity data streams individually would incur significant network load and would impact end-to-end system latency.

A. Middleware for Centralized Aggregation

As an example application for which middleware that performs centralized aggregation proves necessary, we consider a distributed state estimation scenario that operates on PMU data streams. The middleware coordinates multiple data sources as inputs to the power application, which consumes (i) PMU sample files that contain measurements from a power node (typically collected at a sampling rate of 30 Hz), received at one-minute intervals from each sensor; (ii) a SCADA file received at 1-4 minute intervals; and (iii) an error log file that contains records of time stamp errors from all PMUs across areas. The middleware components that process these data sources are highlighted in green in Figure 4. Next, we characterize the middleware components that are required for this functional workflow.

Aggregation. At each estimation step, PMU sensor readings from the previous time interval are obtained from the local areas. The PMU Data Monitor detects the arrival of the PMU data file and pipelines the data stream to the Error Detection component for identification of records with time stamp errors. An error file containing the records for local PMUs is then sent to a message broker associated with a message queue denoted as the Input Queue. Arriving messages in the input queue are processed by the Aggregator, which combines and aligns the records in the error index files sent from distinct areas. The aggregator performs time alignment according to the GPS clock reference obtained via the Time Service that listens to one channel of PMU data over TCP.

Fig. 4: Middleware for coordinating multiple sources of data streams

The aggregate time stamp error index file is then broadcast to individual areas through the Broadcast Queue. This enables all areas to become aware of any time stamp for which at least one PMU reported erroneous data in the previous time interval. For consistency, all PMU records for any such time stamp are discarded, even when the local PMU measurements are of good time quality.
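A minimal Java sketch of this discard rule follows; the record type and method names are hypothetical, since the paper does not show the middleware API at this granularity:

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative record type: a PMU measurement with its GPS time stamp.
record PmuRecord(long timeStamp, double value) {}

public class ErrorIndexFilter {
    // Drops every record whose time stamp appears in the aggregate error
    // index, even if the local PMU reading itself had good time quality.
    public static List<PmuRecord> filter(List<PmuRecord> local,
                                         Set<Long> aggregateErrorIndex) {
        return local.stream()
                .filter(r -> !aggregateErrorIndex.contains(r.timeStamp()))
                .collect(Collectors.toList());
    }
}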

Convergence Control. One of the essential functionalities of the middleware is to moderate the consistency of state across participating areas. In our scenario, neighboring areas exchange state information through peer-to-peer communications. The Causality Control component monitors intermediate states to determine and declare global convergence.

Listing 1: Monitoring data files and filtering errors in middleware

1  // create a pipeline
2  MifPipeline pipeline = new MifPipeline();
3  // set the file connector
4  MifConnector conn = pipeline.addMifConnector(EndpointProtocol.FILE);
5  conn.setProperty("streaming", true);
6  conn.setProperty("autoDelete", true);
7  conn.setProperty("pollingFrequency", 60000);
8  conn.setProperty("fileAge", 60100);
9  conn.setProperty("moveToDirectory", DseMiddlewareProps.PMU_DSE_WORKING_DIR);
10 ...
11 // add a component to process the file
12 MifModule pmuAreaAModule = pipeline.addMifModule(
13     PmuRemoveErrorsFileProcessor.class, areaADataAppUri, "jms://topic:pmuIndexTopic");
14 pmuAreaAModule.setName("AreaA-PMU-RECEIVER");
15 ...
16 // start the pipeline; an arriving file will trigger PmuRemoveErrorsFileProcessor
17 pipeline.start();

B. Middleware Component Implementation

The aggregation and queuing behavior is built using Apache ActiveMQ [activemq.apache.org], an open-source message broker. The middleware components that connect to message queues are designed using MeDICi [10], an open-source middleware framework that wraps the complex APIs of the Mule Enterprise Service Bus. MeDICi has been utilized in other data-intensive analyses, and its scalability has been demonstrated in coordinating large-scale concurrent data analysis tasks [11].

As an example, the MeDICi implementation for a PMU Data Monitor routine is given in Listing 1 (lines 1-9). A file connector is added to a MifPipeline (the execution component of MeDICi) that detects any arriving files. The file connector implements services such as providing the file as a byte array or an input stream, polling at a configured frequency, and auto-deleting processed files. The file stream is then processed by a PmuRemoveErrorsFileProcessor (lines 11-14). This processor implements the Error Detection component. It extracts the records with clock errors and sends the error file to an aggregator. Its inbound endpoint (areaADataAppUri, line 13) refers to the directory in which the file connector monitors arriving files. The outbound endpoint specifies the URL (jms://topic:pmuIndexTopic, line 13) of the Input Queue.
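The internals of the processor are not listed in the paper; the following hedged sketch shows one plausible shape of the Error Detection logic, where the record layout and helper names are assumptions (only the class name comes from Listing 1):

// Hedged sketch; parsing details are assumptions, not the MeDICi API.
public class PmuRemoveErrorsFileProcessor {

    // Called by the pipeline with the contents of an arriving PMU file;
    // returns the error index file forwarded to the Input Queue.
    public String process(String pmuFile) {
        StringBuilder errorIndex = new StringBuilder();
        for (String line : pmuFile.split("\n")) {
            if (hasTimeStampError(line)) {
                errorIndex.append(timeStampOf(line)).append('\n');
            }
        }
        return errorIndex.toString();
    }

    // Assumed record layout: "<timeStamp>,<quality>,<measurements...>",
    // where quality flag "E" marks a GPS clock error.
    private boolean hasTimeStampError(String line) {
        String[] fields = line.split(",");
        return fields.length > 1 && "E".equals(fields[1]);
    }

    private String timeStampOf(String line) {
        return line.split(",")[0];
    }
}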

These MeDICi components and the queuing structure form the aggregation process of the middleware by merging multiple data sources as inputs to distributed power applications.

C. Uncertainty Parameters of Middleware Design

We evaluate a number of significant parameters that account for the uncertainty in the middleware-level end-to-end latency, based on the aggregate-and-dispatch architecture presented in Figure 4.

1) Number of PMU streams per area. For each PMU, the measurements to be extracted are already determined by the state estimation algorithm. Each PMU packet follows the IEEE C37.118 Synchrophasor Standard and contains multiple measurements of frequency, voltage magnitude and angle, current magnitude and angle, quality, and so on. In our testbed environment, a Java application listens to the broadcast from a PMU TCP socket, extracts the measurement data of interest, and generates a PMU data file in a one-minute cycle. Thus, at the top of every minute, the middleware is triggered to process as many PMU data files as there are available PMU devices.

2) Middleware Concurrency. Upon the arrival of a PMU file, the MeDICi component automatically creates a new thread instance for concurrent processing. The aggregation and queuing behavior is implemented using the ActiveMQ message broker. Application-specific requirements on middleware concurrency may require determining an optimal level of concurrency given the expected input data volume. For critical applications, availability requirements can be imposed to ensure peak loads are processed within the temporal application deadline.

3) Intermediate data exchange sessions until convergence. Intuitively, the number of intermediate data exchange sessions required until global convergence depends on the data quality, namely, the accuracy of PMU measurements. Such a relation is hard to formulate, as it relies heavily on the state estimation algorithm as well as the distribution of the noise level. In [12], the data noise is assumed to have a linear relation to the number of iterations until convergence and is investigated under a Gaussian distribution assumption. For specific applications, these assumptions may need to be re-calibrated.

Based on our experience discussed in [4], it was expensive and time-consuming to install a networked testbed that connects physical PMU devices to distributed state estimators running on remote High Performance Computing (HPC) clusters. In addition to the cost of the PMU devices, at least 200 engineering hours were spent solely on the integration of PMU-related software. Moreover, measuring the middleware and end-to-end response times in a testing environment has an even higher deployment cost, since it involves emulation scenarios over hundreds of PMUs for a single state estimation area alone. Considering the cost of actual deployments and the developing state of smart grid technology, for which no standardized middleware requirements have been set to date, a simulation-based approach that models uncertain parameters as random variables gains further value. We aim to determine the influence of the three fundamental sources of uncertainty listed above through our simulation studies. The variables of uncertainty are sampled from a parameter space and used to configure a simulation model that characterizes the simulated end-to-end and local latency behavior of distributed state estimation runs.

III. MODELING APPROACH

The conceptual modeling workflow is depicted in Figure 5. The end goal of our study is to identify the most significant uncertain parameters, modeled as random variables, in the middleware design.

Samples for each of these random variables are generated by the Monte Carlo (MC) method. MC methods are a family of algorithms that perform repeated random sampling from a random space to simulate complex random variables.

Fig. 5: Middleware design workflow guided by uncertainty analysis

We use the Ptolemy II [13] framework, a heterogeneous modeling and simulation environment extensively used in CPS design, for modeling and simulation purposes. We provide a hierarchical actor-oriented model of the middleware components and the data flow through the network and middleware fabrics to account for the temporal behavior of the system. In this work, the general elements that need to be modeled in particular are sensors, network, and middleware components. Communication delays, software scheduling, and timing properties are also considerations of the modeling process.

Note that Ptolemy II is used first to generate MC samples of tuples of parameters, which are then used to parameterize a Discrete-Event (DE) sub-model of the end-to-end communication flow of the distributed power grid application. The DE model is simulated to yield Monte Carlo samples of end-to-end completion times of the distributed state estimation application. Following the Monte Carlo simulation, input parameters and output samples are utilized for regression analysis, which is used to identify input variables that are statistically significant in explaining the temporal variation at the output.

A. Ptolemy II

Ptolemy II [13] is an actor-oriented design tool that provides an integrated modeling and simulation environment for the conceptual procedure in Figure 5. Ptolemy II has extensive actor libraries for modeling CPS and has been demonstrated to be an effective design platform by previous work [14]–[16]. A CPS model consists of physical processes that interact with models of network and computation platforms, usually including feedback relations.

Actors in Ptolemy II are concurrently executing components that communicate via events sent and received through input and output actor ports. An actor in Ptolemy II consists of a set of input and output ports along with an internal state, which itself can be a graphical submodel. A director implements a model of computation (MoC) and mediates actor execution at each level of the hierarchy, which is the mechanism that leverages the integration of multiple MoCs within a single model. The passage of time at each compositional level is governed by a local clock that allows defining different clock rates, clock drifts, and the relation of each local clock to the wall-clock time. For the purposes of uncertainty analysis, MC sampling is carried out using the Synchronous Data Flow (SDF) MoC, and the temporal execution of the middleware communication model uses a Discrete-Event (DE) submodel within SDF.

Being an actor-oriented framework, Ptolemy II also presents clear semantics for defining cross-cutting concerns of behavioral models through the use of aspects. In the context of smart grid middleware modeling, the network fabric and middleware models are examples of communication aspects that characterize aggregation, queuing, and delaying of events within the behavioral model, which consists of an interconnection of sensors, intermediate storage components, and computation nodes. Our modeling effort follows this design pattern.

In aspect-oriented Ptolemy models, when a connection between two ports needs to be delegated to an intermediate communication aspect, the input port of the destination actor is annotated with a parameter reference to the relevant CommunicationAspect component. Figure 6 demonstrates the use of communication aspects. The details of the aspect-oriented communication models are discussed in [17].

B. Modeling System Architecture

We first model the top-level system architecture and proceed to develop a hierarchical model that concentrates on quantifying the uncertainty in middleware coordination.

1) Top Level Model: In Ptolemy II, high-throughput devices communicating over the network with packets can be abstracted as DE components communicating via time-stamped events, where each network packet is represented by a discrete event. DE execution is based on events composed of a tag-value pair, representing the time stamp and the payload of a token. The DE scheduler guarantees that events are processed in time stamp order. Within one hierarchical level of a DE model, all actors share a global notion of time, namely the model time. This imposes a global ordering of events within the model and therefore ensures determinacy. The modeling details of the domain-specific system entities are presented below.
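The tag-value event discipline can be illustrated with a minimal sketch (illustrative names, not Ptolemy II API): a priority queue keyed by the tag guarantees that pending events are always dequeued in time-stamp order, mirroring the DE scheduler's contract:

import java.util.PriorityQueue;

// Minimal DE-style event queue: events are (tag, value) pairs and are
// always processed in nondecreasing time-stamp order.
public class DiscreteEventQueue {
    record Event(double tag, Object value) {}

    private final PriorityQueue<Event> pending =
            new PriorityQueue<>((a, b) -> Double.compare(a.tag(), b.tag()));

    public void post(double timeStamp, Object payload) {
        pending.add(new Event(timeStamp, payload));
    }

    // Advances model time to the earliest pending event; null when idle.
    public Event next() {
        return pending.poll();
    }
}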

PMUs. Phasor Measurement Units are the main sources of phasor data, abstracted as DE events in the Ptolemy execution. We collectively model the PMUs residing within a local area in the grid as a cluster and denote this component as PMUCluster_i, where i refers to the area number. This actor is parameterized by the PMUCount parameter and generates the corresponding number of events every iteration interval (parameterized by the PMUPeriod parameter). PMUCount is one of the inputs to the executable model in the outer hierarchy, sampled from a user-defined prior distribution.

Phasor Data Concentrators (PDCs). A PDC is responsible for receiving data from multiple PMUs and producing an aggregated data packet for each PMU at an application-specific rate. We only model the relaying function of the PDC, where it produces data packets for each PMU and processes the packets in a FIFO pattern.

Distributed Power Application Workflow. Each distributed power application is locally run on a computation cluster and is expected to consist of multiple iterations, interleaved with peer-to-peer communications with the coordinating areas. We use a generic computation node model that accepts SCADA and PDC packets and produces intermediate packets after each algorithm iteration. The execution time for each iteration is characterized by a random variable that allows the user to associate a platform-dependent stochastic profile with the algorithm execution time at each iteration.

The top-level model is illustrated in Figure 6. In this architecture, the PMUs of each area produce data at 30 samples per second, transmitted to the local PDC via a communication fabric named PMULink. Data streams are relayed at the PDC and sent to the local computation node of each area to be utilized in the periodic power applications.

Network Fabric. We consider the supporting network fabric (both PMULink and LocalNet) to follow the link specifications of the North American SynchroPhasor Initiative (NASPI) [https://www.naspi.org]. In this case, each PMU is connected to the local PDC by a T1 line. The packet size of each PMU message is assumed to be 128 bytes, in compliance with the data format of the IEEE C37.118 standard [18]. Each of these links is chosen to be 50 miles long, representing an average physical distance from a bus to the nearest PDC. We assume each PDC is placed 300 miles away from the local area, and the middleware component is symmetrically located at a maximum distance of 350 miles from the PDCs. An in-depth characterization of the network links is given in [12].

Fig. 6: Top-Level Distributed Smart Grid Communication Model with Communication and Middleware Aspects (PMUCluster1-3 connect to PDC1-3 through the PMULink aspect; downstream connections are annotated with the LocalNet, MW, and MWNetwork aspects)

2) Middleware Model for Centralized Aggregation: We model middleware components and their interactions with other system components in the entire data flow of centralized aggregation. Figure 6 implements a middleware aspect, MW, which receives intermediate results from the three areas, as well as from the PDCs, and performs the following tasks:

Fig. 7: Top-Level Model of Middleware Aggregation in MW

Aggregation. The middleware is responsible for receiving packets from the PDCs of all distributed areas and aggregating the packets into a universal index file. The functional model of the top-level middleware aggregator is given in Figure 7. PMU streams from each area are first taken into a local aggregator (named Aggregator in the Ptolemy model), where they are processed on a per-file basis. A synchronization unit, called CombinedIndexFile, then aggregates the streams from all areas into one global output packet.

The local aggregator model is given in Figure 8, where each received PMU stream is randomly assigned to one of the process threads, which internally impose a delay on the packet. The MiddlewareQueue component is designed as a thread pool, where the per-packet delay is modeled following benchmarks that will be studied in Section III-D. The outputs of all instances are then merged to yield a stream containing all the processed PMU streams for this particular area.

Convergence Control. Another important role of the middleware simulation is to determine whether a distributed power application has reached a convergence state. For distributed state estimation and similar iterative power applications, the number of iterations until convergence relies on several parameters such as PMU data redundancy and quality. To be comprehensive, we assume a variable number of iterations, ranging over the discrete set {1, 2, ..., 20}, until convergence has been declared.
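A minimal sketch of this behavior, with illustrative names: the iteration budget is drawn uniformly from {1, ..., 20}, and convergence is declared once the budget is exhausted:

import java.util.Random;

// Sketch of the simulated convergence control: the number of data
// exchange sessions until global convergence is drawn uniformly from
// {1, 2, ..., 20}, per the assumption stated above.
public class ConvergenceController {
    private final int iterationsUntilConvergence =
            1 + new Random().nextInt(20);
    private int completedIterations = 0;

    // Invoked after each peer-to-peer data exchange session; returns
    // true once global convergence is declared.
    public boolean recordIterationAndCheck() {
        completedIterations++;
        return completedIterations >= iterationsUntilConvergence;
    }
}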

C. Data-Exchange Only Middleware Model

We now elaborate on an alternative middleware scenario, in which centralized aggregation may require extensive resources and network fabric, making it infeasible to couple streams from hundreds of sensors at a centralized node due to geographical and logical constraints. The data-exchange-only middleware enables each distributed node to share streams without the aggregate-dispatch behavior. Figure 9 demonstrates the Ptolemy model for such a middleware functionality.

For the data-exchange-only middleware, the task of the middleware architecture reduces to routing each data stream to its respective destination. For the distributed state estimation case study, it is assumed that each BA has access to PMU data from all PMUs (i) within its local substation and (ii) from the neighboring area(s). According to this topology, BA1 and BA3 both receive data from PDC2 in addition to the PDCs within their own areas. BA2, on the other hand, is a neighbor of both BA1 and BA3; therefore, the middleware must route PMU streams from all areas to BA2, as sketched below.
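The routing table below restates this area-neighbor relation in Java; the class and map names are illustrative, not part of the model:

import java.util.List;
import java.util.Map;

// Data-exchange-only routing for the three-area topology: each BA
// receives the PDC streams of its own area and of its neighbor(s).
public class ExchangeOnlyRouting {
    static final Map<String, List<String>> ROUTES = Map.of(
            "BA1", List.of("PDC1", "PDC2"),
            "BA2", List.of("PDC1", "PDC2", "PDC3"),
            "BA3", List.of("PDC2", "PDC3"));
}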

Given the simplified routing role of the middleware, whose delays are captured as part of the LocalNet component, the local aggregation is performed at a local middleware, delegated separately to each distributed computation node.

A prototype model utilizing a local middleware is provided in Figure 9, as a refinement of MW1. For Area 1, the middleware is responsible for delivering data streams from PDCs 1 and 2, and does not receive any data streams from PDC3. The workload on the centralized middleware is therefore reduced, at the expense of increased data redundancy over the network.

Fig. 9: Distributed Data-Exchange-Only Middleware Model (balancing authorities BA1-BA3 with local middleware components)

D. Model Calibration

For calibrating the model to accurately represent the characteristics of packet processing times in the middleware layer, we use data obtained from benchmarks carried out on the ActiveMQ message broker that implements the aggregation and queuing behavior of the middleware. The trend for the cumulative completion times obtained on an ActiveMQ queue instance is presented in Figure 10. The pattern suggests that increasing the level of parallelism at the queue processing level is essential to improve the end-to-end latency, due to the long-tailed behavior of the process latency. The benchmarks are not only important for obtaining an average completion time, but are also essential to characterize the middleware latency distribution.

Fig. 8: The Local Aggregator Model (a DE submodel in which a ThreadPoolSwitch randomly assigns each arriving PMU file to one of several Server instances with Rician service times, and counters detect when all PMUCount files of an area have been processed)

Fig. 10: Apache ActiveMQ Benchmark Results on Average Completion Times for Concurrent Packet Arrivals

Following the distributions for the benchmarked number of sensor streams per area, we carry out a regression analysis on the time series obtained by the benchmarks to yield a distribution fit for the middleware processing overhead. The maximum-likelihood distribution fit for the benchmarked data is given by a Rician distribution parameterized as follows:

    Ti ∼ Rice(ν, σ)
    ν = 0.0302 · log(N_PMUi) + 0.055        (1)
    σ = 0.0007 · N_PMUi + 0.0414

where Ti is a random variable that denotes the processing time in seconds of a PMU stream associated with Area i, Rice(·) denotes a Rician distribution with parameters ν and σ, and N_PMUi is the number of concurrent PMU streams delegated to Area i.

Figure 11 demonstrates a sample histogram of the distributions of per-file middleware processing times, obtained for 250 concurrent PMU files. The Rician distribution fit is then used to parameterize the random delay imposed by the local aggregator modeled in Figure 8, on a per-event basis.
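As an illustration, the following sketch draws per-file delays from the fit in (1); the class name is hypothetical, and we assume log in (1) is the natural logarithm. It uses the standard construction of a Rician variate as the magnitude of a two-dimensional Gaussian:

import java.util.Random;

// Samples a per-file middleware processing delay from the Rician fit (1).
public class RicianDelayModel {
    private final Random rng = new Random();

    // Processing time in seconds for one PMU file in an area with
    // nPmu concurrent PMU streams.
    public double sampleDelay(int nPmu) {
        double nu = 0.0302 * Math.log(nPmu) + 0.055;
        double sigma = 0.0007 * nPmu + 0.0414;
        // If X ~ N(nu, sigma^2) and Y ~ N(0, sigma^2), then
        // sqrt(X^2 + Y^2) ~ Rice(nu, sigma).
        double x = nu + sigma * rng.nextGaussian();
        double y = sigma * rng.nextGaussian();
        return Math.hypot(x, y);
    }
}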

Fig. 11: Histograms of benchmarked [red] and simulated [blue] per-file middleware delay, with the Rician distribution fit (horizontal axis: time (s); vertical axis: frequency)

To further simulate the concurrent queue architecture, we make use of the Ptolemy actor called MultiInstanceComposite, which is capable of generating a user-defined number of copies of a component and simulating the instances concurrently. The local aggregator model allows each received PMU stream to be randomly assigned to one of the processing unit instances within a thread pool, each simulating a server with stochastic latency that follows (1). The outputs of all instances are then merged to yield a stream containing all the processed PMU streams for this particular area.


E. Monte Carlo Sampling

Smart grid topologies are expected to be highly variable in the number of PMUs per area and in the inter-area network characteristics. Middleware needs to be adaptive and flexible to support numerous scenarios of data volume and network characteristics. Monte Carlo simulation integrated with the distribution system topology is a useful tool to evaluate the effect of selected model parameters on middleware performance. Figure 12 displays the top-level Ptolemy model that is used to generate Monte Carlo samples of the selected model parameters, characterized in detail in Table I. The PowerGrid component in Figure 12 is the top-level power grid to be simulated and is given in Figure 6. Here, the parameters are assumed to be uniformly distributed over a finite set of integer values to avoid any bias towards a particular PMU distribution. It should be noted, however, that the number of iterations may follow a Gaussian-like distribution in practical applications. As explained in Section II, calibrating the distribution of the iteration count would require significant engineering effort. For simplicity, we also assume a uniform distribution for the number of iterations.

Fig. 12: Monte Carlo sampling in Ptolemy II (DiscreteRandomSource and Scale actors generate the PMUCount1-3, concurrencyLevel, and numberOfIterations samples that parameterize the PowerGrid submodel)

For a power system partitioned into three subareas, we are interested in the parameter samples given by the 5-tuple <PMUCount1, PMUCount2, PMUCount3, concurrencyLevel, numberOfIterations>. The model is executed for 6000 seconds in model time for each parameter sample, roughly corresponding to 100 complete cycles of the application. In the specific case of the distributed state estimation application, the cycle deadline is set to 60 seconds, which is equal to the period. For each 5-tuple that parameterizes the model, maximum and average end-to-end run times are recorded over the model execution.
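A minimal Java sketch of this sampling step, assuming the uniform probability mass functions of Table I (the Sample record and class names are illustrative, not part of the Ptolemy model):

import java.util.Random;

// Draws 5-tuples uniformly from the discrete ranges of Table I
// (range format: initial:increment:final).
public class ParameterSampler {
    record Sample(int pmuCount1, int pmuCount2, int pmuCount3,
                  int concurrencyLevel, int numberOfIterations) {}

    private final Random rng = new Random();

    // Uniform draw from {initial, initial + increment, ..., last}.
    private int uniform(int initial, int increment, int last) {
        int n = (last - initial) / increment + 1;
        return initial + increment * rng.nextInt(n);
    }

    public Sample next() {
        return new Sample(
                uniform(10, 10, 500),  // PMUCount1
                uniform(10, 10, 500),  // PMUCount2
                uniform(10, 10, 500),  // PMUCount3
                uniform(2, 2, 20),     // concurrencyLevel
                uniform(1, 1, 20));    // numberOfIterations
    }
}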

IV. UNCERTAINTY ANALYSIS

A. Influence of Concurrency Level and Number of PMU Streams on End-to-End Runtime

Following data collection using Monte Carlo methods, we proceed to polynomial regression analyses to account for the effect and significance of the model parameters on the maximum end-to-end delay.

TABLE I: Monte Carlo Variables and Respective Probability Mass Functions (range format: initial:increment:final)

Parameter Name       PMF      Range
PMUCount1            Uniform  10:10:500
PMUCount2            Uniform  10:10:500
PMUCount3            Uniform  10:10:500
concurrencyLevel     Uniform  2:2:20
numberOfIterations   Uniform  1:1:20

TABLE II: Variables for Regression Analysis

Variable  Explanation
x1        concurrency level
x2        max{PMUCount1, PMUCount2, PMUCount3}
x3        number of iterations
y         maximum end-to-end runtime

We define the variables used in the regression analysis in Table II.

The initial question addressed is the influence of the middleware concurrency level and the maximum number of PMUs per area on the end-to-end delay. The polynomial curve-fitting expression is given by (2), using x1 and x2 as independent variables and y as the dependent variable. This method reveals that a bivariate polynomial regression equation that is cubic in x1 and quadratic in x2 is the best-fitting model in the studied set of polynomial fits. The best fit is evaluated in terms of the highest Bayesian Information Criterion (BIC) and the lowest sum-of-squares error (SSE) among the set of fits.

    f(x1, x2) = p00 + p10·x1 + p01·x2 + p20·x1² + p11·x1·x2 + p02·x2²
              + p30·x1³ + p21·x1²·x2 + p12·x1·x2²        (2)

Table III presents the maximum-likelihood estimates of the polynomial coefficients for the regression fit, together with the confidence intervals for the coefficients. The p-value, which indicates the false alarm probability for the variable being estimated, is less than 10⁻⁶ for all the variables taken into account. This provides high confidence that the regression model is accurate and avoids overfitting.

The goodness-of-fit metrics presented in Table IV further confirm that the variables chosen for the Monte Carlo simulation sufficiently relate to the trend in the runtime distribution. The R² value indicates that the independent variables x1 and x2 explain 96.8% of the observed data. Moreover, the fit yields the highest BIC among all bivariate fits constrained to have at most third-order dependence on each parameter. Since AIC and BIC are goodness-of-fit metrics that establish a trade-off between model accuracy and complexity, maximizing these while minimizing the fit error of the model helps in avoiding parameter overfitting.

The regression surface that best fits the simulated outputs to the input data set is shown in Figure 13 with the 95% confidence bounds. A concurrency level of at least 10 is necessary for the middleware to scale in handling an increasing number of PMUs. The response surface has a linear trend as the number of PMUs increases, but remains planar as the concurrency level increases.


TABLE III: Polynomial Regression Coefficient Estimates

Coefficient  Estimate   Confidence Interval
Intercept    18.32      [14.73, 21.93]
x1           -6.9699    [-7.47, -6.46]
x2           0.14869    [0.13, 0.17]
x1²          0.73883    [0.70, 0.78]
x1x2         -0.01544   [-0.02, -0.01]
x1³          -0.02236   [-0.02, -0.02]

TABLE IV: Goodness-of-fit statistics for polynomial regression analysis

SSE    R²     RMSE   AIC    BIC
2519   0.968  1.784  3209   3251

This indicates that further increasing the concurrency level would have marginal benefit.
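For a quick what-if check, the fitted surface (2) can be evaluated directly with the Table III estimates; in this hypothetical helper, terms absent from Table III are taken as zero:

// Hypothetical helper (not from the paper): evaluates the fitted
// response surface (2) with the Table III estimates. x1 is the
// concurrency level, x2 the maximum PMU count per area; the predicted
// maximum end-to-end runtime is in seconds.
public class RuntimeSurface {
    public static double predictMaxRuntime(double x1, double x2) {
        return 18.32                      // intercept
                - 6.9699 * x1             // x1
                + 0.14869 * x2            // x2
                + 0.73883 * x1 * x1       // x1^2
                - 0.01544 * x1 * x2       // x1*x2
                - 0.02236 * x1 * x1 * x1; // x1^3
    }
}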

Fig. 13: Polynomial Regression Analysis of Significant Monte Carlo Variables (axes: concurrency level, max PMU count; vertical axis: max runtime (s))

B. Influence of Number of Iterations on End-to-End Runtime

An additional polynomial regression analysis includes the number of iterations (x3) as a third candidate explanatory variable. The notation follows from the previous analysis.

The best fit is selected by evaluating the Akaike Information Criterion (AIC) and BIC on a set of polynomial fits up to order 3 in each explanatory variable. AIC favors the model of order {3, 2, 2} and BIC favors the model of order {3, 2, 3}, respectively, in {x1, x2, x3}. The AIC-optimal polynomial fit is chosen, and the results are presented in Table V. The significant coefficient estimates are given, and the higher-order parameters with confidence intervals centered around zero are omitted from the result.

Comparison of Table IV and Table VI reveals that incorporating x3 into the regression analysis yields little improvement in the overall fit. Moreover, the coefficient estimates given in Table V that depend on x3 are either numerically insignificant or have confidence bounds that contain 0, suggesting that these coefficients most likely do not express a significant explanatory relation to the observed variable y. This result provides further confidence that the number of iterations is a less significant variable influencing the maximum run time

TABLE V: Extended Polynomial Regression Coefficient Estimates

Coefficient  Estimate  Confidence Interval
x1           12.2991   [7.82, 16.7761]
x2           -6.9866   [-7.49, -6.4822]
x3           0.1743    [0.15, 0.1997]
x1²          0.6242    [0.21, 1.0379]
x1x2         0.7524    [0.72, 0.7884]
x2²          -0.0162   [-0.02, -0.0145]
x1x3         0.0001    [0.00, 0.0001]
x2x3         -0.0065   [-0.03, 0.0185]
x3²          -0.0019   [-0.00, -0.0003]
x1³          -0.0091   [-0.02, 0.0064]
x1²x2        -0.0225   [-0.02, -0.0214]

TABLE VI: Goodness-of-fit statistics for extended polynomial regression analysis

SSE    R²     RMSE  AIC    BIC
2093   0.974  1.63  3079   3163

of the distributed state estimation algorithm, compared to the concurrency level and the maximum number of PMU streams per area.

To explain the insignificance of the number of iterations in the analysis, we refer to the system model presented in Section III-B. As only intermediate estimate data on tie-line buses are exchanged, the data communication load is only approximately 65 KB per iteration. Even for a larger-scale power system such as the Western Electricity Coordinating Council (WECC), with more than 1300 buses and 6700 loads, the data exchange between neighboring areas would remain a relatively small overhead and would contribute little to the end-to-end delay.

Fig. 14: Effect of Iterations and Number of PMU Streams on End-to-End Runtime (axes: number of PMUs, number of iterations; vertical axis: max end-to-end runtime (s))


C. Evaluation of the Influence of Middleware Architecture on Temporal and Resource Constraints

The preceding uncertainty analysis considers a centralized middleware architecture, as opposed to a data-exchange-only middleware that enables reduced data transfer over the network layer at the expense of requiring additional resources at the distributed computation end.

We first address the temporal effects of adopting a centralized middleware architecture that performs aggregate-and-dispatch versus a data-exchange-only middleware, which requires distributed middleware components to aggregate data locally at each node. Figure 15 presents a comparison of end-to-end delay under the two alternative middleware architectures as a function of worst-case middleware capacity, obtained for a PMU distribution of {200, 100, 100} over the three areas. Note that the distributed middleware provides marginally lower mean and worst-case latency, due to the avoided expense of central aggregation. As middleware capacity is increased beyond the point where any queuing occurs at the middleware level (for thread pool sizes greater than 40), the two architectures are observed to exhibit comparable latency behavior, as the overhead of data aggregation becomes comparable to the noise level in the per-packet latency.

Fig. 15: Comparison of Middleware Latency vs. Middleware Architecture and Concurrency

We would also like to investigate the data volume imposed by these two middleware architectures on the network fabric. Since smart grid applications will likely publish data at high rates and increased resolution, the network will soon become a scarce resource, and data-level optimizations will gain importance. Note that the distributed middleware only requires transfer of the PMU files from PDCs to the local middleware, after which the per-area state estimation can be performed in a standalone fashion. In the case of centralized middleware, on the other hand, PDC-to-MW communication needs to be followed by MW-to-BA communication, as well as peer-to-peer BA-to-BA intermediate communication until convergence.

Figure 16 demonstrates the data volume exchanged under the two aforementioned middleware topologies, as a function of the number of partitions managed by the middleware. It is important to note that, for a coarse partitioning of the power grid, where a large number of PMU streams are delegated to a single area for reduced inter-area communication, a centralized aggregate-and-dispatch architecture requires a lower total data volume to be transferred over the network fabric, despite the intermediate data needed until algorithm convergence. The exchange-only distributed middleware architecture requires high volumes of network traffic, since each area contains a large number of PMU files that need to be redundantly shared with neighboring areas. As the grid becomes finely partitioned, i.e., as the number of grid areas is increased, a distributed topology becomes more advantageous, due to the intermediate data exchange overhead that dominates the data volume in the centralized case.

Figures 15 and 16 provide a well-defined basis for our decision to utilize the centralized middleware architecture for the uncertainty analysis presented in Section IV.

V. RELATED WORK

The power industry has been evolving to incorporate electric distribution systems with redundant communication networks, sensor nodes, and controllers to form a data-intensive framework referred to as the "smart grid". The role of middleware in this service-oriented context, which assembles heterogeneous services, becomes prominent, due to the data-oriented services that are provisioned to grow exponentially in number [7]. Surveys on middleware design and architectures [7], [8] present the architecture patterns, principles, and existing middleware tools tailored for transmission, distribution, and control services. Quantitative evaluation of middleware solutions in the smart grid, however, still demands additional consideration to facilitate the architecture design of distributed power applications.

In our approach, we model the entire end-to-end data flow of a distributed power application, including sensor streams, network, middleware components, and distributed power applications. This model-based approach analyzes the effects of uncertain parameters on end-to-end delays. In our case, such parameters are the number of iterations of distributed state estimation, the number of PMU devices, the data quality of sensor streams, and the middleware concurrency level.

Current model-based evaluation methods use various sets of probabilistic models. Markov chains, Petri nets, queuing networks, finite state automata, stochastic processes, dependency graphs, and fault trees are widely applied models for evaluating the quality attributes of reliability, performance, safety, and energy consumption. A comprehensive survey on the topic is presented in [19]. These models are mathematically complex, and in general it is an involved task to derive quality metrics as a function of input distributions. In our approach, we use MC simulations to account for parametric uncertainty in the middleware architecture, where the MC simulation is natively composed with the executable Discrete-Event power application model. Our contribution in this work is to present a highly reconfigurable simulation framework for the evaluation of middleware-centric distributed cyber-physical applications and to account for the feasibility of different middleware architectures for a family of distributed state estimation applications to be deployed in the future power grid.

VI. CONCLUSION

In this paper, a model-based parametric approach for the uncertainty analysis of middleware-centric smart grid applications is presented. The discrete-event models encompass the entire data flow from sensors to application nodes, processed through network and middleware fabrics. Using the Ptolemy II framework, we created an integrated design environment that supports Monte Carlo sampling of model parameters and subsequent execution of the simulation model using the generated parameter set. We additionally carry out regression analysis to discover significant model parameters and evaluate their degree of effect on the end-to-end runtime. The results show that, across the two middleware architectures, the maximum number of PMU streams per area and the middleware concurrency level directly account for the maximum runtime of the end-to-end process. Data exchange caused by iterations until convergence is a less effective parameter in determining the empirical worst-case runtime, given that only intermediate data are exchanged. This indicates that power applications can benefit from scalable middleware architectures to support monitoring and control applications with temporal requirements. It is key to communicate these results to power engineers to help calibrate the scale and configuration of distributed power applications for the future power grid, and to determine additional scalability analyses to investigate using a model-based design approach.

Fig. 16: Comparison of Data Volume Exchange Overhead for Two Middleware Architectures

ACKNOWLEDGMENT

This work was supported in part by the TerraSwarm Research Center, one of six centers supported by the STARnet phase of the Focus Center Research Program (FCRP), a Semiconductor Research Corporation program sponsored by MARCO and DARPA.

REFERENCES

[1] Weiqing Jiang, V. Vittal, and G.T. Heydt. A distributed state estimator utilizing synchronized phasor measurements. Power Systems, IEEE Transactions on, 22(2):563–571, May 2007.

[2] G.N. Korres. A distributed multiarea state estimation. Power Systems, IEEE Transactions on, 26(1):73–84, 2011.

[3] A. Gomez-Exposito, A. Abur, A. de la Villa Jaen, and C. Gomez-Quiles. A multilevel state estimation paradigm for smart grids. Proceedings of the IEEE, 99(6):952–976, 2011.

[4] Yan Liu, Jared M. Chase, and Ian Gorton. A service oriented architecture for exploring high performance distributed power models. In Chengfei Liu, Heiko Ludwig, Farouk Toumani, and Qi Yu, editors, Service-Oriented Computing, volume 7636 of Lecture Notes in Computer Science, pages 748–762. Springer Berlin Heidelberg, 2012.

[5] Yousu Chen, Zhenyu Huang, Yan Liu, Mark J. Rice, and Shuangshuang Jin. Computational challenges for power system operation. In Proceedings of the 2012 45th Hawaii International Conference on System Sciences, HICSS '12, pages 2141–2150, Washington, DC, USA, 2012. IEEE Computer Society.

[6] NIST. Smart grid conceptual model, http://smartgrid.ieee.org/ieee-smart-grid/smart-grid-conceptual-model.

[7] Liang Zhou and J.J.P.C. Rodrigues. Service-oriented middleware for smart grid: Principle, infrastructure, and application. Communications Magazine, IEEE, 51(1):84–89, January 2013.

[8] Jose-Fernan Martinez, Jesus Rodriguez-Molina, Pedro Castillejo, and Ruben de Diego. Middleware architectures for the smart grid: Survey and challenges in the foreseeable future. Energies, 6(7):3593–3621, July 2013.

[9] Ilge Akkaya, Yan Liu, Edward A. Lee, and Ian Gorton. Modeling uncertainty for middleware-based streaming power grid applications. In Proceedings of the 8th Workshop on Middleware for Next Generation Internet Computing, MW4NextGen '13, pages 4:1–4:6, New York, NY, USA, 2013. ACM.

[10] I. Gorton, A. Wynne, Yan Liu, and Jian Yin. Components in the pipeline. Software, IEEE, 28(3):34–40, May-June 2011.

[11] Ian Gorton, Yan Liu, Carina Lansing, Todd Elsethagen, and Kerstin Kleese van Dam. Build less code deliver more science: An experience report on composing scientific environments using component-based and commodity software platforms. In Proceedings of the 16th International ACM Sigsoft Symposium on Component-based Software Engineering, CBSE '13, pages 159–168, New York, NY, USA, 2013. ACM.

[12] I. Akkaya, Y. Liu, and I. Gorton. Modeling and analysis of middleware design for streaming power grid applications. In Proceedings of the Industrial Track of the 13th ACM/IFIP/USENIX International Middleware Conference, MIDDLEWARE '12, pages 1:1–1:6, New York, NY, USA, 2012. ACM.

[13] Claudius Ptolemaeus, editor. System Design, Modeling and Simulation using Ptolemy II. Ptolemy.org, 2014.

[14] P. Derler, E.A. Lee, and A.S. Vincentelli. Modeling cyber-physical systems. Proceedings of the IEEE, 100(1):13–28, Jan. 2012.

[15] Jeff C. Jensen, Danica Chang, and Edward A. Lee. A model-based design methodology for cyber-physical systems. In Wireless Communications and Mobile Computing Conference (IWCMC), 2011 7th International, pages 1666–1671, July 2011.

[16] Edward A. Lee. CPS foundations. In Proc. of the 47th Design Automation Conference (DAC), pages 737–742. ACM, June 2010.

[17] Ilge Akkaya, Yan Liu, and Edward A. Lee. Modeling and simulation of network aspects for distributed cyber-physical energy systems. In Cyber Physical Systems Approach to Smart Electric Power Grid, pages 1–23. Springer Berlin Heidelberg, 2015.

[18] IEEE standard for synchrophasor measurements for power systems. IEEE Std C37.118.1-2011 (Revision of IEEE Std C37.118-2005), pages 1–61, Dec. 2011.

[19] A. Aleti, B. Buhnova, L. Grunske, A. Koziolek, and I. Meedeniya. Software architecture optimization methods: A systematic literature review. Software Engineering, IEEE Transactions on, PP(99):1, 2012.