A Framework for Reducing the Modeling and Simulation Complexity of Cyberphysical Systems
Nikolaos Zompakis and Kostas Siozios
School of Electrical and Computer Engineering, National Technical University of Athens, Greece
Email: {nzompakis, ksiop}@microlab.ntua.gr
Abstract—As systems continue to evolve, they rely less on human decision-making and more on computational intelligence. This trend, in conjunction with the available technologies for advanced sensing, measurement, process control, and communication, leads us towards the new field of Cyber-Physical Systems (CPS). Although these systems exhibit remarkable characteristics, the complexity imposed by their numerous components and services makes their design extremely difficult. This paper proposes a software-supported framework for reducing the design complexity of both the modeling and the simulation of CPS. For this purpose, a novel technique based on system scenarios is applied. Evaluation results prove the effectiveness of the introduced framework, as we achieve a notable reduction of the modeling and simulation complexity with a controllable overhead in accuracy.
I. INTRODUCTION
The wide reach of the Internet along with rapid advances
in miniaturization, speed, power, and mobility have led to
the pervasive use of networking and information technologies
across all economic sectors. Increasingly, these technologies
are combined with elements of the physical world (e.g.,
machines, devices, structures) to create smart systems. The
challenge of integrating embedded computing and physical
processes with feedback loops, where physical processes affect
computations and vice versa, has been recognized for some
time, since it allows physical devices to operate in changing
environments [1].
Tightly coupled cyber and physical systems that exhibit
this level of integrated intelligence are referred to as Cyber-
Physical System (CPS). With their unique functionalities,
CPS has the potential to change every aspect of life. Un-
paralleled analytical capabilities, real-time networked infor-
mation, and pervasive sensing, actuating, and computation
are creating powerful opportunities for systems integration,
enabling among others CPS to execute extraordinary tasks that
are barely imagined otherwise. Today, CPS can be found in
domains as diverse as automotive, energy, healthcare,
manufacturing, communications, etc.
Analyzing the main components of a CPS, we distinguish
three basic implementation layers. In a top-down approach,
the first layer provides all the necessary mechanisms for
supporting decision making, the second one refers to the
connectivity (network) among the CPS components, whereas
the last layer deals with the tasks of monitoring and actuation.
From a functionality perspective, the decision-making layer
realizes the central control, the monitoring and actuation layer
implements the system observation and defines the system's
actions and reactions, while the network is the interface
between the previous two.
In order to take advantage of the competitive features of
CPS, it is necessary to incorporate high-confidence
mechanisms that are able to interact with humans
and the physical world in dynamic environments and under
unforeseen conditions. A key role among these techniques is
played by the system's modeling and co-simulation, which
typically integrates various models of computation and communication.
A methodology that integrates both discrete and continu-
ous time models at different layers of abstraction (SystemC
and Simulink) is discussed in [2]. A similar framework is
discussed in [3], where the co-simulation framework inte-
grates VDM++ and 20-sim. Although these approaches exhibit
remarkable results, they are not directly applicable to the
design of CPS. In [4], a methodology of virtual prototyping
is proposed which combines SystemC, QEMU, and Open
Dynamics Engine to achieve a holistic design view. The True-
Time toolbox, which considers timing aspects introduced by
computation and communication, has been proposed and used
in MATLAB/Simulink environment to enable CPS simulation
[5]. However, this toolbox can neither integrate hardware
models nor support different abstraction levels. A co-simulation
framework that deals with the joint design of software in C,
hardware in HDL and mechanical components in MATLAB
is discussed in [6]. Although this framework tackles the three
main aspects of a CPS, it exhibits a limitation regarding the
efficient design of the cyber part of the system. Finally, Hardware-
in-the-Loop (HIL) simulators are also proposed [7], which
enable co-simulation between software and hardware modules.
The preceding analysis indicates that plenty of
simulators and design environments exist at different layers of
abstraction. However, none of them is directly applicable
to CPS, as they cannot support cross-domain concepts for
architecture, communication and compatibility. Specifically,
existing approaches can be classified into two complementary
directions: (i) cyberizing the physical systems means to endow
physical subsystems with cyber-like abstractions and interfaces
(wrap software abstractions around physical subsystems); and
(ii) physicalizing the cyber means to endow software and
network components with abstractions and interfaces that
represent their dynamics in time.
Throughout this paper we introduce a framework to pro-
vide a holistic solution to the modeling and simulation of
978-1-4673-7311-1/15/$31.00 ©2015 IEEE
3rd Workshop on Virtual Prototyping of Parallel and Embedded Systems (ViPES 2015)
CPS. The competitive advantage of this approach lies in its
considerably reduced complexity compared to existing
approaches, with almost similar accuracy. Additionally, the
proposed framework employs a virtualization methodology
that enables part(s) of the physical components to be replaced
with their equivalent software models without disrupting
the overall system's functionality. Although this approach
is related to the HiL technique discussed previously, the
introduced solution is also applicable for system debugging,
since it can isolate specific parts of the CPS.
For demonstration purposes, the introduced framework is
applied to model a large-scale WiMAX network. Such a
network can be thought of as a CPS, since it includes sensors
(e.g. for detecting signal power, noise, etc.), actuators (e.g.
applying a different coding scheme depending on the desired
Quality-of-Service), and computation mechanisms (e.g. for
determining the optimum coding scheme). Experimental results
prove the effectiveness of the introduced solution, as we
reduce the complexity of the system's modeling with a
controllable overhead in accuracy, as compared to static and
full-custom implementations.
The rest of the paper is organized as follows: Section II
describes the proposed framework, while the technique for
reducing the complexity of the system's modeling and simulation
is discussed in Section III. Experimental results that evaluate
the efficiency of the introduced solution, as compared to
existing approaches, are presented in Section IV. Finally, Section
V concludes the paper.
II. PROPOSED FRAMEWORK
This section introduces the proposed methodology (depicted
schematically in Fig. 1) for supporting the efficient modeling
and simulation of cyberphysical systems. The introduced
methodology, which is a complete in-the-loop flow, consists of
several chronological steps and supports system testing both
at component and at system level. As depicted in Fig. 1, on
the left side of the V cycle we initially perform a
Model-in-the-Loop (MiL) simulation, followed by a
Software-in-the-Loop (SiL) simulation. Then, on the right
side of the V cycle, Hardware-in-the-Loop (HiL) simulation
is applied.
Specifically, the MiL simulation consists of combining the
physical plant model (including mechanical, electrical, ther-
mal, etc., effects) with the control strategy at the algorithmic
level. This creates a complete CPS model that can be simulated
as a whole to find out whether the behavior of the CPS is correct.
Once the MiL simulation verifies the accuracy of our model,
we proceed with its implementation at the software level. The
realization of the control model can be done either manually
(i.e. by writing C code) or automatically through code generation
tools. Initial tests can be created at this level, which should be
reusable in the subsequent phases. The next step involves the
validation of our implementation with SiL simulations. For
this purpose, the source code is simulated with the same physi-
cal plant model used in the MiL phase. Although different tests
can be applied, it is usual to reuse the same test for assuring
[Fig. 1 shows a V cycle: Requirements, System Design, Component Design and Software Design on the descending branch; Component Test, System Test, Calibration and Integration on the ascending branch; with MiL, SiL and HiL simulations attached at the corresponding levels.]
Fig. 1. The proposed methodology.
the equivalence of results (back-to-back testing). Finally, the
last phase involves the HiL simulation, where the real plant is
added in the loop (instead of a conventional simulation, where
only models are employed). Such a simulation is applicable
both to the development and to the testing of CPS. More specifically,
with HiL simulation the physical part of a machine (or system)
is replaced by a simulation model.
This is a valuable instrument for system designers, as it
enables among others the incremental system development
and debugging. Next subsection describes in more detail the
employed software tools that instantiate the proposed method-
ology.
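The back-to-back testing of the flow (reusing the same test vectors across the MiL and SiL levels) can be sketched as follows. The proportional control law and all names here are illustrative assumptions, not taken from the paper:

```python
# Back-to-back testing sketch: drive the algorithmic model (MiL stand-in)
# and its software implementation (SiL stand-in) with the same stimuli
# and check the outputs for equivalence.

def controller_model(error, gain=0.5):
    # Algorithmic-level control law used during MiL simulation (hypothetical).
    return gain * error

def controller_impl(error, gain=0.5):
    # Stand-in for the manually written or generated C realization.
    return gain * error

def back_to_back(stimuli, tol=1e-9):
    # Reuse the same test vector at both levels and compare outputs.
    return all(abs(controller_model(e) - controller_impl(e)) <= tol
               for e in stimuli)

print(back_to_back([0.0, 1.0, -2.5, 10.0]))  # True
```

In a real flow the two functions would live at different abstraction levels (Simulink model vs. compiled C), but the comparison logic is the same.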
A. Software Support
Even though there are plenty of simulation and design
tools that tackle software (SW) and hardware (HW) problems
individually, only a few approaches address the problems
arising in systems that tightly integrate SW and custom HW.
This is mainly due to the challenges related to system
integration that have to be addressed. Though limited,
there are approaches which promise to alleviate the integra-
tion problem in RTL simulation, emulation and prototyping
environments. However, these solutions are often too complex,
slow and expensive. Usually, it is the communication link
between the host computer and the prototyping HW that is the
main constraint.
In order to overcome this limitation, this section describes
the introduced framework that automates the methodology
discussed previously. This framework, depicted schematically
in Fig. 2, incorporates all the necessary toolflows to support
the MiL, SiL and HiL simulations for a CPS. Specifically, the
MiL and SiL simulations are performed based on the Simulink [8]
and OVP [9] suites, respectively. However, we have to note
that, apart from these tools, any other flow with similar features
can be employed for the scope of these simulations.
Such modularity is highly desirable because it enables
easier upgrades of the framework to support additional features.
This is an objective of utmost importance, especially for the CPS
domain, due to the continuous demand for newer and more flexi-
ble design suites. In addition, the introduced framework
provides a PC-based co-simulation technique, which trades off
[Fig. 2 contrasts conventional prototyping with the proposed SiL & HiL simulation (early, device-in-the-loop prototyping): a system model combining the CPS functionality with the control mechanisms (e.g. software executed onto ARM) is connected through the HotTalk API (device driver, SW stack) for co-simulation and co-debugging at run-time, with separate execution flows for the HiL and SiL simulations.]
Fig. 2. Proposed rapid virtual prototyping for designing CPSs control.
between speed (functional simulation) and accuracy (cycle-
accurate simulation), depending on designer requirements.
The increased simulation speed provided by OVPSim ensures
that complex systems can be modeled in a reasonable amount
of time (hundreds of millions of simulated instructions per
second). As the OVP models are pre-built, they support fully
functional simulation of a complete embedded system. Also,
since these models are binary-compatible with the simulated
HW, the developed software can be executed on the target
(final) system without any modifications, resulting in faster
software development. Another interesting feature provided
by the proposed framework is the close interaction between the SW
and HW teams during the CPS development phases, which is
enabled by adopting TLM-SystemC models. Among others,
such a feature ensures that, as new IPs are developed,
the HW design team is able to incrementally test them
by replacing a functionality of the employed SystemC/TLM
model with the equivalent HDL prototype mapped onto FPGA
boards. The connection between the virtual platform (OVP) and
the FPGA is established with the HotTalk API [7].
III. BALANCING THE SYSTEM'S COMPLEXITY WITH ACCURACY
This section describes the proposed instrument for balancing
the system's complexity with the accuracy of the derived solutions.
This is especially crucial for CPS platforms because these
systems usually impose the integration of numerous compo-
nents, such as sensors, actuators, control mechanisms, etc.
The main idea of our technique relies on clustering a number of
the system's operations into representative cases, called scenarios.
Thus, instead of modeling and simulating each operation
individually, we deal only with a few scenarios. This leads to
significantly smaller complexity, which in turn improves design
time and cost. Next, we describe these steps in more detail.
A. System Variability Analysis
An exploration of the system variability initializes the
context of the analysis. The sources of variation can be
adjustable or non-adjustable. Such a distinction leads to a
primary classification of the system's operations into run-
time situations (RTSs) and configurations. RTSs represent the
individual uncontrollable operation situations, while configu-
rations represent the driven system reactions. For this
classification to be appropriately defined, deep knowledge
of the targeted system's characteristics is presupposed.
B. Cost Optimality Exploration
The aim of this step is the optimal cost dimensioning of the
previously derived system operation situations and configura-
tions. For this purpose, a number of well-defined cost metrics
are employed. Since we focus on a multi-objective problem,
Pareto optimality [10] exploration is useful for handling the
cost validation, giving a set of optimal design trade-offs. This
leads to a Pareto surface of potentially exploitable control
points in the multi-dimensional cost space.
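A minimal sketch of such a Pareto-optimality exploration follows; the cost pairs are illustrative, and lower is assumed better in every dimension:

```python
def pareto_front(points):
    # Keep the non-dominated points: p is dominated if some other point q
    # is no worse in every cost dimension and differs in at least one.
    front = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (signal power, BER) cost pairs for five configurations.
costs = [(1, 5), (2, 4), (3, 3), (4, 4), (5, 1)]
print(pareto_front(costs))  # [(1, 5), (2, 4), (3, 3), (5, 1)]
```

Here (4, 4) is dominated by (3, 3) and drops out; the surviving points form the Pareto surface of design trade-offs.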
C. Classification
The efficient classification of the system's operation space is
the next key issue. Towards this direction, a number of opera-
tion scenarios are defined, which cover the whole system's
functionality [11] [12]. By grouping individual operations
based on cost similarity, it is feasible to balance the
number of scenarios against the resulting modeling accuracy.
From a modeling perspective, the operations of the same scenario
are represented by the same cost dimensioning and the same
system configuration. This leads to a notable decrease of
the modeling complexity, with a parallel increase of overesti-
mated modeling evaluations due to functional fluctuations within the
same scenario. In other words, a fundamental trade-off exists
between the modeling accuracy, which is inversely proportional to
the clustering overhead, and the modeling complexity, which is
proportional to the number of required modeled operation cases.
In order to clarify this, Fig. 3 plots three individual run-
time operation situations (RTSs) with corresponding Pareto
curves (RTS1, RTS2, RTS3) in a 2-dimensional cost space. The
outcome of the classification step is the cost representation of
the whole scenario by the worst Pareto curve (RTS1). The cost
characterization of the included RTSs by the worst estimation
case inevitably introduces a clustering overhead (light gray and
brown areas in Fig. 3). This overhead manifests every time
these RTSs occur at modeling time. Thus, the total modeling
accuracy is also inversely proportional to the frequency of the
RTS appearance.
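The grouping of RTSs into scenarios by cost similarity, with each scenario represented by its worst-case member, might be sketched as follows. The scalar costs and the threshold are illustrative assumptions; the paper clusters full Pareto curves rather than single numbers:

```python
def cluster_rtss(rts_costs, threshold):
    # Greedy clustering over costs sorted ascending: an RTS joins the
    # current scenario while it stays within `threshold` of that
    # scenario's cheapest member. The representative is the worst
    # (largest) cost, so cheaper members incur a clustering overhead.
    scenarios = []
    for name, cost in sorted(rts_costs.items(), key=lambda kv: kv[1]):
        if scenarios and cost - scenarios[-1]["members"][0][1] < threshold:
            scenarios[-1]["members"].append((name, cost))
            scenarios[-1]["rep"] = cost   # worst case seen so far
        else:
            scenarios.append({"rep": cost, "members": [(name, cost)]})
    return scenarios

rts = {"RTS1": 1.0, "RTS2": 1.1, "RTS3": 2.0}
print(cluster_rtss(rts, threshold=0.5))
# Two scenarios: {RTS1, RTS2} represented by cost 1.1, and {RTS3} by 2.0.
```

Raising the threshold yields fewer scenarios (lower modeling complexity) but a larger worst-case overestimation, which is exactly the trade-off described above.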
D. Control Scheduler
The final step is the run-time instantiation of a scheduling
mechanism, which detects the scenarios and triggers the suitable
modeling scheme. The detection is achieved through
monitoring the system's changes. The detection implementa-
tion cost is proportional to the complexity of mapping these
changes to scenarios, so heuristic approaches, which keep the
detection overhead at reasonable levels, have to be explored.
A typical implementation relies on decision trees, which are
graphs with nodes and edges that map the system's
variable values to scenarios. Each path of the graph concludes
in a scenario detection.
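Such a decision-tree detector reduces to nested tests on the monitored variables; the thresholds, variables and scenario names below are purely illustrative:

```python
def detect_scenario(noise_mw, bandwidth_mhz):
    # Each node tests one monitored system variable; each root-to-leaf
    # path of the tree concludes in a scenario detection.
    if noise_mw < 160:                      # low-noise branch
        return "S1" if bandwidth_mhz >= 5 else "S2"
    if noise_mw < 350:                      # medium-noise branch
        return "S3"
    return "S4"                             # high-noise branch

print(detect_scenario(120, 10))   # S1
print(detect_scenario(400, 10))   # S4
```

The depth of the tree (number of comparisons per detection) is what the heuristics above aim to keep small.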
Fig. 3. Example of clustering overhead.
IV. EVALUATION RESULTS
This section quantifies the efficiency of the introduced frame-
work in reducing the modeling and simulation complexity of
CPS through the previously mentioned scenario-based ap-
proach. For this purpose, we present the way that our method-
ology deals with these issues, comparing the results with two
boundary solutions: (i) a full-custom, and (ii) a static system
implementation.
A. Description of the Use Case
Our case study concerns a wireless CPS based on the
IEEE 802.16 (WiMAX) communication protocol [13], which
covers a wide range of high and low data rate networks. Fig.
4 outlines the topology of the underlying CPS. Each distributed
network has an access point base station where the terminals
connect. The base stations are connected to each other through
a central station using wires. We consider that the control
of the transmission power is implemented on the distributed
networks. The access point stations collect the local network
data (e.g. noise, available bandwidth, etc.) and act individually.
The central station controls the whole system by applying service
policies. Our objective is to model (MiL) and simulate (SiL)
the transmission signal power between access points and
terminals under different operation situations. More precisely,
our case study handles the signal power transmissions under
different channel noise distributions. Such an approach makes it
possible to replace any real WiMAX link, or to create potential
links in the context of testing the scaling capabilities of the
whole CPS, through the HiL simulation. It is then
feasible to provide trade-offs between modeling accuracy and
simulation complexity.
B. System Modeling and Simulation
The modeling process starts with the exploration of the
system variability. The modeled components of our targeted
CPS are the transmitter, the receiver and the air channel.
Fig. 5 outlines the 16 function block instantiations of these
components according to the WiMAX protocol specification
[13]. More precisely, we focus on the error correction coding,
the signal modulation (IQ mapper), the OFDM modulation
Fig. 4. The employed WiMAX usecase.
[Fig. 5 depicts the simulated WiMAX link: a simulated process on a PC (Simulink) exchanges analog control and measurement signals with the antenna through D/A and A/D converters and an I/O device, with simulated process disturbance and simulated measurement noise injected into the loop.]
Fig. 5. Modeling and analysis of a WiMAX link.
and the antenna functionality, since they present the main
functional variation during a transmission. The model rep-
resents the correlated connections among these blocks. In
this direction, the Shannon-Hartley theorem provides a
fundamental theoretical basis. The theorem bounds the signal-
to-noise power ratio (SNR) with respect to the transmission data
rate, bandwidth and channel capacity [14].
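As a reminder of the bound in question, the Shannon-Hartley capacity C = B · log2(1 + SNR), together with its inverse (the minimum SNR for a target rate), can be evaluated directly; the numbers below are illustrative:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: maximum error-free data rate over a noisy channel.
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def snr_min(rate_bps, bandwidth_hz):
    # Inverse form: minimum (linear) SNR needed for error-free
    # transmission at the given data rate and bandwidth.
    return 2.0 ** (rate_bps / bandwidth_hz) - 1.0

# Illustrative: a 10 MHz channel at linear SNR 15 caps out at 40 Mbit/s.
print(shannon_capacity(10e6, 15.0))  # 40000000.0
```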
The theoretical minimum SNR (SNRmin) for an error-free
transmission is impossible to reach in practice. The modulation
schemes define how close to this theoretical SNRmin the
transmission can reach. Every modulation scheme is charac-
terized by an SNRmin that allows the demodulation of the
transmitted symbols without errors. Knowing this SNRmin
per modulation scheme (MS), we can define the minimum
signal power per MS for specific levels of noise. Equation 1
Fig. 6. Symbol error probability for different MCs at WiMAX.
Fig. 7. RTS Characterization.
provides the symbol error probability (Ps) for each MS [14].
P_{s,M} = \frac{M-1}{M}\,\mathrm{erfc}\!\left(\left(\frac{3}{M^{2}-1}\cdot\frac{E_{s,\mathrm{avg}}}{N_{0}}\right)^{1/2}\right) \qquad (1)
Channel coding improves the SNRmin by a factor R [14].
So the derived curves (Equation 1) can be normalized for equal
energy per information bit (pre-coding) bearing in mind that
the energy per transmitted bit is less than the energy per infor-
mation bit by a factor equal to the code rate R. The graphical
expression of the symbol error probability for the modulation
and coding schemes (MCs) of the WiMAX is presented in Fig.
6. Every transmission situation is characterized by two cost
dimensions: the total signal power and the bit error rate (BER).
The signal power is inversely proportional to the symbol error
probability. Finally, each situation is characterized by a curve
in the two-dimensional space of total signal power and BER.
For demonstration purposes, Fig. 7 presents 3 representative
examples of these curves. Note that the complete analysis for
Fig. 7 imposes 594 distinct curves.
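Equation 1 and the code-rate normalization described above can be evaluated numerically as follows; the Es/N0 and code-rate values are illustrative:

```python
import math

def symbol_error_prob(m, es_over_n0):
    # Eq. (1): symbol error probability of an M-ary modulation scheme,
    # with es_over_n0 the average symbol-energy-to-noise ratio (linear).
    return ((m - 1) / m) * math.erfc(
        math.sqrt(3.0 / (m ** 2 - 1) * es_over_n0))

def precoding_es_over_n0(eb_over_n0, bits_per_symbol, code_rate):
    # Normalization for equal energy per information bit: coding of rate R
    # makes the energy per transmitted bit R times the energy per
    # information bit.
    return eb_over_n0 * bits_per_symbol * code_rate

# A denser constellation yields a higher symbol error probability at
# the same Es/N0 (closer spacing between symbols).
assert symbol_error_prob(64, 100.0) > symbol_error_prob(4, 100.0)
```

Sweeping Es/N0 for each modulation and coding scheme reproduces curves of the kind shown in Fig. 6.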
For the needs of our analysis, we consider four different
White Gaussian Noise (WGN) distributions. The average
values (μ) and standard deviations (σ) of these distributions
are summarized as follows:
• Channel 1: μ=120mWatt, σ=25mWatt
• Channel 2: μ=200mWatt, σ=20mWatt
• Channel 3: μ=300mWatt, σ=40mWatt
• Channel 4: μ=400mWatt, σ=30mWatt
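The Gaussian band probabilities used in the analysis follow directly from the normal CDF; a minimal sketch, where the 34.13% figure is the one-sided 1-sigma band:

```python
import math

def one_sided_band_probability(k):
    # Probability that a Gaussian sample falls in (mu, mu + k*sigma):
    # Phi(k) - 0.5 = 0.5 * erf(k / sqrt(2)).
    return 0.5 * math.erf(k / math.sqrt(2.0))

# One-sided bands for k = 1..4 sigma; k = 1 gives 34.13%.
for k in range(1, 5):
    print(k, round(100.0 * one_sided_band_probability(k), 2))
```

By symmetry the same value holds for (mu - k*sigma, mu), which is how the bounding noise levels per channel are obtained.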
WGN is a suitable stimulus for testing because it represents
equal energy at all frequencies in a spectrum area and mimics
the effect of random processes that occur in nature. The noise
distribution defines the appearance probability of the running
situations, while a different noise distribution creates a different
probability density for these situations. Regarding our study,
the probability density of the operation situations, with a noise
range between (μ, μ+σ) and (μ, μ-σ), is 34.13% for a Gaussian
distribution. Correspondingly, the situation probabilities for (μ,
μ± 2σ), (μ, μ± 3σ) and (μ, μ± 4σ) are estimated. Thus, we
have in total 9 bounding levels of noise, which in combination
with the 66 potential block instantiations (depicted in Fig.
5) create an exploration space of 594 (66×9) simulation
situations per channel distribution, similar to those depicted
in Fig. 7.
Simulating each individual case for each channel can create
huge complexity issues. For example, during a transmission any
channel variation leads to a different modulation and coding
scheme. A full-custom approach requires a detailed simulation of
each functionality variation. Thus, for each fluctuation an extra
modeling instantiation is implemented. On the other hand, a
static approach that implements only the worst case leads to
remarkable modeling and simulation overestimation.
Our methodology overcomes these limitations by clustering
the 594 RTSs (similar to Fig. 7) of each channel into scenar-
ios. In this direction, we balance the number of operation
scenarios taking into consideration the modeling accuracy
and the simulation complexity. The noise distribution defines
the appearance probability of the running situations of the
scenarios. Thus, we have a different optimal set of m scenarios
for every channel distribution. We consider a minimum BER,
for a guaranteed Quality of Service (QoS), equal to 10^-8.
Also, similar to relevant approaches [15], we assume that the
run-time reconfiguration cost of the transmission corresponds
to 10% of the maximum signal power consumption. Without
affecting the generality of this use case, our analysis concen-
trates on the four aforementioned channels, providing optimal
modeling scenario sets.
Table I outlines the results of the signal power modeling for
the four channels, in comparison with an accurate full-custom
and a static model. The first column presents the number of
modeled instantiations. The full-custom approach takes
into consideration all 66 combinations of the function
blocks (depicted in Fig. 5) for each channel. Respectively,
the scenario approach has a varying number of considered
instantiations (ranging between 10 and 22) depending on the
channel's characteristics (noise distribution) and the desired
accuracy. The Error-Correction, M-QAM, OFDM and BW columns
present the required implemented blocks for each system
implementation. Finally, the last two columns present the total
number of function blocks and the respective achieved modeling
accuracy.
TABLE I
OUTCOME FROM THE PROPOSED MODELING AND SIMULATION ANALYSIS

Channel 1:
  Full-Custom Approach: 66 instantiations, 16 blocks, 100% accuracy
  Scenario Approach: 10 instantiations, 8 blocks, 71% accuracy
  Static Approach: 1 instantiation, 4 blocks, 7% accuracy
Channel 2:
  Full-Custom Approach: 66 instantiations, 16 blocks, 100% accuracy
  Scenario Approach: 14 instantiations, 10 blocks, 74% accuracy
  Static Approach: 1 instantiation, 4 blocks, 6% accuracy
Channel 3:
  Full-Custom Approach: 66 instantiations, 16 blocks, 100% accuracy
  Scenario Approach: 18 instantiations, 11 blocks, 84% accuracy
  Static Approach: 1 instantiation, 4 blocks, 4% accuracy
Channel 4:
  Full-Custom Approach: 66 instantiations, 16 blocks, 100% accuracy
  Scenario Approach: 22 instantiations, 13 blocks, 85% accuracy
  Static Approach: 1 instantiation, 4 blocks, 4% accuracy

(Block counts are modeling blocks (UL/DL). The function block columns cover Error Correction rates 1/2, 2/3, 3/4 and 5/6; M-QAM orders 2, 4, 16 and 64; OFDM modes 128, 512, 1024 and 2048; and bandwidths (BW) of 1.25, 3.5, 5 and 10 MHz.)
A number of conclusions might be derived based on this
analysis. Among others, the full-custom approach in Channel
1 provides maximum accuracy by instantiating 16 function
blocks, while the scenario approach for the same channel
provides a slightly reduced accuracy (71%) but with half
the number of blocks. Correspondingly, for the remaining three
channels we have corresponding trade-offs between the system's
accuracy and the number of modeled blocks. Hence, the system
designer can select the appropriate number of scenarios depending
on the desired modeling requirements. In summary, an extended
number of scenarios raises the probability of additional imple-
mented blocks, but at the same time increases the modeling
representativeness. Finally, we have to note that, although the
modeling and simulation complexity differs among function
blocks (i.e., error correction, M-QAM, OFDM mode, BW), the
alternative instances per function block exhibit comparable
complexity.
V. CONCLUSIONS
A software-supported framework for enabling the efficient mod-
eling and simulation of CPS was introduced. In contrast
to relevant approaches, the introduced framework exhibits
considerably reduced complexity with a controllable penalty
in the system's accuracy. Additionally, the infrastructure for sup-
porting this feature, called a scenario, is applicable to a wide
range of complex systems. For demonstration purposes, we
proved that it can balance the number of blocks
that need to be modeled and simulated in a CPS (a WiMAX
network) against the accuracy of the derived solution, as compared
to static and full-custom approaches.
ACKNOWLEDGMENT
The work presented in this paper is partially supported by
the FP7-2013612069-2013-HARPA EU project.
REFERENCES
[1] E. A. Lee, "CPS foundations," in Proceedings of the 47th Design Automation Conference (DAC '10). New York, NY, USA: ACM, 2010, pp. 737–742.
[2] L. Gheorghe, F. Bouchhima, G. Nicolescu, and H. Boucheneb, "Formal definitions of simulation interfaces in a continuous/discrete co-simulation tool," in 17th IEEE International Workshop on Rapid System Prototyping (RSP 2006), 14-16 June 2006, Chania, Crete, Greece, 2006, pp. 186–192.
[3] M. Verhoef, P. Visser, J. Hooman, and J. F. Broenink, "Co-simulation of distributed embedded real-time control systems," in Integrated Formal Methods, 6th International Conference, IFM 2007, Oxford, UK, July 2-5, 2007, Proceedings, 2007, pp. 639–658.
[4] W. Mueller, M. Becker, A. Elfeky, and A. DiPasquale, "Virtual prototyping of cyber-physical systems," in Design Automation Conference (ASP-DAC), 2012 17th Asia and South Pacific, Jan 2012, pp. 219–226.
[5] A. Cervin, D. Henriksson, B. Lincoln, J. Eker, and K.-E. Årzén, "How does control timing affect performance?" vol. 23, no. 3, pp. 16–30, 2003.
[6] P. L. Marrec, C. Valderrama, F. Hessel, A. Jerraya, M. Attia, and O. Cayrol, "Hardware, software and mechanical cosimulation for automotive applications," Rapid System Prototyping, IEEE International Workshop on, vol. 0, p. 202, 1998.
[7] D. Diamantopoulos, E. Sotiriou-Xanthopoulos, K. Siozios, G. Economakos, and D. Soudris, "Plug&Chip: A framework for supporting rapid prototyping of 3D hybrid virtual SoCs," ACM Trans. Embed. Comput. Syst., vol. 13, no. 5s, pp. 168:1–168:25, Dec. 2014.
[8] Simulink, 2015. [Online]. Available: http://www.mathworks.com/products/simulink
[9] Open Virtual Platforms (OVP), 2015. [Online]. Available: http://www.ovpworld.org
[10] P. Yang and F. Catthoor, "Pareto-optimization-based run-time task scheduling for embedded systems," in Proceedings of the 1st IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS '03). New York, NY, USA: ACM, 2003, pp. 120–125.
[11] S. V. Gheorghita, M. Palkovic, J. Hamers, A. Vandecappelle, S. Mamagkakis, T. Basten, L. Eeckhout, H. Corporaal, F. Catthoor, F. Vandeputte, and K. D. Bosschere, "System-scenario-based design of dynamic embedded systems," ACM Trans. Des. Autom. Electron. Syst., vol. 14, no. 1, pp. 3:1–3:45, Jan. 2009.
[12] N. Zompakis, A. Papanikolaou, P. Raghavan, D. Soudris, and F. Catthoor, "Enabling efficient system configurations for dynamic wireless applications using system scenarios," IJWIN, vol. 20, no. 2, pp. 140–156, 2013.
[13] R. Fantacci, D. Marabissi, D. Tarchi, and I. Habib, "Adaptive modulation and coding techniques for OFDMA systems," Trans. Wireless. Comm., vol. 8, no. 9, pp. 4876–4883, Sep. 2009.
[14] A. Bateman, Digital Communications: Design for the Real World, 2nd ed., 1998, p. 248.
[15] C. Sun, J. Cheng, and T. Ohira, Handbook on Advancements in Smart Antenna Technologies for Wireless Networks. Hershey, PA: Information Science Reference - Imprint of: IGI Publishing, 2008.