
RADE FRAMEWORK – AUTOMATIC DEPLOYMENT AND DATA ACCESS

Rebekka Mork Knudsen

HEAU12

18. May 2016

Document control

Report title: RADE framework – Automatic deployment and data access
Date/Version: 18.05.16/v2.0
Report number:
Author: Rebekka Mork Knudsen
Field of study: HEAU12
Number of pages: 102
University counsellor: Johan Alme
Grading:
Remarks:
Employer: CERN
Employer's reference:
Employer contact info: Odd Øyvind Andreassen ([email protected])

Revision   Date      Status          Written by
v.0.10     20.11.15  Pre-studies     Rebekka Mork Knudsen
v.0.11     10.02.16  Updated report  Rebekka Mork Knudsen
v.1.0      25.04.16  First draft     Rebekka Mork Knudsen
v.2.0      16.05.16  Final version   Rebekka Mork Knudsen


Preface

This bachelor thesis is a conclusion of my 14-month stay at CERN and my time as a student at the

institute of electrical engineering at HiB. The report is the written result of the subject ELE 105 “Main

project in automation technology”, a 20-credit subject normally carried out during the spring semester.

The project was completed at CERN in Switzerland over the course of 14 months starting March 2015.

I am very thankful for the opportunity I got to work in the international environment at CERN and live

in the beautiful French countryside for the past year. I have met great people and gained experience in programming, teamwork and languages. I would like to thank the whole MTA team for the

opportunity to work with them, the guidance and the friendships. A special thanks to my supervisor at

CERN, Odd Øyvind Andreassen, for being a great leader and human encyclopedia and to Jakub Rachucki

whom I have worked closely with on this project. I would also like to thank my team leader Adriaan

Rijllart for great leadership, my contact at HiB, Johan Alme for all the advice, my officemates at CERN

and my flatmates in Ferney-Voltaire. Last but not least, a huge thanks to Kristian Rauboti, who has helped me through all ups and downs and whom I could not have done without.


Abstract

In the accelerator domain at CERN there is a need to integrate industrial devices and create control and monitoring applications in an easy and structured way. The Rapid Application Development Environment (RADE) framework provides the tools and integration into the CERN controls infrastructure. The framework is constantly updated with new versions of the modules and libraries, making the production of stable releases very important for critical applications. This thesis looks at two approaches for streamlining and automating manual operations at CERN related to the RADE framework.

RADAR 2.0 is a LabVIEW application based on a drag-and-drop concept for connecting to the controls

infrastructure at CERN with RADE. The interface lets the user configure the connections while the

LabVIEW code is generated automatically in the background with VI scripting.

An automatic deployment system for RADE is needed in order to deliver stable and well tested versions

of the framework continuously to the users. The second part of this thesis looks at the possibilities for

improving the build time and reproducibility of the system by implementing a system for dependency

management and storage of the libraries.


Contents

Document control
Preface
Abstract
1 Introduction
  1.1 Setup of the report
  1.2 CERN
    1.2.1 Accelerators
    1.2.2 LHC
    1.2.3 Experiments
    1.2.4 CERN control centre
  1.3 CERN structure
  1.4 MTA team
  1.5 LabVIEW
  1.6 LabVIEW at CERN
  1.7 RADE
    1.7.1 RADE Libraries
2 RADAR 2.0
  2.1 Background
    2.1.1 VI-Server
    2.1.2 VI-scripting
    2.1.3 Design patterns in LabVIEW
    2.1.4 Object oriented programming in LabVIEW
    2.1.5 Actor framework
    2.1.6 CCDB
    2.1.7 OASIS
    2.1.8 RDA
  2.2 Requirements
  2.3 Analysis
    2.3.1 Design decisions
  2.4 Architecture
    2.4.1 Main Program
    2.4.2 Configuration Program
  2.5 Implementation
    2.5.1 Launcher
    2.5.2 TopLevelGUI
    2.5.3 Login panel
    2.5.4 Tree panel
    2.5.5 Config Panel
    2.5.6 Monitoring the Main program
    2.5.7 Building the Front Panel
    2.5.8 Building the Block diagram
    2.5.9 Main Program
  2.6 Testing
    2.6.1 Test devices
    2.6.2 Unit testing
    2.6.3 Integration testing
    2.6.4 Performance tests
    2.6.5 Configuration tests
    2.6.6 Stress/load tests
    2.6.7 Functionality tests
    2.6.8 Internal testing
  2.7 Discussion
3 Automatic Deployment System
  3.1 Background
    3.1.1 RADE release process
    3.1.2 Continuous integration
    3.1.3 Automatic software deployment
    3.1.4 Compilation
    3.1.5 Dependencies
    3.1.6 Agile development
    3.1.7 Test Driven Development
    3.1.8 Testing
    3.1.9 Version Control Systems
  3.2 Requirements
  3.3 Analysis
    3.3.1 Evaluation of tools
  3.4 Architecture
    3.4.1 Jenkins
    3.4.2 Maven
    3.4.3 Nexus
    3.4.4 Unit Tester
  3.5 Implementation
    3.5.1 Virtual machines
    3.5.2 Maven
    3.5.3 Nexus
    3.5.4 Jenkins
    3.5.5 Jenkins Jobs
    3.5.6 LabVIEW and Jenkins
  3.6 Discussion
4 Conclusion


1 Introduction

This chapter is an introduction to CERN, the group of which the project is a part, and the context of the project.

The project concerns software automation at CERN involving the RADE (Rapid Application Development Environment) framework1 in LabVIEW. The framework is used by hundreds of engineers and physicists at CERN for communicating between devices, across networks and platforms.

This project investigates two different ways of automating manual tasks related to the use and

distribution of RADE. The first part of the project concerns an interface to the CERN middleware with

automatic code generation. The second part investigates automating the deployment of the RADE

framework.

1.1 Setup of the report

The report starts with an introduction to CERN and the MTA team responsible for this project.

Afterwards there is a chapter focusing on RADE, which is relevant for both parts of the project. As the

project is split in two parts focusing on different aspects of software automation, the two parts are

introduced separately in the report with their own requirements, analysis and implementation. The

results of both projects are also discussed separately, leading to a common conclusion. All details, user documentation and code are enclosed in the appendix.

1 An abstraction in which software providing generic functionality can be selectively changed by user code, thus providing application-specific software.


1.2 CERN

The European Organisation for Nuclear Research (CERN) was established in 1954 and is the world's

largest research institute for particle physics. Today CERN consists of 21 member states and is located

on the border between France and Switzerland, close to Geneva. Its mission has four strands: research,

technology, collaboration and education. Scientists from all over the world come together to study the

basic constituents of matter and the forces acting between them. There can be up to 13 000 people at

the CERN site at any time, including over 2,250 staff members, users, students, fellows, sub-contractors

and visiting scientists from all over the world.

CERN hosts in total eight accelerators, among them the world’s largest machine, the Large Hadron

Collider (LHC). Physicists and visiting scientists from all over the world study the results of the

experiments and try to answer questions such as: what is dark matter? And why does there seem to be more matter than antimatter in the universe? In the particle collisions in the LHC they are able to investigate this by recreating conditions similar to those present right after the Big Bang.

Engineers build and test the machines and systems that the physicists rely on. This includes the

radiofrequency cavities that boost the particles, the magnets that focus the particle beams and guide

them around the accelerator and the cryogenic system that cools the LHC to temperatures close to

absolute zero (so that the wires can work in a superconducting state). Thousands of electronic

components are built and tested for the operation of the accelerators and experiments [8].

Figure 1: CERN logo


1.2.1 Accelerators

The accelerator complex at CERN is a succession of machines that accelerate particles to increasingly

higher energies. Each machine boosts the energy of a beam of particles before injecting the beam into

the next machine in the sequence. Accelerators boost beams of particles to high energies before they

are made to collide with each other or with stationary targets.

Figure 2: CERN Accelerator Complex

The source is a simple bottle of hydrogen gas. An electric field strips the electrons off the hydrogen

atoms to isolate the protons. LINAC 2, the first accelerator in the chain, accelerates the protons to 50

MeV2. The beam is then injected into the Booster, followed by the Proton Synchrotron (PS) and later

the Super Proton Synchrotron (SPS). The protons are finally transferred to the two beam pipes of the

LHC circulating in opposite directions. When the beams have gained enough energy, they are brought

into collision inside four experiments along the LHC [4].

2 Mega electron Volt


1.2.2 LHC

The LHC is the world’s biggest machine. It is located between 50 and 175 metres underground at CERN

and is 27 km in circumference. It consists of two beam pipes, kept under vacuum and surrounded by magnets. The

magnets are cooled down to -271.3°C (1.9 K) in order to operate in a superconducting state. The

magnets are used to centre and bend heavy ions and protons while they are being accelerated in the

beam pipes. The particles reach speeds up to 99.9999999% of the speed of light and energies up to 7

TeV3 each. The beam pipes are brought together at four different locations around the LHC where

experiments are located.

1.2.3 Experiments

The four detectors located along the LHC observe and record the results of these collisions. The four

particle detectors are ATLAS, ALICE, CMS (Figure 3) and LHCb. The different experiments in the

detectors study the interactions between the particles at energies up to 14 TeV. The Higgs boson, known as the particle that gives all other particles mass and the missing link in the Standard Model of particle physics, was observed at CERN in 2012.

3 Tera electron Volt

Figure 3: CMS Detector


1.2.4 CERN control centre

The performance of the LHC depends critically on the rest of the CERN accelerator complex, as well as

its technical and cryogenic services. The CERN Control centre (CCC) combines the control rooms of the

eight accelerators at CERN as well as the operation of cryogenics and technical infrastructures. It is

split into four “islands” in the shape of a quadrupole magnet (Figure 4). The islands represent LHC, SPS,

PS and Technical Infrastructure (TI). Operators working in the CCC monitor the cryogenics, vacuum,

quench4 protection, access systems, interlocks and powering systems for the different experiments. It

came into operation at the beginning of 2006 for the operation of all the beams and accelerators at

CERN.

1.3 CERN structure

CERN is run by 21 member states, each of which has two delegates to the CERN council. The council is

the highest authority of the organization and has responsibility for all important decisions. It controls

CERN’s activities in scientific, technical and administrative matters, approves programmes of activity,

adopts budgets and reviews expenditure. The council appoints the Director-General who, assisted by

a directorate, runs the laboratory through a structure of departments [8]. As of 2016, Fabiola Gianotti

is the Director-General at CERN. The departments include Engineering, Beams, Technology, Theoretical

Physics, Experimental Physics, Information Technology and several others. Each department consists of

multiple groups, and the different groups can contain multiple sections. In some cases the sections

contain specialized teams.

4 A quench refers to the loss of superconductivity in the magnet.

Figure 4: CERN Control Centre


1.4 MTA team

The MTA (Measurement, Test and Analysis) team is a part of the ECE (Equipment, Controls and

Electronics) section, STI (Sources, Targets and Interactions) group and the EN (Engineering)

department at CERN.

The STI group has the mandate to study the interactions of beam with matter. The group participates

in the development and test of radiation tolerant electronics and in the development and use of

robotic solutions for remote inspections and interventions in hazardous areas [14].

The ECE section is a part of the STI group responsible for the design, installation and maintenance of

control and measurement systems for the beam intercepting devices in the accelerator and

experimental facilities. They design control systems ensuring positioning of collimators, targets and

massive dumps with accuracies down to the micrometre in hazardous environments. The section’s

responsibilities also include the development and support of FESA (Front End Software Architecture)

classes, expert GUIs and applications for the operation and development of radiation-tolerant motors

and sensors. They also design and develop custom electronics and robotic solutions to satisfy the tight

requirements in terms of precision at CERN. The systems developed by the ECE section are normally

designed to have a very low level of Electromagnetic Emission and high immunity to Electromagnetic

Interference.

The MTA team is a part of the ECE section and their task is to identify commercial solutions that cover

the needs of CERN. The team provides solutions for measurement, test and data analysis applications

according to an industrial model, adapted to CERN requirements. This model aims at leveraging

industrial technology and ensures the coherence with installed systems, which reduces development

effort and simplifies maintenance. In addition, the team continuously develops turn-key solutions and frameworks that simplify the integration of industrial equipment and complete control systems at

CERN. Based on the selected commercial technologies and the in-house solutions, the team designs,

implements, maintains and supports control systems for their clients [15]. The team also provides

LabVIEW support for CERN’s over 600 LabVIEW users and has developed the RADE (Rapid Application

Development Environment) framework for integrating LabVIEW with the accelerator infrastructure.


1.5 LabVIEW

LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a graphical programming language

from National Instruments. It is commonly used for data acquisition, instrument control, and industrial

automation on a variety of platforms. In LabVIEW the execution is determined by the structure of a

graphical block diagram on which the programmer connects different function-nodes by drawing

wires. It follows a dataflow model where the wires propagate variables and any node can execute as

soon as all its input data become available. LabVIEW programs are called virtual instruments (VIs).

1.6 LabVIEW at CERN

In the accelerator domain at CERN there is a need to integrate industrial devices and create control

and monitoring applications in an easy and structured way. LabVIEW is a flexible programming

environment for developing these kinds of applications, including FPGA (Field-Programmable Gate

Array) and Real-Time tasks. The missing link for effective use of LabVIEW in the accelerators was its

integration into the CERN IT and controls infrastructure. The obstacles include different protocols,

libraries, file types and hardware layers. In addition come the different platforms (Windows, Linux

and Mac) and networks at CERN.

The solution for integrating LabVIEW with the infrastructure at CERN is a framework called RADE,

developed by the MTA section. With the RADE framework developers and engineers don’t have to

cope with all the different interfaces and hardware and can focus on the job at hand, creating turnkey

control and monitoring applications in a quick, stable and flexible way. RADE can, with its high flexibility

and wide range of libraries, be used in almost any control or monitoring application [1]. The LabVIEW

applications at CERN range from alignment systems for the LHC collimators and a beam spectrum analyser for the PS to Post Mortem Analysis5 for the Hardware Commissioning of the LHC.

5 After a failure during the operation of the LHC, leading to beam dump or power abort, a Post Mortem analyzer application will kick in to determine what went wrong and if it is safe to resume the operation.


1.7 RADE

The RADE framework aims to give users a total package for development, maintenance and support,

making it quick to implement flexible, stable and maintainable solutions through well-defined

development templates, guidelines and documentation [1]. It is available at CERN for operating

systems Windows, Linux and Mac and LabVIEW versions ranging from 2010 to 2015. It has a separate

palette menu in LabVIEW for users to browse the library6 functions (Figure 5).

1.7.1 RADE Libraries

RADE consists of 19 libraries, constantly being updated with new improvements and functions.

The following is a closer introduction to some of the libraries frequently used in this project.

RIO

RADE Input and Output (RIO) provides access to live data from the front-ends. It provides set, get and

subscription operations for any RDA7 supported device. The set operation writes a new value to the

device and the get operation reads the last value. The subscribe operation monitors the values on the device and is updated on change. The RIO library also provides functionality to publish data to the

Accelerator Control Database [2]. RIO includes different communication protocols for connecting to

accelerator devices.

6 A library is a collection of resources used by computer programs.
7 Remote Device Access (RDA) is a package providing access to the accelerator devices from application programs.

Figure 5: RADE palette for LabVIEW


CMW

The Controls Middleware (CMW) API8 incorporates set, get and subscribe actions to FESA9 and other

CMW devices from LabVIEW. CMW is written in C++ and runs locally with the client. The CMW Wrapper

is a software layer between LabVIEW and the CMW RDA (Remote Device Access) client package [3]. It

is platform independent but not network independent as all the front ends connected are in the

technical network. Figure 6 shows an example of a get operation using the CMW protocol from the

RIO library. URL, cycle and datatype must be initialized before the “get function”. The connection is

closed and the new data is converted to its original format.

JAPC

JAPC (Java API for Parameter Control) is a communication layer to control accelerator devices from

Java. Client programs in LabVIEW can access JAPC parameters with set and get actions or by subscribing

to the data. JAPC is a unified Java API for almost all parameters present in the control system. In the

RADE framework a Tomcat server acts as a mediator for the interface between LabVIEW and JAPC. The

general rule is to use JAPC if your machine is not in the technical network.

RADAR

The RADAR toolkit makes use of the SQL free query database tool and the CMW Wrapper, and creates

a connection to the various front ends driving the CERN accelerators [2]. The toolkit is used for OASIS

signal processing. The renovation of OASIS has since changed the way clients connect, and the connection database for RADAR has become obsolete.

8 An API (Application Programming Interface) is a set of routines, protocols, and tools for building applications.
9 Front End Software Architecture (FESA) is a framework for developing software for the LHC FECs.

Figure 6: CMW Get operation


RBAC

Most of the devices/front ends for the accelerator control have authentication and authorization

restrictions. They are protected in order to avoid unintended or accidental actions on the devices and

the access is controlled by Role Based Access Control (RBAC). It provides authentication and

authorization for all RDA driven front ends. It is designed to protect from accidental and unauthorized

access to the LHC and injector controls infrastructure and provides a level of security for accessing

sensitive equipment. The access rights for the front ends are assigned to roles and locations, not users.

It can be used together with RIO as in Figure 7.

MTA-lib

The MTA-lib consists of a set of easy-to-use tools, which provide functions frequently used at CERN. These include extensions of standard LabVIEW functions for the Array, Boolean, Comparison and other

palettes.

SQL

The SQL library allows SQL (Structured Query Language) requests to CERN databases and has some special tools to

access data from logging and measurement databases [2].

RADE Logger

The RADE logger provides error logging to a central server. It has a web interface in Kibana10 for viewing statistics of the errors.

10 Kibana is an open source data visualization platform.

Figure 7: RBAC authentication with CMW Set operation


2 RADAR 2.0

Operators working in the CCC are in constant need of monitoring critical values for the accelerators

and need easy access to this information. Thousands of different devices in the accelerator domain, with thousands of different device layouts, require a large number of custom applications, created manually by developers and system experts. The RADAR library is a part of the RADE framework created for connecting to these devices with LabVIEW. However, it has become outdated and obsolete, and users are required to have in-depth knowledge of both OASIS and FESA classes in order to use it. In

addition, the library can be difficult to use and all applications must be created manually using

LabVIEW.

The starting point of RADAR 2.0 is to build on the concept of RADAR but to shield the user from this

requirement and create a common interface to all devices. The objective is to create a user friendly

interface in LabVIEW to connect, visualize and manipulate data from the accelerators in an easy way.

Our goal with RADAR 2.0 is to help users set up their own LabVIEW environment to easily connect to

the CERN controls infrastructure with RADE.


2.1 Background

The goal of the RADAR 2.0 application is to set up any connections automatically for the user. The user

should only need to configure the connection type and device on a user interface without actually

placing functions on the block diagram and drawing wires. VI-scripting is an add-on to LabVIEW that

makes it possible to create the necessary code automatically in the background. If all the coding happens

in the background, no LabVIEW experience is required from the user.

2.1.1 VI-Server

The VI server is a set of functions for dynamically controlling front panel objects, VIs, and the LabVIEW

environment either on the same machine or across networks. All VIs have properties that can be read

or written to and methods that can be invoked using the VI server functions [5]. An example of a use

case for VI server functions is shown in Figure 8. This code example runs the VI whose path is given by

“vi path”, waits until the execution has finished and closes the reference.

2.1.2 VI-scripting

VI scripting is a free add-on to LabVIEW from NI-labs. It lets the developer dynamically create, modify

and run LabVIEW code. It contains several extra VI server classes, properties and methods for creating,

moving and wiring objects. VI scripting nodes can only be used to modify VIs that are not running. Any

attempt at modifying objects on a VI in run-mode will throw an error. The VI scripting add-on is not a

part of LabVIEW’s runtime engine which means that if a VI is compiled to an executable, none of the

VI scripting nodes will work. The VI scripting functions are distinguished from the rest of the VI server

functions by their light blue colour. An example of the use of VI scripting functions is shown in Figure

9. In this example a new VI is created based on a template. All broken wires on the block diagram of

the new VI are removed and the new VI is saved [16].

Figure 8: Example VI server functions

Figure 9: Example VI scripting functions
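Since the scripting code itself is graphical, the sequence from Figure 9 can also be summarised as pseudocode. The helper names below are hypothetical stand-ins for the scripting nodes, not a real API, and only illustrate the order of operations:

    # Pseudocode outline of the scripting sequence in Figure 9; the helper
    # functions are hypothetical stand-ins for LabVIEW scripting nodes.
    def build_vi_from_template(template_path, target_path):
        vi = new_vi_from_template(template_path)  # new VI object based on a template
        remove_broken_wires(vi)                   # clean up the block diagram
        save_vi(vi, target_path)                  # save the new VI to disk
        close_reference(vi)                       # release the VI reference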


2.1.3 Design patterns in LabVIEW

Design patterns in LabVIEW are patterns that are easily recognizable and based on existing templates

or frameworks. Using existing design patterns can simplify the development process by providing pre-

existing solutions to common problems.

The state machine in LabVIEW is an example of a frequently used design pattern. In the state machine,

distinct states can operate in a sequence. The order of the states can be pre-determined or based on

messages from queues or user events. When the states are determined by queues, the design pattern

is known as the queue-driven-state-machine (QDSM). It is one of two design patterns used in the

majority of LabVIEW applications. Producer/consumer is the other frequently used design pattern in

addition to the QDSM (Figure 10). It consists of a master loop and one or multiple slave loops. The

loops can run asynchronously and the data independence breaks data flow and permits

multithreading. The master loop tells the slave loops what to do. It is a good design pattern for

applications that need to handle multiple processes simultaneously.
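Although the LabVIEW implementation is graphical, the structure of the pattern can be illustrated in a text language. The following Python sketch is an analogy only, not code from the application; it shows a master loop enqueuing commands and a slave loop dequeuing and handling them:

    # Minimal Python analogy of a producer/consumer (master/slave) structure:
    # the master loop enqueues commands and the slave loop dequeues and acts
    # on them, so the two loops run asynchronously.
    import queue
    import threading

    commands = queue.Queue()

    def slave_loop():
        while True:
            cmd = commands.get()           # blocks until the master sends a command
            if cmd == "exit":              # a queued "state" telling the loop to stop
                break
            print("handling command:", cmd)

    worker = threading.Thread(target=slave_loop)
    worker.start()

    for cmd in ("acquire", "log", "exit"): # the master loop driving the slave
        commands.put(cmd)
    worker.join()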

An event based design pattern is used when the application needs to handle events coming from the

user interface. Capturing the events in an event structure inside a while loop limits the CPU usage while

waiting for events from the user interface. The user events are enqueued so that race conditions are

avoided and it is ensured that all events are captured. Slave processes can also be driven based on the

event structure.

Figure 10: Example Producer/Consumer design pattern


2.1.4 Object oriented programming in LabVIEW

Object oriented programming (OOP) is a concept in most programming languages where the important

keywords are encapsulation and inheritance. Grouping modules and functions into classes and adding

inheritance gives the developer new ways to use LabVIEW. Object-oriented programming can help

organize code, improve maintainability and scalability, and is generally easier to debug because of its

modularity. Expanding a modular application in the future only requires adding the new class to the

project and modifying the subVI that chooses which class to call.

Encapsulation: A block of data, like a cluster, can be encapsulated and configured so that access to the

data is only allowed by certain functions. In that way, a class in LabVIEW can be seen as a cluster that

cannot be unbundled by all VIs.

Inheritance: A new class can be configured to inherit from an existing class and override methods from

the parent class. All methods and properties are inherited from the parent, but the child class can also

have its own unique methods. All classes in LabVIEW inherit from an ultimate ancestor called the

LabVIEW Object class. This allows for storing multiple classes in the same wire and operating on all

classes in a common manner. Objects can later be retrieved using the “To More Specific Class” function

[6].

Dynamic Dispatch

A dynamic dispatch VI is a node that looks like a normal subVI call but actually calls one of several

subVIs. Any child class can create its own implementation of the dynamic dispatch VI and override the

parent implementation. Which VI gets executed is decided at run time, depending on the object on

the class wire of the dynamic dispatch input terminal. This prevents unneeded subVIs from being

loaded into memory [6]. In Figure 11, CMW class and JAPC class inherit from Communication-Layer

class. They all have their own implementation of cl-connect.vi. In this example, the for loop will run three times, once for each object on the wire, and each implementation of the dynamic dispatch VI will run once.

Figure 11: Example Dynamic Dispatch
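The same mechanism exists in text-based object-oriented languages. The following Python sketch is only an analogy of the example in Figure 11, not code from the application: which connect implementation runs is decided at run time by the class of each object.

    # Python analogy of Figure 11: which connect() implementation executes is
    # decided at run time by the class of each object in the list (the "wire").
    class CommunicationLayer:
        def connect(self):
            print("generic connect")

    class CMW(CommunicationLayer):
        def connect(self):                 # overrides the parent implementation
            print("CMW connect")

    class JAPC(CommunicationLayer):
        def connect(self):                 # overrides the parent implementation
            print("JAPC connect")

    for obj in (CommunicationLayer(), CMW(), JAPC()):
        obj.connect()                      # dispatched per object, like cl-connect.vi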


2.1.5 Actor framework

When creating large applications in LabVIEW, there is often a need for multiple parallel loops handling

different tasks like acquiring data, updating user interface, logging and error handling. As the

application is expanded it becomes convenient to launch some of these tasks as independent VIs

running asynchronously. The Actor Framework was made to simplify the work of creating multiple

independently running VIs that need to communicate with each other. Although a similar setup could

be created with queues and state machines, actors are far more reusable and scalable because the

actors themselves are LabVIEW classes. The framework consists of two parent classes, Actor and

Message. The Actor is the module with the state data and Message represents the messages passed between

actors to trigger state changes.

Actors are launched asynchronously and can handle different tasks in the application. All descendant

classes from the main actor can override the functionality in the Actor Core, which serves as a queued

message handler. This VI receives and responds to messages and data sent to it by other actors in the

system.

Communication between the actors happens with messages. The messages act in many ways as queues, but since they are also classes, the message methods can be overridden by descendant classes. All message objects inherit from Message and must override the parent's Do.vi method [7].

Figure 12: Actor framework
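As a rough text-language analogy (not the LabVIEW Actor Framework itself), the sketch below shows an actor draining a queue of message objects, where each concrete message class overrides a do method to trigger the state change it represents:

    # Rough Python analogy of the Actor/Message idea: an actor drains a queue
    # of message objects, and each concrete message overrides do() to trigger
    # the state change it represents.
    import queue
    import threading

    class Message:
        def do(self, actor):
            raise NotImplementedError      # descendants must override, like Do.vi

    class Greet(Message):
        def do(self, actor):
            print("hello from", actor.name)

    class Stop(Message):
        def do(self, actor):
            actor.running = False

    class Actor:
        def __init__(self, name):
            self.name = name
            self.inbox = queue.Queue()     # the actor's message queue
            self.running = True

        def run(self):                     # plays the role of Actor Core
            while self.running:
                self.inbox.get().do(self)  # receive and respond to messages

    demo = Actor("demo")
    worker = threading.Thread(target=demo.run)
    worker.start()
    demo.inbox.put(Greet())                # messages sent by other actors
    demo.inbox.put(Stop())
    worker.join()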

2.1.6 CCDB

The CERN controls configuration database (CCDB) contains information from the accelerators, such as the quench protection systems and the cryogenics. The data is structured in Devices, Properties and

Fields. One Device can contain multiple properties and there can be multiple fields in one property.

The properties can be read only, write only or read and write. The database maps the device and server

names to their real locations, the physical devices and pieces of hardware used in controlling and

monitoring the accelerators. It also holds the restriction configurations, telling the client what is needed for access or from where it has to connect.
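As an illustration of the nesting only (the class names and example values below are placeholders, not the real CCDB schema), the Device–Property–Field structure can be sketched as:

    # Illustrative sketch of the nesting only: one device holds several
    # properties, and each property holds several fields.
    from dataclasses import dataclass, field

    @dataclass
    class Field:
        name: str
        writable: bool

    @dataclass
    class Property:
        name: str
        fields: list = field(default_factory=list)

    @dataclass
    class Device:
        name: str
        properties: list = field(default_factory=list)

    example = Device("EXAMPLE.DEVICE",
                     [Property("Acquisition", [Field("value", writable=False)])])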



2.1.7 OASIS

Open Analogue Signal Information System (OASIS) is a system used by engineers and operators at CERN

to watch the behaviour of systems in real-time. It acquires analogue signals from devices in the particle

accelerators at CERN and displays them in a graphical way.

2.1.8 RDA

RDA (Remote Device Access) provides access to devices from application programs in Java or C++,

running on UNIX or Windows platforms. It is based on a client-server model where accelerator devices

are implemented in servers, and client applications access them using the RDA client API.

In the RDA design all accelerator equipment can be accessed as a device. The devices usually

correspond to physical parts such as cable infrastructure, actuators, sensors, measurement devices or

computer hardware. The devices can be controlled via properties describing their state, and each

property has a name and a value. The RDA model defines the basic access methods for the devices

such as set, get and subscribe. By invoking these methods, applications can read, write and subscribe

to the property values [10].
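Expressed as pseudocode (the RDA client API is provided for Java and C++; the names below are illustrative placeholders, not the real API), the three access methods take roughly this form:

    # Hypothetical client-side sketch of the RDA access model described above;
    # the function and parameter names are placeholders, not the real RDA API.
    client = rda_connect()                                     # obtain a client handle
    value = client.get("SomeDevice/SomeProperty")              # read the current value
    client.set("SomeDevice/SomeProperty", value)               # write a new value
    client.subscribe("SomeDevice/SomeProperty",
                     on_update=lambda v: print("update:", v))  # notified on change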


2.2 Requirements

Operators working in the control room at CERN generally have many screens with multiple applications

running at the same time, monitoring the accelerators. The RADAR 2.0 application should present a

list of all possible devices in the database with the possibility to easily set, get and subscribe to these

devices by a drag and drop action and a quick configuration.

1 Intuitive and easy to use

The assumption is that the end user has no experience in LabVIEW. The application should be

created in such a way that it is easy to understand and manoeuvre.

2 Connection to all devices in CCDB

List of all devices from the controls database for connection.

3 Set, Get and Subscribe option for all devices

Possibility to set, get or subscribe to any field in the database using JAPC and CMW protocols

4 CERN authentication for connection to devices

Most devices have restrictions regarding who can read or write to the different properties.

Authentication should be possible with RBAC, based on user info or location.

5 The application should run on Linux and Windows

Linux is the most common operating system in the CCC.

6 Responsive User interface

7 Should be able to reconnect devices if they are down

If a server goes down, it should be possible to reconnect instead of restarting the application.

8 Possibility to add custom user functionality

The user should have the possibility to add custom code to do manipulations on the data.

9 Possibility to have multiple panels

The application should allow multiple projects to be created and open at the same time.

10 Follow MTA coding standards

11 Documentation

The application should be documented on CERN's internal wiki pages, with a user guide for installation and use of the application as well as technical documentation.

Following the coding standards and documenting well is important as the application most likely will

be used, supported and maintained after the original developer has left CERN.


2.3 Analysis

For the application to be intuitive and easy to use, there should be a minimum of panels, with intuitive graphics and one single menu panel even if multiple projects are open. Configuration of new

connections should only appear when the user has selected and dragged a device/property onto a

project’s panel. LabVIEW scripting can be used in the background to detect this action and place code

on the front panel and block diagram of the project. When all code is generated on the panel, it should

be able to run as a standalone application. The application is split in two parts where one part is the

monitoring and configuration part responsible for displaying all options to the user, and monitoring,

creating and modifying any new projects. The other part is the Main project, being configured by the

configuration panel. It serves as a connection-template before being populated by the configuration

program. To evaluate the different design decisions for the two parts, they will be referred to as the

“Configuration program” and the “Main program”:

1. The Main Program is responsible for the communication with the different devices and

presenting the results to the user. This part of the application should be able to run as a

standalone application after it has been edited with the configuration program.

2. The Configuration program’s main job is to present the user with configuration options and to

modify the Main Program based on these configurations and user actions.

2.3.1 Design decisions

The main program should perform different actions depending on the type of protocol and action the

user has chosen. It should be expandable in case new protocols and device types are added in the

future. It should also be easy to read and understand, both if the user wants to edit it and for the future

maintenance of the program. With this in mind, an object-oriented approach seems to be a good

starting point. Each object the user has configured will be translated into an object belonging to a class.

Any configurations done by the user will be a part of the object's private data. For new functionality in

the future, new classes can be added. OOP encourages cleaner interfaces between sections of the

code, scales better and is easier to debug. In addition, when more than one programmer is working on the application, an object-oriented style lets us split up the code more, which makes it easier to work on different sections of the code without interfering. New features can be added more easily, and the risk of introducing errors into unrelated sections of code is small. OOP often means creating more VIs than a non-OOP design, but it also forces the programmer to develop modular code where each of the modules makes sense. The actor framework was considered for the main

program, but since it will run independently and the user will be given the possibility to make changes

to it, weight was given to the more readable design.


An event loop is needed in order to handle events coming from the user interface. In order to keep the

user interface as responsive as possible, it is a good idea to split the application into multiple loops

handling different tasks such as treating the data from devices and handling errors. This led us to a

multiple loop design with user events as main communication. Although queues would be able to do

much of the same job as user events, user events were a good choice as we already needed an event

structure. Dynamic user events can be registered to the same event structure, creating cleaner code.

By this design, we can programmatically generate user events, which can be triggered from other

places in the application. The requirement of multiple main panels can be fulfilled by setting a unique

name for each main panel as LabVIEW can only have one VI with the same name in memory at any

time. For the subVIs, the execution states can be set to clones so that a new instance is launched for

every subVI call and race conditions are avoided.

For the Configuration part of our application, we want to launch as many monitoring-daemons as there

are Main panels. Each of them should be responsible for monitoring actions on one main panel and

generating controls/editing the block diagram. There should also be one configuration menu for the

user in order to create or edit the panels. With many modules running in parallel, the actor framework

pattern prevents race conditions and deadlocks in the communication between modules. The

requirements for the application translate well into the actor framework, where the monitoring daemons can be launched asynchronously to monitor different panels as they are created. The framework provides all the benefits of a queue-driven-state-machine design but with increased flexibility,

more reuse potential and reduced coupling between modules [7].

The challenge is to ensure that all user actions are taken into consideration in order to make a robust

program. Since the user is not expected to have LabVIEW experience, the application should prevent programming errors and clean up after the user if the user does unwanted things with the

program.


2.4 Architecture

The application consists of two parts, the Configuration program and the Main program. The main

program handles the communication with the devices and displays the values to the user. The user can

interact with the program to get, set and subscribe to different properties and the data is displayed in

different controls on the front panel. It can run as a standalone application after the user interface has

been configured by the Configuration program. The Configuration Program is designed with the actor

framework and interacts with the user through menu panels, displaying options for creating and

modifying the main program(s). The configuration program launches one monitoring daemon for every

main program. The daemons are responsible for monitoring, modifying and generating code on the

main program using VI scripting.

Figure 13: Configuration program and Main programs


2.4.1 Main Program

The Main Program is designed with an object oriented design pattern with multiple asynchronous

loops running in parallel. The different tasks communicate with each other using user events and

notifiers. Before it has been populated by the configuration program, it serves as a template or a shell

for connecting to devices.

Initialization of the program includes reading the configuration file, initializing the communication and

initializing all objects. The User interface loop takes care of all the user events coming from the front

panel, including button clicks and exit. The Set and Get actions are executed from this loop based on the events. The Data Management loop handles all the data coming from the devices and updates the controls

on the front panel with new values. The error handling loop takes care of treating all errors in the

application. If devices need to reconnect, a request is sent to the Timer loop. The timer keeps track of

the reconnection requests and sends a user event to UI (User Interface) Loop when a request has timed

out. For shutting down the application, an exit-notifier is sent to all loops, references are closed and

the user events destroyed. The architecture of the program is illustrated in Figure 14.

Figure 14: Architecture Main Program


Communication

User events: User events are used for communication between the different loops. Figure 15 illustrates the use of user events in the program. All loops can send error events to the error handling loop. The error handling loop can send a reconnection event to the timer and to the UI loop. It can also send a shutdown event to the UI loop.

Figure 15: User events

Notifiers: User events can be sent from multiple locations but are only received in one place; notifiers act in the opposite manner. A notifier is used in the UI loop to notify all other loops to shut down. This action is normally triggered by the user exiting the program or by a critical error resulting in a safe shutdown.
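As a text-language analogy only (not the LabVIEW implementation), the two mechanisms can be sketched with a queue for the many-to-one error messages and a single event flag for the one-to-many shutdown notification:

    # A queue gives many-to-one messaging (any loop may send, only the error
    # handler receives); a single event flag broadcasts the shutdown request
    # one-to-many to every loop.
    import queue
    import threading

    error_events = queue.Queue()      # any loop can put(); only the error loop get()s
    shutdown = threading.Event()      # set once by the UI loop; seen by every loop

    def worker_loop(name):
        while not shutdown.is_set():              # all loops observe the same notifier
            error_events.put((name, "minor error"))  # send an error event
            shutdown.wait(0.1)                    # placeholder for the loop's real work

    # The UI loop would eventually call shutdown.set() to stop all loops.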

Error Handling

Errors are treated differently based on how critical they are and on the program's ability to take action on them. The error handling system allows a clean exit and provides clear prompts

to the user. In addition to treating and displaying the errors to the user, they are logged locally and in

the team’s logging database.

Local error handling: With local error handling, actions are taken in the code to fix errors locally where

they occur. It is the primary approach for minor errors that can be prevented or controlled and that

should not result in a system shutdown. If the error cannot be handled locally, the calling VI will send

the error to the global error handler to treat it. Errors are cleared after they have been handled locally

or passed on to the global error handler.

Global error handling: Some errors require action from a higher-level VI. For example, critical errors,

which occur when the application cannot continue operating, require a safe shutdown. This involves

actions taken in larger parts of the application. The goal of the global error handler is to handle errors

that could come from multiple locations. Actions expected by a global error handler include logging

the error, displaying the error information to the user, placing the system in a safe state and/or

initiating a safe shutdown. All errors that are not treated locally are sent to the global error handler as

user events.
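A condensed sketch of this split, written in Python-like form and simplified compared to the actual application (the exception types and the SystemExit shutdown are stand-ins), could look as follows:

    # Minor errors are handled and cleared where they occur; anything else is
    # forwarded to a single global handler that logs it and, for critical
    # errors, triggers a safe shutdown (simplified here to raising SystemExit).
    import logging

    def global_error_handler(err, critical=False):
        logging.error("global handler: %s", err)     # log and inform the user
        if critical:
            raise SystemExit("safe shutdown")        # place the system in a safe state

    def read_device(get_value):
        try:
            return get_value()                       # the operation that may fail
        except TimeoutError:
            return None                              # local handling: error cleared here
        except Exception as err:
            global_error_handler(err, critical=True) # escalate what cannot be fixed locally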



Logging

Information about critical errors is logged to disk. Error logging is a benefit for both the user and

developer. Users have a historical record of what happened in case of an error and a developer can

learn more about what caused the error. The log contains the time of the error, the error code and the

error source. Errors are also logged to the section's central logging instance using the RADE logger.

Classes

We decided to make JAPC and CMW child classes of a common communication-layer class. The Device-

Property class creates and launches objects of the Communication-Layer class. The Device class initializes the Device-Property class by reading the configuration file and setting the private data of

Device-Property.lvclass.

Figure 16: Class hierarchy Main Program


Device class: The Device class reads the configuration file and creates as many objects of the Device-Property class as there are entries in the file. Each object then represents one entry in the file. All configuration

data from the different entries is a part of the class’ private data.

Device-Property class: The Device-Property class links the different objects on the front panel with

entries in the configuration file. Based on these entries (called sections), objects of JAPC, CMW and

Communication-Layer class are created. This class is also responsible for connecting and disconnecting

all the instances.

Communication Layer class: The Communication-Layer class is the parent class of the different

communication protocols (CMW and JAPC). It stores the configuration of the object, the data type of

the control and the reference to the communication. This class has dynamic dispatch methods for

Connect, Disconnect, Set, Get and Subscribe. The child classes will override these methods when

called.

JAPC class: The JAPC class contains the implementation specific for the JAPC protocol. The dynamic

dispatch methods will override communication-layer’s implementation at runtime.

CMW class: The CMW class contains the same methods as the JAPC class, but the implementation is

specifically for CMW.
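A condensed text-language analogy of these roles (illustrative only, with simplified names and no real protocol code) is:

    # Device parses the configuration file, creates one DeviceProperty object
    # per entry, and each entry produces a CMW or JAPC object behind the
    # common CommunicationLayer interface.
    class CommunicationLayer:
        def connect(self): ...
        def get(self): ...

    class CMW(CommunicationLayer): ...
    class JAPC(CommunicationLayer): ...

    class DeviceProperty:
        def __init__(self, section):
            protocol = CMW if section["protocol"] == "CMW" else JAPC
            self.layer = protocol()        # methods dispatch at run time by class

    class Device:
        def __init__(self, config_sections):
            # one DeviceProperty object per entry ("section") in the config file
            self.properties = [DeviceProperty(s) for s in config_sections]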


2.4.2 Configuration Program

The configuration programs’ main job is to display the configuration options to the user, monitor the

main program for changes and to dynamically build the front panel and block diagram of the main

panel based on these changes. Because of the many tasks running in parallel, the actor framework

design pattern was chosen for the application.

In LabVIEW, the VI scripting nodes cannot be used to modify running VIs. Events in LabVIEW can only

be registered on VIs that are running. The combination of these two facts results in these design

challenges:

If the Main program is not running, we need to constantly poll for changes instead of using

event structures. This can be challenging for the CPU, especially if multiple panels need to be

monitored at the same time.

If the Main program is running, events can be registered in an efficient way, but the program

must be stopped to generate code on the program with VI scripting. This might be confusing

for a user.

Our solution was to keep all the front panels in edit mode while they can be modified. To minimize the

CPU usage, only the frontmost panel is constantly being monitored as this is the only panel accessible

to the user at any time. In order to execute the least amount of code, another property of LabVIEW is

exploited: for every user action on a VI in edit mode, the “undo state” in LabVIEW is updated. This

property is there to let the user undo any last actions on a VI. To monitor the front panel, the undo

state is monitored for new, deleted or moved elements.

When the undo state is updated, actions are taken based on the state. If objects are deleted, the panel

is cleaned. If there is a new object, it is analysed and a configuration panel can be displayed to the user.

Based on these configurations, controls and indicators are created on the panel. New event cases and subVIs are added to the block diagram and new elements are wired together.
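Expressed as pseudocode (the helper names below are hypothetical stand-ins for the corresponding VI-scripting operations, not a real API), the monitoring amounts to polling the frontmost panel's undo state and reacting when it changes:

    # Pseudocode sketch of the undo-state polling; get_frontmost_panel,
    # get_undo_state, clean_panel and show_config_panel are hypothetical
    # placeholders for the corresponding VI-scripting operations.
    import time

    def monitor_panels():
        last_state = None
        while True:
            panel = get_frontmost_panel()      # only the visible panel is polled
            state = get_undo_state(panel)      # updated by LabVIEW on every user action
            if state != last_state:
                if state == "delete":
                    clean_panel(panel)         # remove orphaned controls and code
                else:
                    show_config_panel(panel)   # ask the user how to handle the new item
                last_state = state
            time.sleep(0.1)                    # keep the CPU usage low between polls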

The menus, configuration panels and monitoring VIs are all actors, running asynchronously and

communicating with each other using messages in the actor framework. As the user is expected to

have more than one monitor, all front panel windows are displayed and centred on the active monitor.


Actor framework

The inheritance hierarchy for the actors and main classes in the Configuration Program is shown in Figure 17: Inheritance hierarchy actors. The LabVIEW Object is the ultimate ancestor. The Actor class is part of the actor framework, and all other actors must inherit from this class to override its methods. As seen in the figure, Logger is a class but not an actor. SupervisorActor is used to kill actors and has the references to all actors inheriting from it. The caller and the callee have references to each other’s queue. The Actor VI calls Actor Core, which is a dynamic dispatch VI. All new classes set to inherit from Actor can override the functionality in Actor Core.

Figure 17: Inheritance hierarchy actors


The program is started by a launcher VI (Figure 18: Launch chain), which launches the parent actor, Core. Core launches the DaemonManager and TopLevelGUI actors. TopLevelGUI launches the panels LoginPanel and TreePanel. DaemonManager launches FPDaemon when told to do so by TreePanel.

Figure 18: Launch chain

Core: Launcher launches the application by launching Core. A configuration file containing all initialization data is read. This includes the names, paths and quantity of actors. TopLevelGUI and DaemonManager are launched by the Core class, using the Subsystem class.

TopLevelGUI: The TopLevelGUI actor is launched by Core and initialized by reading a configuration file (TLGUI.ini). Using this file, the actor gets the names and quantity of all actors to launch. It then creates objects of LoginPanel, ControlPanel and TreePanel. This class is responsible for displaying the different panels to the user and takes care of closing, resizing and moving the panels. The panels include the login panel, configuration panel, file browser and the tree panel. The top-level GUI actor displays the child actor panels as subpanels. In that way, the size, position and transitions are controlled by the top-level GUI and not the child actors themselves.

DaemonManager: DaemonManager is launched by Core. It receives messages from TreePanel and

launches the folder browser or FPDaemons based on these messages. It keeps track of all the

references and communication and forwards messages to the FPDaemons. When an item is dragged

from the TreePanel, the message is forwarded directly to the active FPDaemon. The active FPDaemon

is found by polling all open instances of Main program to check which one is frontmost.


FPDaemon:

This class is responsible for monitoring the front panel of a VI for changes. One FP Daemon object is

launched per instance of the Main program. Its task is to build and delete controls and indicators, edit

the block diagram and edit the different configuration files.

If anything is dragged from the TreePanel in the Configuration Program and dropped on the Main Program’s front panel, FPDaemon displays a configuration panel. Controls, indicators, functions and wires are dynamically created based on this configuration. If anything is deleted from the Main Program, FPDaemon deletes all associated controls and cleans up the code on the block diagram. All modifications on the Main program are done using VI scripting. To guard against unwanted user actions, the block diagram is programmatically cleaned of broken wires, broken event cases and orphaned controls.

TreePanel:

TreePanel is launched by the TopLevelGUI actor. It serves as a menu for the user, displaying all options and available devices. The user actions are caught with an event structure, and messages are sent to DaemonManager to act on the actions. The reference of DaemonManager is retrieved from Core, which has access to both TopLevelGUI and DaemonManager.

LoginPanel:

The login panel is the first panel launched by the TopLevelGUI actor. If/when the login information is correct, the resizing is reverted and the panel is closed.

GenericPanel:

Sends messages to TopLevelGUI in order to load subpanels and slide in new panels.

FolderBrowser:

This class is not an actor but a panel used to browse for a project path in order to create a new project or load an old one. The folder browser triggers new messages for copying the template project to the chosen location.

Subsystem:

The job of Subsystem is to create the child objects.


Communication

Communication between the different actors happens by messages. The messages are LabVIEW classes with dynamic dispatch methods. Functionality can be executed inside the messages, and an actor can only send messages to an actor it has the reference to.

All message classes inherit from the “Message” class, which is the ancestor of all messages. Messages are sent via a directed queue from the caller to the actor or via a separate queue from the actor to the caller. In general, messages should be events along the lines of “you need to know this” and not synchronous requests for information. The message classes consist of a control with the class private data, a “send-VI” and a dynamic dispatch “Do-VI”. Whatever is in Do.vi will override the Do method in the Message class.
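
As a rough text-based analogue of this message pattern (a minimal sketch, not the actual actor framework API), a base Message class whose do method is overridden by concrete messages could look like this:

```python
import queue

class Message:
    """Ancestor of all messages; concrete messages override do()."""
    def do(self, actor):
        raise NotImplementedError

class StopMsg(Message):
    def do(self, actor):
        actor.running = False          # the behaviour travels with the message

class Actor:
    def __init__(self):
        self.inbox = queue.Queue()     # the actor's message queue
        self.running = True

    def send(self, msg: Message):
        self.inbox.put(msg)            # the caller needs a reference to this queue

    def run(self):
        while self.running:
            msg = self.inbox.get()     # receive and dispatch dynamically
            msg.do(self)
```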

Error handling

Global error handling is built into the actor framework. All errors are passed to the calling actor, and in that way all errors are handled in a central location. These errors are set to be displayed to the user and can result in a safe shutdown of the application. Less critical errors are handled locally where possible and are displayed, logged and later ignored.

All classes inherit from the Supervisor actor, and since all error messages are passed to the caller by default, the error handling mechanism is put in place there. The error handler displays the error message in a pop-up window, logs the error to file and stops the actor. The dynamic dispatch VI Handle Error.vi is implemented here and will override the error handling VI in the Actor class.


2.5 Implementation

This chapter presents the different parts of the application more thoroughly. The configuration

program has multiple panels providing different options, acting as the menu for the user to configure

the main program.

2.5.1 Launcher

The Launcher.vi will display the CERN logo for 2 seconds before launching the root actor of the

application, the Core class. When the Core actor is launched, the front panel is closed. Core actor

launches the daemon manager and top-level GUI actor in order to display the next panel for the user.

2.5.2 TopLevelGUI

TopLevelGUI is the actor responsible for loading the different panels. We wanted to remove the default

LabVIEW top- and menu-bar and create our own design for the panels. The design was chosen for the

application to be easily recognizable among all the different applications running on the monitors in

the CCC. A top bar is needed in order for the user to easily be able to move the panels around on the

monitor. A custom blue top bar is implemented using buttons and an event structure catching the mouse click and drag events; the panel is then moved accordingly. In order to make the design consistent throughout the different panels, TopLevelGUI’s actor core consists of a top and bottom bar and an empty subpanel in the middle. When the different panels are loaded into the subpanel, the panel is resized to fit the default size of the loaded subpanel (Figure 19: TopLevelGUI loading subpanel).

Figure 19: TopLevelGUI loading subpanel


2.5.3 Login panel

The first panel loaded by TopLevelGUI and displayed to the user is the login panel (Figure 20: Login panel). The username and password are checked against the CERN database to ensure that the login information is correct. A message is displayed at the bottom of the panel if the username or password is incorrect. If the credentials are correct, they are saved as global credentials for the application. This design choice was made because authentication is required for all devices being connected to. With a global login, the user only has to log in once instead of entering credentials for every single object he is connecting to. It is still possible to use explicit login at a later stage if the user wants to connect to specific devices using different credentials.

Password Encryption

The password is hashed with MD5 (Message Digest Algorithm 5) so that the user credentials are not stored in plain text in the application.
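
As a minimal sketch of this hashing step (illustrative only, not the application’s actual routine), Python’s standard library can produce an MD5 digest of a password string as follows. MD5 is a fast, unsalted hash, so it mainly keeps the credential out of plain text rather than giving strong protection.

```python
import hashlib

def md5_digest(password: str) -> str:
    # Hash the password so it is not kept in plain text in memory or on disk.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

print(md5_digest("example"))  # prints a 32-character hexadecimal digest
```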

Figure 20: Login panel


2.5.4 Tree panel

The tree panel will appear after successful login. This

panel is a part of the TreePanel class and is loaded as a

subpanel by TopLevelGUI. The tree panel acts as the main menu for the user to configure the Main program. It contains different menu options for creating new,

loading old and modifying projects. It also contains a

tree displaying all available devices, properties and

fields to connect to. The currently logged in user is

displayed in the bottom right corner.

The six options on top are the generic options which act

on the application as a whole. The four options “Undo”,

“Redo”, “Edit” and “Build” are options specifically for

each project. If multiple projects are open, these options

will affect the project that is active or in front.

An event structure captures all the user events from the panel, such as button clicks, selection of elements in the tree and dragging of elements from the tree. These

events will trigger different messages to be sent to the

daemon manager for further treatment.

Save: A message is sent to DaemonManager for saving the open projects to disk.

Save as: Same as the save option but also has the option to rename the project.

Return: Return to login panel.

Exit: Sends a message to the caller in order to close the application safely. This button has the

same functionality as the red X-button in the top right corner. The user is prompted to save

the projects when exiting the application.

Figure 21: Tree panel


New / Load

The options “New” and “Load” in the tree panel (Figure 21: Tree panel) trigger a message to the daemon manager to create a new object of the FPDaemon class. This daemon will be dedicated to monitoring the new or loaded project. Based on the message type (new/load), a browse panel is launched for setting the location of a new project or loading an existing one. The browser window is displayed to the user (Figure 22: Folder Browser with "New" option). If the folder path is valid, a new template project is copied to the chosen path. The front panel of the main program in the new project is displayed to the user.

Folder browser

The folder browser is designed to be similar to the Windows file browser, with the same behaviour for creating/deleting/renaming folders. In that way it acts as the user expects it to. The tree is initialized

with the folders from the root directory. If a directory is chosen, the name is collected in order to build

a path. New folders can be created by clicking the

“create new folder” button. The folder will get a

default name “NewFolder” and the user can change

the names of folders by clicking the folders and typing

the new name. Any error messages are displayed to

the user.

The browser window is dynamically populated to

decrease loading time and to make sure it does not

hang/crash if an expanded folder contains thousands

of subfolders. The tree is populated while the user

scrolls. Figure 22: Folder Browser with "New" option shows the folder browser when the "New" option is selected. In this case the user is prompted to set a

folder path and a name for the project. For “Load”,

the user is prompted to give the location of an existing

project. The name of the project is by default

“NewProject” but can be changed by filling the field

“Provide project name” in the browser window.

Figure 22: Folder Browser with "New" option


URL Tree

The URL tree (Figure 23: URL tree) is a section of the tree panel (Figure 21: Tree panel) containing all

devices, properties and fields from the database. The URL tree is a very important part of the

application. The idea for the application is that the user will use this tree to browse for specific

properties, drag the element from the tree onto the main panel and controls will be generated to read

or write to this property.

Because of the large number of devices in CCDB,

populating the tree and making it responsive was a

challenge. There are thousands of objects in the database; a quick search for “LHC” in the tree generates 51 313 results. In order to populate the tree

fast, only the device groups are displayed at start-up.

Expanding a group will populate the tree with all visible

devices. The remaining list will be displayed when the

user scrolls or searches for a specific device using the

search field. The tree is expanded with a list of

properties when a device is clicked. In the same way,

the fields are expanded when a property is selected. In

Figure 23: URL tree, “LHC” is the group,

“LHC.BCTFR.A6R4.B1” is a device, “Acquisition” is a

property and the child elements of Acquisition are the fields.

URL: The URL string is updated for every new selection

in the tree. When an element is dragged out of the tree, the action is captured by an event structure

and a message containing the URL is sent to the daemon manager. If the element is dropped on a

panel, the URL is checked for correctness and a configuration panel is displayed to configure the

control(s) that will be generated (2.5.6 Monitoring the Main program).

Search: For every letter typed in the search field, a Redis11 search is performed for devices in the database. The search is skipped if the search string is empty or contains only one letter, because of the huge number of results it would generate.

11 Redis is an open source, in-memory data structure store, used as database, cache and message broker,

supporting a wide range of data structures [13].

Figure 23: URL tree
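
The thesis does not detail how the Redis lookup is implemented; purely as an illustration, a device-name search against a Redis instance could look like the sketch below. The key layout and the use of the redis-py client are assumptions, not the actual implementation.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def search_devices(term: str):
    """Return device names matching the search term (skip very short terms)."""
    if len(term) < 2:                 # empty or single-letter searches are skipped
        return []
    # Hypothetical key layout: one key per device name under the 'device:' prefix.
    return [key.split(":", 1)[1]
            for key in r.scan_iter(match=f"device:*{term}*")]

print(search_devices("LHC"))
```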


Edit

Edit mode is entered by clicking the edit button in Figure 21: Tree panel. The option is available because

it is a fast way for the user to make changes to an object. The alternative would be to delete the object,

find the correct property in the tree and recreate it with a new configuration. With the edit option

everything about the object can be changed except the URL. This includes the user credentials, control

type, action, protocol and cycle for the object.

To enable the edit mode from the Tree Panel, a message is sent to DaemonManager and forwarded to

all front panel daemons when the edit button is clicked. This triggers all panels to be searched for

selected items. If a selected object is found, the edit mode is deactivated. The configuration file (config.ini) is read to retrieve the configuration of the selected object. A configuration panel (Figure 25) is displayed, initialized with the old configuration data. When the user has edited the configuration,

the old object is deleted and a new object created using the new configuration. All configuration files

are updated with the new object.

Undo/redo

By default, undo and redo actions are handled by LabVIEW. For any change to a VI that is not running, the change is saved and can be reverted with Ctrl+Z or “Undo” from the LabVIEW menu. However, this does not apply when the changes are made programmatically from a different application, such as with VI scripting. In fact, the list of revertible changes is emptied when the VI is modified remotely. A

user might expect to be able to revert changes in this way, either by the shortcut keys or a button.

Options for undo and redo are therefore put on the tree panel (Figure 21: Tree panel).

Every time an object is created or deleted from the front panel, the action is written to a configuration

file named undo.ini. The data written to the file contains all the information needed to recreate the

object. This includes the original configuration, the position on the panel and the action type (added

or deleted). When the undo button is clicked on the Tree panel, a message is sent to DaemonManager

and forwarded to the front-panel daemon monitoring the currently active panel. The latest entry in

the undo.ini file is retrieved and the change recreated. The entry from the file is then moved to another

configuration file, redo.ini.

When the redo button is clicked, the same happens, except that the latest entry is retrieved from the redo file, recreated and written to the undo file instead. In this way it is possible to recreate all actions on the main program triggered by the configuration program. These options always affect the main

program which is currently in front/currently active. This is important as the user might have multiple

panels open at the same time.
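
The exact layout of undo.ini and redo.ini is not given here; the sketch below illustrates the described mechanism under the assumption of one INI section per recorded action, moving the latest entry from one file to the other.

```python
import configparser

def pop_latest(src_path: str, dst_path: str):
    """Move the most recent action entry from one INI file to the other."""
    src, dst = configparser.ConfigParser(), configparser.ConfigParser()
    src.read(src_path)
    dst.read(dst_path)

    if not src.sections():
        return None                      # nothing left to undo/redo
    latest = src.sections()[-1]          # assumed: the last section is the newest action
    entry = dict(src[latest])            # e.g. URL, position, action type (added/deleted)

    dst[latest] = entry                  # record it in the opposite file
    src.remove_section(latest)
    with open(src_path, "w") as f: src.write(f)
    with open(dst_path, "w") as f: dst.write(f)
    return entry

# Undo: take the latest entry from undo.ini, recreate the change, log it in redo.ini.
change = pop_latest("undo.ini", "redo.ini")
```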


Build

When building an executable in LabVIEW, the build specifications are set in the LabVIEW project of the

VI. The build specifications contain information about the source files to include, name, settings and

the destination of the executable. These build specs are a part of the template project that is created

from the configuration program. Changes that need to be made to the build specs include adding the main VI as a start-up VI for the executable and including certain DLLs.

When the build button is clicked, a message is sent to DaemonManager and forwarded to the active

FPDaemon. The build specs for the project are read and the necessary changes made before the

application is built programmatically. As this can take a couple of minutes, a progress ring is displayed to keep the user informed that the build is in progress.

The executable version is placed in the project directory and will run independently from the

configuration program. The figure below shows an example of how the executable version might look.

Figure 24: Executable version Main Program


2.5.5 Config Panel

The Config panel is displayed by the FP daemons whenever a configuration of the objects is needed.

This includes whenever a valid URL has been dragged and dropped from the Tree panel (Figure 21:

Tree panel) or an object is selected in edit mode (2.5.4.3 Edit). The panel displays the selected URL,

datatype and the currently logged-in user on top. These cannot be changed. If multiple fields were chosen, the URL string switches to a drop-down menu containing all URLs. The rest of the options can be changed by the user. This includes the protocol, action type, RBAC authentication and the control type.

Figure 25: Config Panel

Cycle: The cycle is a string that can be set for JAPC and CMW devices. It defines whether the data should originate from a specific cycle.

Protocol: The currently implemented protocols are JAPC and CMW. The protocol defines the

connection protocol to the devices and the choice depends on the device and the network. See

chapter 1.7.1.1 RIO for more about the CERN protocols in RADE.


Action: The available action types are “Set”, “Get”, “Set and Get” and “Subscribe”, corresponding to writing to, reading from or monitoring the selected property. The action type decides the button type and how the block diagram is populated.

Set: Set action will generate a set-button and a cluster containing all the chosen fields for the

property in their original datatype. Because the data is set in the original datatype, the table

of control types in Figure 25: Config Panel is hidden for this option. A new event case is created,

populated and wired on the block diagram in order to act on the button clicks from the new

set-button. Figure 26 shows a button and control created on the front panel by the set action.

Get: Get action will generate a get-button and a cluster containing indicators representing all

the chosen fields for the property. The type of control can be configured by the user by editing

Control Type and the fields are therefore not necessarily represented in their original datatype.

An extra, hidden cluster is generated on the VI containing the original datatypes. This is done

because the original datatypes are needed for initializing the objects. An example of the output

from Get action is shown in Figure 27: Get button and cluster.

Set and Get: Set and Get action will generate controls and populate the block diagram for both set and get. This option is useful when writing to and reading from the same property because the objects do not need to be configured twice.

Subscribe: Subscribe action will generate a cluster containing indicators representing all the

chosen fields for the property. No button is needed since the object will constantly subscribe

to the property. Like with the get action, the control type can be configured and an extra

hidden cluster is created for that reason.

Figure 26: Set button and cluster

Figure 27: Get button and cluster


RBAC login

By default, the authentication method is set to Global. The global credentials are displayed in the top

right corner of the Config panel in Figure 25.

Global: This option means that the object will be connected using the credentials that were entered in the login panel at start-up of the application (2.5.3 Login panel). The global credentials are encrypted

and stored in order to make the configuration faster. In most cases the user will use the global

credentials for all devices.

Custom – The custom option is used for explicit

login with RBAC. This option will trigger a new

panel requesting a new username and password

(Figure 28). This option enables the user to

connect to an object using different credentials

than the global. This is useful because devices

have different access restrictions and a user might

have access to multiple user profiles.

By location – This option will use the computer’s location instead of a username and password as

authentication to connect to a device. Some devices can only be accessed from certain locations and

networks at CERN, for example the control room.

Control Type

The options displayed in the table on the Config Panel depend on the data type of the object. The original datatype of the object is always an option. In addition there are options for graph, string and table; their availability depends on the original datatype. For example, there is no option to view a

string datatype in a graph but a Boolean will have this option. An example of the control types graph

and table is shown in Figure 29.

Figure 28: Explicit RBAC login panel

Figure 29: Graph and table control types


2.5.6 Monitoring the Main program

The challenge is to ensure that all user actions are taken into consideration when monitoring the main

program. This is important in order to make a robust program. Since the user is not expected to have

LabVIEW experience, the application should avoid making programming errors and clean up after the

user if the user does unwanted things with the program. Each instance of the main program is

constantly monitored by a dedicated FP Daemon. Using the reference of the VI, property and invoke

nodes can be used to monitor changes on the front panel. The ideal solution to monitor the panels

would be to receive events from the different user actions, such as deleting or dropping something on

the panel. In that way the daemon would be notified on change, saving CPU resources. Since the VI

being monitored is not running, there is no way to catch these events in an event structure. The

solution was to minimize the use and frequency of polling12 by only doing so in one daemon at a time

and only if the front panel is not running. The DaemonManager forwards incoming messages to the

active FPDaemon. It would be a waste of resources to monitor all panels when in reality only one panel

can be active and modified by the user at a time. Actions are caught by monitoring the undo state

(2.5.4.4 Undo/redo) of the panel. By monitoring the undo state instead of keys and mouse clicks, the

different event types are already separated and the polling does not need to happen as frequently.

Mouse clicks would need to be polled frequently because of the short time of the click-event. The undo

state in LabVIEW does not change until a new event has occurred. This undo state can be read

programmatically using VI scripting. Figure 30: Monitor undo state is a portion of the block diagram

for monitoring a VI. In the figure, a VI scripting node is used to get the undo state of the VI. The undo

state is then used in a case structure to perform different actions based on the state. If the undo state

is “Undo delete”, an object has been deleted from the panel. Two VIs are called to find out what has

been deleted, delete all associated items and to update the configuration files.

12 Actively sampling the status of something

Figure 30: Monitor undo state
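
A rough text-based sketch of this polling loop is shown below. The methods on the vi object are hypothetical stand-ins for the VI scripting property and invoke nodes used in the real implementation.

```python
import time

class PanelMonitor:
    """Sketch of one FPDaemon's polling loop (hypothetical API, not the real VI scripting calls)."""

    def __init__(self, vi):
        self.vi = vi
        self.last_state = None

    def poll_once(self):
        state = self.vi.get_undo_state()           # e.g. "Undo Delete", "Undo Copy", None
        if state == self.last_state:
            return                                 # nothing new happened on the panel
        if state == "Undo Delete":
            self.vi.clean_deleted_objects()        # remove orphaned controls and broken event cases
        elif state == "Undo Copy":
            self.vi.handle_dropped_or_copied()     # new label dropped or an existing object copied
        self.last_state = state

    def run(self, interval=0.5):
        while self.vi.is_frontmost():              # only the active panel is polled
            self.poll_once()
            time.sleep(interval)                   # polling can be slow: the undo state persists
```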


Deleted objects

Deleted objects are detected by monitoring the undo state on active panels, as in Figure 30: Monitor

undo state. All controls on the front panel are counted and compared with the contents of the

configuration file. Controls belonging together (such as the set-button and cluster) are associated with

each other by their name and control type. If one control is missing some associated controls, the

leftover controls are deleted. The application will not work as expected if only some of the controls in

a group are deleted. In that way VI scripting is used to clean up the “delete action” by the user. If a

button is deleted, the event case for this button will be broken. VI scripting is also used to check for

and delete all broken event cases from the block diagram. Finally, the configuration file is updated and

the “delete event” is added to the undo file.
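
A simplified sketch of this reconciliation between the front panel and the configuration file is shown below; the assumption that each configuration section lists its associated control names is made up for the example.

```python
import configparser

def find_leftover_controls(panel_control_names, config_path="config.ini"):
    """Return controls to delete: leftovers whose group is no longer complete."""
    cfg = configparser.ConfigParser()
    cfg.read(config_path)

    to_delete = []
    for section in cfg.sections():
        # Assumed layout: each section stores its associated control names, comma separated.
        group = [n.strip() for n in cfg[section].get("controls", "").split(",") if n.strip()]
        present = [n for n in group if n in panel_control_names]
        if present and len(present) < len(group):
            to_delete.extend(present)      # part of the group was deleted: remove the rest too
    return to_delete

print(find_leftover_controls({"SetButton_1"}))  # hypothetical control left behind by the user
```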

New objects

When a user drags an element from the URL Tree (Figure 23: URL tree) onto the front panel, a new

object should be created for the chosen URL. This is detected by reading the undo state of the VIs front

panel. A label being dropped on the panel generates the undo state “Undo Copy”. If this event is

detected, the panel is searched for new labels. If a label is found, it is deleted immediately and the URL

sent from the URL tree is read. The URL is analysed to check that it is valid. A URL containing Device/Property#Field is valid; a URL containing only Device/Property is valid if the user then selects the field(s) in the panel that pops up (Figure 31: Select URLs); all other URLs are invalid.

When one or multiple URLs are valid, the Config panel (Figure 25: Config Panel) is displayed for the

user to configure the new objects. After the object(s) have been configured, the front panel and block diagram are populated and the configuration files updated.

Figure 31: Select URLs


Copied objects

Copying objects is a fast way for the user to create an exact copy of an existing object. It is important

that the application handles these cases as there is no way for a user to tell if the copied object is

connected or not. The copied objects are detected by the FPDaemon in the same way as new objects.

If the undo state is “Undo Copy” but a new label is not found, the front panel is searched for controls

that are not yet in the configuration file. If such objects are found, they must be copies of previously

created objects. The configuration file is searched for the object they have been copied from and this

configuration is used to create a new copy of the object. The copied control is deleted from the panel

and a new object is programmatically created in its place. Copying objects generates the same actions as new objects, except that the user does not need to enter any configuration.

2.5.7 Building the Front Panel

The front panel of the main program is generated programmatically using VI scripting. Based on the

configuration entered in the Config panel (2.5.5 Config Panel), controls and indicators are created on

the front panel.

The controls are created based on templates and VI-scripting tools. Figure 33 shows an example of

scripting a new control. Figure 32 shows the result after executing this code. In this case, a template is

used. The new control is created on the front panel of “Owning VI” at position [0, 0]. The label text is

then changed using a property node. Template controls are used in the application to simplify the code

and minimize the work done by scripting and property nodes. In addition, not all control types are

available directly through VI scripting. The silver palette in LabVIEW is not available without using

templates.

Figure 33: Create new control from template
Figure 32: Created control


2.5.8 Building the Block diagram

For Subscribe action, no modifications are done to the block diagram. For Set/Get actions, the button

clicks need to be registered in the User Interface loop on the block diagram. VI scripting is used to locate the event structure, add a new event case, and register the event case to a value change event

of the created button. A VI is placed inside the new event case and all wires are connected. For Set

action the cluster and a string also need to be wired to the SubVI. The result is shown in Figure 34.

A part of the code for generating the code on the block diagram is shown in Figure 35. In this example

the references of the wire terminals with matching datatypes are found. The two terminals are then

wired together and finally a SubVI is inserted on the wire. The SubVI used is given by the SubVI path. As the datatypes and SubVI are not hardcoded, the code snippet is quite generic and can be used to place any subVI in any structure as long as the wires are connected to the event/case structure.

Figure 34: Generated event case for Set action

Figure 35: Wire event structure terminals and insert SubVI


2.5.9 Main Program

The main program can run individually after it has been populated with controls and the block diagram

has been wired. Before it has been populated, the Main Program simply acts as a shell for connecting

with RIO. Figure 36: Main program - block diagram shows the block diagram before it has been

populated. The VI will run, but with no controls on the front panel, no entries in the configuration file and no event cases acting on user actions, nothing will happen. The VI consists of four parallel processes: the User Interface loop, the Data Management loop, the error handler and the timer.

Because there can be multiple instances of the Main program running at the same time, the subVIs are set to be re-entrant with shared clone execution. This is because LabVIEW only allows one VI

with the same name in memory at a time. If the same subVI is called from multiple places a race

condition might occur. A clone execution involves launching a separate instance of the subVI for each

subVI call.

Figure 36: Main program - block diagram

Figure 37: Main program - front panel


Initialization

In the initialization phase of the program, the main task is to initialize the objects and the

communication. All controls on the front panel are compared with the entries in the configuration file, and all sections13 to be kept are stored in the private data of the Device class. For every section, one

Device-Property object is created. For every Device-Property object, an object of either CMW or JAPC

class is created based on the Protocol from the section. Figure 38 shows the creation, initialization and

connection of CMW and JAPC objects. In the figure, the datatype is read directly from the cluster and

the object type is decided by “Protocol”. The protocol can be CMW or JAPC, but they are both child

classes of Communication Layer and can therefore go on the same wire. The red VIs are all dynamic dispatch VIs, whose implementation is decided by the object on the wire. Cl-connect.vi takes care of opening the connection to the device/properties and launching the subscribers.
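
In text form, this initialization step can be sketched as a small factory that reads the configuration sections and instantiates the protocol-specific class. The classes below are minimal stand-ins for the ones sketched in section 2.4.1, and the configuration keys are assumptions.

```python
import configparser

# JAPC and CMW here stand in for the protocol classes sketched in section 2.4.1.
class JAPC:
    def __init__(self, config, datatype):
        self.config, self.datatype = config, datatype
    def connect(self): pass            # JAPC-specific connection code
class CMW(JAPC):                       # reuses the constructor; connect() differs per protocol
    def connect(self): pass            # CMW-specific connection code

PROTOCOLS = {"JAPC": JAPC, "CMW": CMW}

def create_objects(config_path="config.ini"):
    """Create and connect one communication object per configuration section."""
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    objects = []
    for section in cfg.sections():                     # one section = one Device-Property entry
        entry = cfg[section]
        cls = PROTOCOLS.get(entry.get("Protocol", "CMW"), CMW)
        obj = cls(config=dict(entry), datatype=entry.get("Datatype"))
        obj.connect()                                  # dynamic dispatch: protocol-specific behaviour
        objects.append(obj)
    return objects
```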

Open connection

Connecting with CMW and JAPC is similar in RADE. The VI for opening a connection is polymorphic and requires the same inputs for both protocols and all actions. Device/Property (the field is filtered away in this case), cycle and datatype need to be provided. The user is authenticated with RBAC.

13 A section corresponds to one entry in the configuration file

Figure 38: Creation, initialization and connection to objects

Figure 39: Open connection with RBAC for CMW Set


Launching Subscriber daemons

Unlike Set and Get, subscribe is constantly monitoring the device and receives updates every second.

In order not to conflict with the other processes in the program, they are launched as independent

processes running in the background. The VI is launched using the “Start Asynchronous Call” function

as seen in Figure 40. The inputs for the VI are provided to the function and the connection is opened

in the same way as with Set in Figure 39. The daemons can later be killed using notifiers. Only one

unique subscriber will be launched for each Device/property combination.

User interface loop

The user interface loop consists of a while loop and an event structure. Its function is to act on events

coming from the front panel but also reconnect and trigger shutdown. By default it only contains cases

for shutdown and reconnection. Both events are dynamic user events that can be triggered from other

places in the application. In addition it handles the action of a user closing the panel. The event

structure is populated with additional events as they are configured with the Configuration program.

Shutdown: Shutdown is triggered by the user pressing the exit button or triggered from the

error handler.

Reconnection: Reconnection is triggered by the error handler or the reconnection timer. Its

job is to reconnect one or multiple devices. Devices should be reconnected when the RBAC

token has timed out. This happens every eight hours and is relevant when the application has

been running for a long time. Devices are also reconnected when the user password has

changed, but this only affects the devices connected to this specific user name. Reconnection

can also be triggered from the reconnection timer when a device is down.

When new events are added by the configuration program, the User interface loop also handles the

Set and Get actions (2.5.8 Building the Block diagram). The Set action takes the data from the set-cluster and writes this data to the device/property using RIO. The Get action performs a get with RIO and sends the received data to the Data Management loop.

Figure 40: Launch subscribing daemon


Data management loop

The process of updating the user interface with new data is separated from the user interface loop to

make the UI more responsive. For that reason, the data received by the Get and Subscribe functions is sent to the Data Management loop for treatment using user events. The incoming data does not always

match the datatypes of the cluster. This is because the user can choose the control type for Get and

Subscribe actions. For that reason Data Management needs to handle all kinds of datatype parsing.

Data Management loop finds all the clusters that should be updated with the received data. The

datatypes of the controls are found and the data is parsed into the correct datatype. In some cases the

data should be appended to the old data and in some cases it should be replaced. The Data Management loop takes care of this logic and of updating the controls with the new values.

Timer loop

The timer keeps track of the reconnection requests coming from the error handler. Based on how many reconnection attempts a device has had, a timeout is added before the next attempt. The timeouts are 1, 3, 5 and 10 minutes. All the requests are put in an array and the array is checked every 200 ms for timed-out elements. On timeout, a reconnection user event is sent to the user interface loop.

The timer also keeps track of the RBAC token timeout. When a connection is opened with RBAC, a token is created which is valid for 8 hours. If the program stays running for many hours, this might lead to an interruption of the subscriptions. All devices are therefore reconnected after 7 hours to reset the RBAC token.
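
As an illustration of this backoff logic (a minimal sketch with an assumed request structure, not the actual timer VI):

```python
import time

BACKOFF_MINUTES = [1, 3, 5, 10]          # delay grows with the number of failed attempts

def next_delay(failed_attempts: int) -> float:
    """Return the reconnection delay in seconds for a device."""
    index = min(failed_attempts, len(BACKOFF_MINUTES) - 1)
    return BACKOFF_MINUTES[index] * 60

def timer_loop(requests, send_reconnect_event, poll_interval=0.2):
    """Check pending reconnection requests every 200 ms and fire the timed-out ones."""
    while requests:
        now = time.time()
        due = [r for r in requests if now >= r["retry_at"]]
        for r in due:
            send_reconnect_event(r["device"])   # user event to the User Interface loop
            requests.remove(r)
        time.sleep(poll_interval)
```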


Error Handling

The error handling in the main program must be robust so that the users are notified and errors are logged and handled properly. Errors originating in JAPC and CMW require actions from larger parts of the application. These errors include connection and authentication errors and require actions such as re-authenticating or reconnecting. In order to detect and treat these errors, the error handling routine is split into two parts: detecting and reporting the error, and handling it.

Notify on error

Detecting errors and notifying the error handler is done by one VI called SendErrorEvent.vi. If this VI receives an error on the input channel, it adds a timestamp and message to the error and sends it as a user event to the Error Handler. It then clears the error so that the error does not spread through

the program. Figure 41 shows the error case of the SendErrorEvent.vi. If there is no error, the error

cluster is just passed through. Extra information can be added with the error using the Data input.

Errors coming from CMW and JAPC have general error codes which can be hard to interpret. Many errors share the same error code but differ in the error message. For that reason, some custom error codes are created for the program in order to easily distinguish the errors from each other when handling

them. This happens as soon as the error is caught in the “SendErrorEvent.vi”. The Error string is

analysed and a new error is generated with a custom error code in “ConvertErrors.vi”. The custom

error code is then passed on to the error handler. If the error does not originate from CMW or JAPC, it

is passed through without being converted. All custom error codes are documented in Error Codes.txt

which follows the application.

Figure 41: Convert, clear and send error
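
A simplified sketch of this error conversion is shown below; the message patterns and custom codes are invented for the example, the real codes being documented in Error Codes.txt.

```python
CUSTOM_CODES = {
    # Substring of the original error message -> custom error code (illustrative values).
    "no connection": 5001,
    "rbac token expired": 5002,
    "access denied": 5003,
}

def convert_error(code: int, message: str, source_is_middleware: bool):
    """Map generic CMW/JAPC errors to custom codes; pass other errors through unchanged."""
    if not source_is_middleware:
        return code, message
    for pattern, custom in CUSTOM_CODES.items():
        if pattern in message.lower():
            return custom, message
    return code, message
```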


Error handler

The Error handler contains a while loop with an event structure. The event structure catches dynamic

registered user events from SendErrorEvent.vi. All the errors received in the global error handler are

logged to file with a timestamp, the error code and source. The timestamp and error code are also displayed to the user in a simplified format. For certain error codes, the error handler will take action

beyond logging the error. These include:

Connection down for a device: Initiates reconnection by sending a user event to the timer

loop. The user event contains information about the device down and a timestamp for when

the reconnection should be attempted. The reconnection timeout depends on the number of

previously failed attempts and is generated by the error handler based on this.

Authentication error: The user is asked to log in again. If the login succeeds, a user event is

sent to the User Interface loop to reconnect all devices containing the old username.

Logging

The log file for the Main Program is created and saved relative to the VI. This is useful for both the user

and the developer for debugging purposes. The log contains all errors with a timestamp and error code

and is written from the error handler. All errors are also logged centrally with the RADE logger, which is a part of the RADE framework (1.7.1.6 RADE Logger). This is useful because errors logged centrally can be accessed later and statistics can be compiled from them.

Figure 42: Error handler


2.6 Testing

Different kinds of testing were applied during development in order to remove as many bugs as possible. Code blocks were kept as modular as possible in order to simplify testing. A testable VI is a VI that can easily be removed and will function outside its intended environment. In addition to testing, several code reviews were held to get feedback on the code. The feedback from code reviews was useful for improving the application and generated ideas for extended functionality.

2.6.1 Test devices

Many of the devices in CCDB are restricted for safety reasons, and because of this, a test server was needed in order to test the functionality of the application with different datatypes and different levels of restrictions. A test server named Japc4Lv_T2_abcopm04 runs on a virtual machine and contains different properties and authentication restrictions, which was useful when testing the application.

2.6.2 Unit testing

Unit testing is testing modules (VIs) separately. Unit tests were created to test single VIs before

integrating them into larger parts of the application. Unit testing involved testing for unexpected or

boundary values, testing inputs where the expected outputs are known and testing that the expected

errors are generated on failure. This is an easy way of discovering bugs early in the development

process. Once the VI is integrated in a larger system, isolating and finding the origin of the error can

be more difficult.

2.6.3 Integration testing

Integration testing involves testing larger parts of the application with multiple VIs together. Even

though the VIs passed Unit Tests, they can interact in unexpected ways once put together. An example

is the way that they might manipulate shared data, leading to race conditions. This includes

opening/closing config files, handling references and making sure the communication works.

2.6.4 Performance tests

Performance tests define and verify the performance of the features that exist in the application. Execution time, memory usage and file sizes were tested. Execution time and responsiveness of the user interface were improved as a result of these tests. More efficient browsing in the Folder browser

and URL tree was implemented. Monitoring memory usage of the application during execution led to

the discovery of unclosed communication references.


2.6.5 Configuration tests

Configuration tests evaluate the execution of the application on systems with different hardware and

software configurations. This includes tests on multiple platforms.

Testing on Linux: The application was developed on Windows, but the main operating system used by the customer (CCC) is Linux. Paths and fonts are different on Linux, and some LabVIEW functions only exist for certain operating systems. Linux tests uncovered problems related to paths that were Windows specific. For OS-specific portions of the code, these errors were fixed using conditional disable structures. Another problem discovered during these tests was the difference in fonts. LabVIEW does not have the same fonts on all operating systems. If a font is used on Windows and the font type and/or size does not exist on the Linux version, LabVIEW chooses a random font and size. This was fixed by standardizing on one font (Application font) and sticking to a few sizes that work on both platforms.

Testing with different screen resolutions: The screen resolution varies a lot and it is important

that the application is able to display the required information even on a minimal screen

resolution. Due to these tests, we implemented custom scaling and resizing of the panels in

the application. A minimum size is set and scaling is enabled for the user to resize the panels

according to the screen.

2.6.6 Stress/load tests

Stress/load tests define ways to push a VI beyond the required limits in the specification. The testing

includes checking how the application handles large quantities of data or runs for an extended time.

Tests were done with a large amount of subscribers to check the update rate and responsiveness. The

application was also tested with a large number of main programs running at the same time.

2.6.7 Functionality tests

Functionality tests include testing that the application fulfils the requirements from the customer. To a certain extent this was not possible without having the actual customer test the application. It is not possible to test setting, getting and subscribing to many frequently used devices due to access restrictions. With the help of test devices, most datatypes and combinations have still been tested.

2.6.8 Internal testing

It can be hard to predict how a different user would expect the application to work. Internal testing was useful for getting feedback from others who do not necessarily know how the application works. The application has been made available to the team for internal testing. Feedback from colleagues concerned the appearance of the application, new functionality and previously undiscovered bugs.


2.7 Discussion

The final application works well according to the original requirements. During the development

process, changing requirements have been welcomed due to the agile working methods. Because of

this, good ideas have been implemented along the way. Examples include edit mode, undo and redo

options and the ability to connect to multiple fields at once. This has resulted in a better and more user-friendly application. However, one requirement from the requirements document has not been met. This is the option for the user to add custom functionality to the application in order to manipulate signals. The idea was to enable the user to put custom pieces of code in the application and also have

a shortcut for frequently used functions such as Fourier transforms. After a while in the development

process we had to stop implementing new functionality and focus on delivering a stable first version

of the application.

Because of the object-oriented programming style, the application is well suited for adding new classes in the future. The original idea of dragging and dropping items on a panel and having the code generated automatically in the background is very useful and has received many suggestions for future expansion.

Plans for future improvements include the possibility to perform SQL queries and creating custom

CMW servers.


3 Automatic Deployment System

The RADE framework (see 1.7 RADE) is the solution at CERN for integrating LabVIEW with CERN’s

accelerator control and storage infrastructures. The framework is developed, maintained and

distributed by the MTA team at CERN. The team also provides support to CERN’s more than 600 LabVIEW users, all running on different platforms and with different LabVIEW versions. Building and distributing

RADE for all of these platforms and LabVIEW versions is a time consuming task. All libraries need to be

tested, commissioned and validated for every version.

This work and the associated release process were in the past all done manually or semi-automated

through scripts and custom tailored tools. A full-featured distribution would typically take up to a week

to complete. When considering that one version of RADE consists of almost 30 000 VIs, that might not

sound too long. Multiplied by 18 different variations of platform/LabVIEW version that makes almost

500 000 VIs to be compiled, tested and distributed. The deployment process is no longer manual,

however, there is no built-in system for managing the libraries’ dependencies. The consequence is that

there is no way of recreating an old release automatically. The versions of the dependencies are not

stored.

The objective of the deploy tool is to automate the deployment process as an integral part of the

workflow. The deploy tool should include a system for automating tests and must be reliable and

repeatable so that if one needs to rebuild a version or roll back to an earlier version, the exact same

versions of the files are used as before. With agile working methods and continuous releases, the goal

is to have new installation packages available for the user continuously.


3.1 Background

Software deployment includes all of the activities that make a software system available for use.

3.1.1 RADE release process

In general, a RADE release cycle involves gathering all the libraries that should be a part of the release.

The libraries are then compiled together for all platforms and versions. Tests are run for all the new or changed libraries and the outcome of the tests is validated. If the previous steps are successful, the libraries are bundled into an installer which is distributed to the users. The process is shown in Figure

43: RADE release process.

3.1.2 Continuous integration

Developing, building and distributing software at CERN can be a challenging process. On the one hand, when working with operational equipment, it is mandatory to carefully plan the potential impact on the accelerator complex. On the other hand, when working with experimental prototypes, new ideas and

designs are tested out rapidly and the software has to be adapted accordingly [11]. Continuous

integration (CI) is a software engineering practice where changes are tested immediately when they

are added to a larger code base. In that way, if a defect is introduced, it can be identified and corrected

immediately. The general rule is that team members commit their work on a daily basis, and builds are

conducted on significant changes. In that way developers get continuous feedback on the software.

The typical result is that defects are smaller, less complex and easier to solve. CI encourages software

to be developed to a high standard where improvements and bug fixes are pushed out reliably and

repeatedly.

Figure 43: RADE release process (Add libraries → Compile → Test → Create installer)


CI server

A continuous integration server (CI server) is a server whose job is to monitor the execution of repeated

jobs, such as building a software project. It allows for building and testing software projects

automatically, increasing productivity in the team. Jobs on the CI server can be configured to run locally

or on slave nodes, and the developers can be continuously notified of the job status.

3.1.3 Automatic software deployment

Software deployment is all of the activities that make a software system available for use. By

automating these tasks, the release process can be done more quickly and reliably. It has the potential

to remove typical “operator errors” that happen when doing repetitive work.

3.1.4 Compilation

When a VI is compiled, LabVIEW simply verifies the existence of any subVIs and relinks the subVIs

to the main VI. LabVIEW also updates the VI to the version of LabVIEW and platform used.

Corrupted VIs that cannot be loaded are detected and reported. Compiling happens automatically

when a VI or project is opened in LabVIEW. Certain circumstances, such as upgrading a VI to a

newer version of LabVIEW, require LabVIEW to recompile the VI. LabVIEW continues to recompile

the VI every time it loads until it is saved. Simultaneously compiling and saving a collection of VIs

is called mass compiling. Mass compiling decreases loading time because LabVIEW does not need

to search for those VIs at a later time. When developing and making changes to the RADE

framework, all VIs are saved in the oldest LabVIEW version used at CERN (2010). LabVIEW ensures

backward compatibility, so what is developed in the 2010 version will also be executable in

LabVIEW 2014 but not necessarily the other way around. When releasing the framework for newer

versions of LabVIEW, the code needs to be compiled for the new version. Although this process is

necessary, it takes a lot of time to recompile the whole framework (consisting of thousands of VIs),

often due to an update in only a few VIs [17].

With the old solution, all RADE libraries were compiled for every release, which is a time consuming

task. An improved solution would be to compile only the parts that have changed.


3.1.5 Dependencies

Dependencies in LabVIEW are the files required by a VI in order to run. LabVIEW automatically

identifies the files it requires if they exist in the expected location. This location is the same as when

the VI was last compiled. If LabVIEW cannot find the files, the user is prompted to browse for them.

Missing dependencies will result in a broken VI. By default, all VIs have dependencies to VIs in vi.lib

because this is where all the original LabVIEW functions are stored. For VIs that contain subVIs, the

subVIs become dependencies to the VI. The RADE libraries’ default location is in user.lib.

Today RADE consists of 24 libraries. Some of these libraries have dependencies to each other. An

example is MTA-lib. This library contains useful functions such as custom string and array functions.

VIs from MTA-lib are used in the RIO library and therefore RIO has a dependency to MTA-lib. Figure

44: Dependencies in RADE shows the dependencies in RADE.

When dealing with multiple versions of the same code, it is important to make sure that the VIs are

linked to the correct versions of the dependencies. This can be ensured by making sure that the correct versions of the dependencies exist in the expected location at compile time. One of the problems with the old solution was that there was no way of recreating the same build automatically, because the versioning history of the dependencies was missing. For each version of a library, there are in total 18

variations of LabVIEW version/operating system. With many variations of each version, it is important

to have a system that makes sure that the VIs are linked to the correct variation of the dependencies.

Figure 44: Dependencies in RADE
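
One way to make such builds reproducible is to pin the dependency versions in a manifest. The sketch below is only an illustration of the idea: the library names match RADE, but the file format and version numbers are invented for the example.

```python
import json

# Illustrative manifest pinning each library's dependencies to exact versions.
MANIFEST = json.loads("""
{
  "RIO":     {"version": "3.2.0", "depends": {"MTA-lib": "1.4.1"}},
  "MTA-lib": {"version": "1.4.1", "depends": {}}
}
""")

def resolve(library: str, seen=None):
    """Return the exact versions needed to rebuild a library, recursively."""
    seen = seen or {}
    entry = MANIFEST[library]
    seen[library] = entry["version"]
    for dep, version in entry["depends"].items():
        if MANIFEST[dep]["version"] != version:
            raise ValueError(f"{library} needs {dep} {version}")
        resolve(dep, seen)
    return seen

print(resolve("RIO"))   # {'RIO': '3.2.0', 'MTA-lib': '1.4.1'}
```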


3.1.6 Agile development

Agile methodology is a working method embracing changing requirements from customers and has

been popular with software developers. It has an iterative approach to development promoting

collaboration and adaptability. Continuous integration, automated testing, pair programming and test-

driven development are techniques used to enhance project agility.

3.1.7 Test Driven Development

Test-Driven Development (TDD) is a software process that relies on the repetition of a short

development cycle: first the developer writes an (initially failing) automated test case that defines a

desired improvement or new function. The developer then produces the minimum amount of code in

order to pass the test, and finally refactors the code. The technique encourages simple designs. The

most important concept of TDD is the fact that the tests should be written before the functionality to

be tested. That ensures that the application is written for testability. Additionally, writing the tests first

leads to a deeper and earlier understanding of the product requirements.

3.1.8 Testing

Testing VIs in LabVIEW involves providing different inputs to a VI to check if it behaves as expected.

There are different kinds of testing, from tests on a single module to tests covering the whole

application. Testing is promoted to be done early in development, following the TDD process. Typical

areas to test are the customer requirements, unexpected inputs, usability for the GUI and error

handling.

Unit testing

Unit testing includes testing the functionality of individual VIs. The VI undergoing tests is referred to as the Unit Under Test (UUT). TDD encourages the units to be kept small. This is a huge benefit when

debugging as the error can be more easily isolated and detected when tests fail. Smaller test cases are

also easier to read and understand. For every change added to the code base, unit tests should be performed on each critical item to ensure that bugs, requirements and other past

experiences are considered. Typical tests to be performed are for boundary conditions on the inputs,

such as empty arrays, invalid references, strange combinations, missing files and bad path names.

Regression testing

Regression tests must be performed at each step to verify that previously tested functionality still

works. This kind of testing enables the developer to find out about newly introduced problems earlier

in the integration process.


Automatic testing

Test automation is the use of special software to control the execution of tests and compare the actual output with the predictions. It increases efficiency and promotes early discovery of bugs. To run automated tests in LabVIEW, a system must be set up that can run all test-VIs automatically and report the results back to the developer. An automated test system includes putting the UUT in the state needed to run the test, running it and capturing the output, validating the results and putting the UUT back into the pre-test state. Test automation with unit testing is a key feature of agile software development.

3.1.9 Version Control Systems

A Version Control System (VCS) is a system that tries to solve the question “How to allow users to share information without accidentally stepping on each other’s toes?” Its mission is to track the versions of files and directories over time. VCSs facilitate collaboration and sharing of data, and also act as a safety net in case a computer crashes, since all the files are stored centrally.

Subversion

CERN has standardized on the open source version control system Subversion (SVN). It can be used from the command line, and an SVN client such as TortoiseSVN is needed in order to use it. Any number of clients can connect to the repository and make changes to the files; SVN records the changes and keeps the previous versions in the repository. A client can create a local copy of the repository called the working copy. It looks like an ordinary directory tree containing your files, where you can edit and make changes. The working copy can change, but the repository itself only changes when you explicitly tell the SVN client to commit your changes. The working copy can be updated to include the latest changes from the repository, merge different versions and revert changes. The root of the repository is split into three folders: trunk, tags and branches:

Trunk: The trunk of the repository contains the latest working copy of the project under development.

Changes in the trunk can be merged and older versions of the files restored.

Tags: Stable versions of the working copy should be copied to this folder and named with a version

number. The contents of tags should never be changed after the tag is made. In that way all previous

stable versions of the project can be reached.

Branches: For changing the direction of a project or doing major changes, a copy of the trunk is placed

in a branch to continue the work while keeping the stable version in the trunk.


3.2 Requirements

RADE must be built and distributed for all supported platforms and versions of LabVIEW used at CERN.

This includes LabVIEW versions ranging from 2010 until 2015 and Windows, Linux and Mac. The

automatic deployment tool should deliver continuous, stable and well tested installation packages of

the RADE framework.

1 Be compatible with the existing SVN repository

2 Be able to execute any programming language or script needed in the build process

3 Run on all our main operating systems (Linux, Windows and OS X)

4 Report any issues encountered automatically

5 Tools should be open source

6 Be easy to maintain

7 Have a plugin based and flexible pool of tools

8 Run Unit Tests automatically as a part of any build

9 The build time should be significantly reduced

10 The system should be easy to use and demand little or no user interaction.

11 The user should be able to choose between building the latest version, the latest stable

version and rebuilding previous versions.

12 The program should be stable, using tools that are well documented and maintained

13 The process should be documented at CERN’s internal wiki pages


3.3 Analysis

This project investigates improvements for the deployment of the RADE framework for LabVIEW. The current deployment tool faces challenges related to build reproducibility and build time. One of the requirements for the new tool is that the build time should be significantly reduced. A major time sink in the current deployment process is the compilation. All VIs are compiled in order to tailor the libraries for each environment (platform and LabVIEW version). With thousands of VIs, this is the single most time-consuming task in the deployment. Today this compilation is necessary because the libraries are only stored in one version, LabVIEW 2010, in the SVN repository. Even libraries that have not changed at all since the previous release need to be compiled with the rest.

Another requirement is that it should be possible to choose which versions of RADE to build and which

versions of the different libraries to include. With the current tool, all the latest versions of the libraries

are used in every release (Figure 45: RADE release - only latest versions). This ensures that any new

changes becomes a part of the new release instantly. While this would be useful in many cases, there

is no option to avoid it. An example of a use case for this would be that a newly introduced bug in a

library was not caught by any tests and became a part of the new release. The possibility to rebuild the

framework with the previous release of the affected library would be useful. A central concept of build

automation and continuous integration is reproducibility. If the build is programmed to always use the

latest versions, without storing information about the different versions used, reproducibility is lost.

Another problem with this approach is that it forces all libraries to be compiled together for every

release because there is no information about which versions of the dependencies they have already

been compiled and tested with.

Figure 45: RADE release - only latest versions


A common solution to fix both issues would be to store all compiled versions of the libraries on a server together with version information (Figure 46). The libraries should be compiled, tested and sent to this server for every new commit to the central code repository, so that the server is always up to date (with all 18 variations). When another library needs one of them (has a dependency on it), the correct variation of the dependency can simply be downloaded from the server. In this way, the libraries only have to be compiled once per variation. If the libraries that have not changed are downloaded from this server, only the libraries with an actual change need to be compiled. The deployment of the framework as a whole would not require compiling at all: all libraries are ready when they are downloaded, and the time saved would be considerable. The information about which versions were downloaded from

the server should be saved in order to reproduce builds. This solution would also allow the option of

choosing which versions to use in a build.

In order to set up such a system there is a need for a server to store all the libraries in all variations. A

system for storing all the dependency versions is needed for downloading the correct variations from

the server. It should also be possible to use it to recreate a build at a later time. Finally, a system for automating all the steps is needed. Several tools can be used to automate the deployment process. The tools should be compatible with SVN and all operating systems. It is also important that the tools are open source and well maintained, to minimize the risk of setting up a system that suddenly stops being compatible in the near future. If possible, selecting tools already used at CERN is an

advantage. If there is already a community of users at CERN, the system will be easier to maintain in

the future. The main idea is to use these tools together to create a build pipeline for LabVIEW releases.

Figure 46: one RIO build generates 18 variations


3.3.1 Evaluation of tools

An essential part of a test-driven development environment is the tool selection. If multiple tools fulfil the requirements, other criteria are taken into account. Well-maintained tools with a large user group and community are preferred. Tools that are already used at CERN are preferred over others, as internal documentation and users already exist.

Continuous integration server

For the automatic deployment tool, the framework versions need to be compiled and tested on their

intended platforms during the build. The CI server can execute and monitor jobs on slave machines

running on different operating systems. Following the requirements, the CI server should be fully

automatic, compatible with SVN, support the common scripting languages, report automatically back

to developers, and have a flexible pool of tools.

Hudson and Jenkins are both plugin-based CI servers used at CERN. They both fulfil the requirements

and are very similar to each other. Both tools have an easy setup and a good interface. They originate from the same tool, but in recent years Jenkins has grown a larger community of users and support. Although they are very similar, Jenkins is open source while Hudson is not. As one of the requirements for the CI server is that it is open source, Jenkins is the best option. In addition, Jenkins is already used elsewhere in the section, making future development and maintenance of the system easier.

Build automation tool

The build automation tool is needed for managing the dependencies of the libraries, to build and

deploy the binaries to the server. The most important feature of the tool is the dependency

management. In addition to the other requirements, the tool must be compatible with Jenkins, build

zip files and be able to deploy the artifacts14 to a central repository.

Apache Maven and Apache Ant are both Java-based build automation tools that build and deploy binaries based on a build script. Maven was created after Ant in order to improve the dependency

management and to simplify the build scripts from Ant. Although Ant has made it possible to search

for and download dependencies using an external program, Maven can be argued to be much simpler.

While Ant needs to be told exactly what to do and how to do it, Maven can be configured to do the

same job in one command. Maven was chosen as the best option due to its simplicity. As long as the

build job does not involve anything out of the ordinary, Maven has the easiest setup. It also integrates

well with Jenkins. Maven's concept is based around the Project Object Model (POM), an XML file that describes the build process.

14 The resulting output of a build is referred to as an artifact, often a binary file


Repository manager

The repository manager needs to be compatible with Jenkins and Maven. In addition to the set

requirements, the tool should have an easy and clean user interface for browsing the repository.

Nexus is a repository manager used for storing binary artifacts. The repository is well integrated with

both Maven and Jenkins. On the Jenkins side, only a single plug-in needs to be installed in order to

deploy artifacts to or download from Nexus. The tool is free and open source and it has a nice web

user interface for configuring and browsing the repository. Nexus is chosen as the binary repository

manager because of its simple setup and UI and easy integration with both Maven and Jenkins.

Unit Testing

Automated unit testing is a part of the deployment process. For automating tests, some software is

needed that runs LabVIEW test-VIs (created by the RADE developers). The alternatives are the Unit Test Framework that ships with LabVIEW, or a custom unit tester. The automatic unit tester must be able to run either from the command line (when initiated by Jenkins) or with a user interface.

NI has a test framework for testing LabVIEW code. While this framework can be good to test certain

functionality, it is not easy to automate. In addition, the LabVIEW Unit Test Framework only runs on

Windows. The alternative is to create a VI that can be run on command line from a CI server. It should

run all tests specified and return the outcome continuously. As one of the requirements is that the unit

tester must run on all supported platforms, a custom unit tester is the preferred option.


3.4 Architecture

As discussed in the evaluation of tools (section 3.3.1), the combination of Jenkins, Maven and Nexus will be used together to create an automatic deployment tool for LabVIEW releases.

Developers commit their code to a central repository (SVN) and the Jenkins CI engine picks up the

committed software either “on change” or periodically. The latest dependencies are downloaded from

Nexus in order to set up an environment for testing the new version. The CI engine runs tests, and

depending on the outcome of the tests, creates the installer and/or notifies the developer.

Maven is used in order to define and retrieve the correct binary versions from Nexus. The

dependencies are defined with Maven in a pom-file, which defines the build. This file is later stored

together with the binaries in Nexus.

Figure 47: Automatic build


The pom file must be unique for every variation of the new version. It must define explicitly which

LabVIEW version and operating system it is compiled for and also link to the exact variation of the

dependencies that it was compiled with. This can be achieved in two ways: The first is to use variables

in the pom file, and the other is to create a new pom file for every variation with hardcoded values. Variables would be easy to implement, as both Maven and Jenkins support them, but they would compromise the reproducibility of the builds. The versions used at any time must be hardcoded and follow the binaries in the repository. A Python script is therefore used to generate the specialized pom-files. Jenkins calls the Python script with a set of variables, and the pom-file is created based on these variables, containing all the correct dependency information.

In order to keep the Python script as simple as possible (it might need changes in the future), all the configuration in the POM that is common to all libraries is extracted and defined in a common parent. This also makes it easier to change common configuration that should affect all libraries. The

pom-file created by Jenkins defines the new Maven project. It can now be used to download all

dependencies from Nexus, zip the source files and deploy the binaries to Nexus. Figure 49 shows how

Jenkins, Maven and Nexus work together to set up an environment on the slave machine. Jenkins calls

SVN to copy the latest version of the library from the tags to the LabVIEW directory on the slave

machine. Jenkins calls the Python script to generate a POM specifically for this version of the library. A Maven project is now defined, and Maven queries Nexus to update the POM with the latest versions of the dependencies. Once the POM is updated, Maven downloads and unpacks the dependencies to the slave. Within seconds of the build job starting, the

environment for compiling and testing is set up on the slave machine.

Figure 48: Generate POM file


3.4.1 Jenkins

The CI server communicates with SVN, Nexus and Maven. It runs jobs on multiple slave machines

triggered by a user or an update in SVN. The outcome of the builds can be reported back to the

developers. The build is split into three jobs in Jenkins performing different tasks.

Job 1 - Create Tag: The first job creates a tag in SVN and should be executed when there is a major

change in the repository. Jenkins copies the contents of the trunk (the source code in LabVIEW 2010

version) to the LabVIEW directory on the slave machine. Maven is used to download dependencies

from Nexus, which are moved to the LabVIEW directory on the slave. In this environment, compilation

and automatic tests are run from Jenkins. The library only needs to be compiled for LabVIEW 2010 (all RADE development happens in 2010), and the dependencies do not need to be compiled. If any of these steps fail, the developer is notified by mail. If they succeed, a tag is created in the SVN

repository and the source code zipped and uploaded to Nexus. This job is separated from the rest

because a developer might want to run automatic tests and tag in SVN without triggering the whole

chain of jobs.

Figure 49: Set up environment on slave


Job 2 – Create artifacts: The second job in Jenkins can be triggered after the first job or run separately.

Its job is to create the binaries (artifacts) for all other versions of the library. The latest tag from SVN is

copied to the LabVIEW directory on the slave machine. Maven downloads the dependencies from

Nexus and copies them to the LabVIEW directory. The library is compiled and tested, zipped and

deployed to Nexus. The input parameters to this job decide which LabVIEW version the library is

compiled and tested with. While the first job only creates artifacts (the binaries in Nexus) for LabVIEW

2010, this job creates all the remaining artifacts. By running the first and second job on every major

change, all 18 variations of the library are created and stored in Nexus. This job is separated from the

others so that it can be run separately to create artifacts for all variations of the last tag. This is

particularly useful if a tag has been created outside Jenkins.

Job 3 – Create installer: This job creates the new RADE installation for all platforms. This is done by

downloading all libraries from Nexus, testing them and creating all 18 different installation packages.

By default, the job uses the latest versions of the top-level libraries and their transitive dependencies.

This will give the most stable release as the dependency chain from Maven is followed. This guarantees

that all the libraries that have dependencies to each other have been compiled and tested together (In

the first and second job). It is also possible to use all the latest versions or to choose the version

numbers of each library. A pom-file describing the versions that were used in the build is created and

deployed to Nexus. This POM will act as a log of what versions were used in the exact release. This job

is fast since all the libraries in RADE are already compiled.

3.4.2 Maven

Maven is mainly used for the dependency management. It is run from Jenkins on the slave machines. Its job is to search Nexus for dependencies, update the versions in the POM, download dependencies, build the sources and deploy them to Nexus together with the POM. The Maven and Nexus integration is shown in Figure 50.

Figure 50: Maven and Nexus


3.4.3 Nexus

To decrease the build time, steps have been taken to avoid recompiling all libraries for each build.

Libraries have to be compiled for every new platform or LabVIEW version. The solution was to create

binaries of the compiled versions after every build and store them in Nexus. For each new build, the

pre-compiled dependencies are downloaded from Nexus.

The Nexus repository contains all the binaries and a pom-file for each variation. The pom-file acts as a

map defining each build. This ensures total versioning history and reproducibility for the build. Figure

51 shows an example of the folder structure for the RIO library stored in Nexus. One new release of the RIO library generates 18 different variations in Nexus (six LabVIEW versions × three operating systems).

The number of different versions and variations in Nexus emphasizes the importance of good version-

and dependency management.

Figure 51: Structure Nexus repository
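As a rough sketch of how one such variation can be addressed, the snippet below maps a Maven coordinate onto the standard Maven repository layout that Nexus uses; the group naming follows the “RADE.$OS.$LV-ver” convention used for the group IDs (see 3.5.2), while the library version number is made up for the example.

# Sketch: derive the repository path for one variation of a library artifact.
# Standard Maven layout: groupId (dots become slashes)/artifactId/version/artifactId-version.type
def artifact_path(group_id, artifact_id, version, packaging="zip"):
    group_path = group_id.replace(".", "/")
    return f"{group_path}/{artifact_id}/{version}/{artifact_id}-{version}.{packaging}"

# Hypothetical coordinate for the RIO library compiled for LabVIEW 2012 on Linux:
print(artifact_path("RADE.Linux.2012", "RIO", "4.2.0"))
# -> RADE/Linux/2012/RIO/4.2.0/RIO-4.2.0.zip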


3.4.4 Unit Tester

The Unit tester is triggered from Jenkins and should run on all the different operating systems. Its job

is to execute Test-VIs that are located in the SVN repository for all the libraries. All tests should be run

automatically and the test output validated. Jenkins should be updated continuously on the test results

and should fail the build job if a test fails.

For some tests it is necessary to set up an environment before they can run, while in other cases it is not. The most relevant test environment is the test servers for the RIO library. In order

to test the functionality of a VI, such as for subscribing to a server, the server must be running. If the

server is down the full functionality of the VI cannot be tested.

Figure 52 shows the class and inheritance structure of the Unit Tester. A Test-RIO class inherits from the general test class, and dynamic dispatch VIs override the functionality from the parent. The Test-RIO class can launch a Test-Server class if needed.

When the tester is run from the command line (as from Jenkins), it runs without a user interface and passes the results back to the command line. It runs all the VIs in the library’s SVN repository whose names end with _Test. If some tests fail, this is detected in Jenkins so that the build is discarded and the developers informed.

The unit tester is initialized by finding all tests in the SVN repository for the given library and creating test objects based on which kind of library it is. Test environments are set up if needed, and the tests are launched asynchronously so that they run in parallel and save time. The test results are fed back to the command line or user interface continuously.

Figure 52: Class hierarchy Unit Tester
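The tester itself is implemented in LabVIEW, but its control flow can be sketched roughly as follows for readers unfamiliar with the VIs; the folder layout and helper names are assumptions made for the illustration.

# Sketch: find all *_Test VIs for a library, run them in parallel and report continuously.
import concurrent.futures
import pathlib

def run_single_test(test_path):
    # Placeholder: would launch the test-VI, capture its output and validate the result.
    return True

def run_all_tests(library_root):
    tests = sorted(pathlib.Path(library_root).rglob("*_Test.vi"))
    failed = 0
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for test, ok in zip(tests, pool.map(run_single_test, tests)):
            print(f"{test.name}: {'PASS' if ok else 'FAIL'}")  # fed back to Jenkins via standard output
            failed += 0 if ok else 1
    return failed  # a count greater than zero should fail the Jenkins build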


3.5 Implementation

This chapter will describe how each tool was set up in order to create an automatic deployment system

for LabVIEW releases. As a proof of concept, the implementation is limited to Windows and Linux

releases. Windows and Linux are the most used operating systems at CERN. In addition, virtual

machines for Mac are not available through OpenStack at CERN, which makes the choice of Linux and

Windows natural.

The build system will be able to execute the following tasks through Jenkins’ scripting interface:

Download sources from an SVN repository

Download dependencies to the sources from a Nexus repository

Compile the sources for different LabVIEW versions using a mass-compiler in LabVIEW

Run automatic tests on the sources, reporting back to the CI engine

Generate menu-files using LabVIEW

Tag the sources in SVN

Build the sources and deploy artifacts to Nexus

Create installers

3.5.1 Virtual machines

Jenkins executes the build jobs on slave machines running on Windows and Linux. This requires two

virtual machines to be set up and configured for these jobs. In addition, a separate virtual machine

needs to be set up for the Nexus repository.


OpenStack

OpenStack is a cloud operating system that controls pools of computing, storage, and networking

resources throughout a datacentre. This datacentre is managed through a dashboard that gives users

access to resources through a web interface. CERN has a policy to virtualize all server machines and

host them on OpenStack (with some exceptions) [12].

VNC

A graphical environment is needed on Linux in order to build LabVIEW applications. A Virtual Network

Computing (VNC) server works as a frame buffer for LabVIEW to execute its graphical dependencies

[11]. Installing VNC made it possible to run LabVIEW on headless Linux systems and to graphically

configure the system without needing an extra client interface.

PuTTY

PuTTY is used for connecting to the virtual Linux machines. CERN has its own version of PuTTY,

PuTTY_CERN, which allows direct authentication.

X-Win 32

X-Win 32 is a display server allowing remote display of Linux windows on Windows. Together with

PuTTY and VNC, this allows for remote connection and display of the virtual Linux machines.

Setup

Three virtual machines were set up using OpenStack. Figure 53 shows the installations on the different instances. Rade-lin-20 and Rade-win-20 are the Jenkins slaves; they need Maven, Python and all LabVIEW versions installed. In addition, Java is required in order to run Maven. Rade-nexus-lin-20 is the Nexus server and only needs Nexus installed.

Figure 53: Virtual machines setup


3.5.2 Maven

Maven is based around the central concept of a build lifecycle. With the lifecycle, the process for

building and distributing an artifact is clearly defined. Figure 54: Maven lifecycle shows the default

phases of the Maven lifecycle. Plugins can be tied to different phases of the lifecycle, and if they are,

they will automatically be executed when the phase is run. When one phase is executed, all phases

leading up to this one will automatically be run as well, including all the plugins tied to them. This

makes Maven simple to use and configure for the user, if this behaviour is desired (which it is in most cases). Only a few commands are needed; the POM ensures that the desired results are achieved. Without any special configuration, Maven automatically downloads any declared dependencies, builds a JAR file and installs it in the local repository. For LabVIEW releases, the code has to be compiled in a LabVIEW environment with its dependencies within reach. The desired behaviour is shown in Figure 55: Maven and LabVIEW workflow.

Figure 54: Maven lifecycle

Figure 55: Maven and LabVIEW workflow


By default, Maven runs all phases in the lifecycle without an option to pause the build. Since the LabVIEW code needs to be compiled and tested in the LabVIEW environment together with its dependencies, this behaviour needs to be changed. By using profiles in Maven, the build job can be split up by tying the plugins to different profiles. The separate blue process diagrams in Figure 55 illustrate the two profiles. Plugins are used to get the extended behaviour (such as building a zip file instead of a JAR). All of this is configured in the POM.

Installation

Installation of Maven requires Java. In order to run, the environment variables PATH and M2_HOME must be set. By default a local repository is created; this is where artifacts are installed.

Settings

The installation directory contains a file called settings.xml. Among other things, this file contains

information to customize the behaviour of Maven such as which repository to deploy the artifacts to,

proxies, profiles and authentication. To override these settings, a copy of this file can be placed in the

local repository. In this file, Nexus is declared to be the main artifact repository for the builds.

In order to make the build system as generic as possible, this settings file is stored in Jenkins and not

locally on the slave machines. In this way the settings can be controlled in one place, and a change in

Jenkins would affect all slaves. The plugin for this in Jenkins is called “config file provider”.

POM

The POM is where the project’s identity, structure and dependencies are declared and where builds are configured. The presence of a pom.xml file defines a Maven project, and the file should be unique to the project. The POM is always deployed and stored together with the binaries of the build. Since it contains all the information needed to recreate the build, it is not a good idea to put variables in the POM: that would destroy reproducibility, the ability to produce the same output when the same inputs are given. Figure 56 shows the minimum information in a POM, which is created by default.

Figure 56: Default POM


The only required fields in the POM are groupId, artifactId and version. The combination of these fields must be unique for the POM. The rest of the contents of the POM includes plugins, profiles, dependencies and information about which repository to deploy the artifacts to (if different from the one in settings.xml).

GroupId: The groupId groups a set of artifacts into a logical group. For all the RADE libraries the groupId is “RADE.$OS.$LV-ver”, where $OS is the operating system and $LV-ver is the LabVIEW version.

ArtifactId: The artifactId identifies the component in the group and the combination of

groupId and artifactId must be unique for a project. For all the libraries in the project, the

artifactId is the name of the library.

Version: The version of the project follows the same versions as in SVN.

A Maven project can inherit plugins and dependencies from another Maven project. This greatly simplifies the configuration of child POMs and ensures that changes affect all projects. The child POMs can always override the configuration inherited from the parent POM. Dependencies in a parent are always inherited but variables are not. This property of Maven is used in the POM files for the RADE libraries, because the parent POM can be more generic with variables while the child POMs are hardcoded. The inheritance section in the POM must be defined as in Figure 57; groupId, artifactId, version and relative path are required.

For each version of the libraries there will be 18 unique pom-files. Since they cannot contain variables, they are generated before each build by a Python script. The generated POM contains plugin declarations, dependencies and profiles; the rest of the configuration is declared in and inherited from the parent POM.

Figure 57: Inheritance configuration in POM


Generating the POM

In order for the pom file to act as an unchangeable guarantor for the build, there can be no variables in it, as different values for the variables would give a different result. A custom pom file is therefore generated for each build based on a set of variables from Jenkins. Python was chosen to generate the pom.xml file because it is platform independent, can be run from the command line and has libraries for easily creating XML files. The only variables needed are the library name, the OS, the LabVIEW version and the release version. The generated pom file defines the plugins needed for each library and the names of its dependencies; the rest is inherited from the parent POM. LabVIEW has no easy built-in way of providing dependencies automatically, so this has to be defined in the script based on the known structure of RADE (which libraries are used where). If this structure changes in the future, the script will have to be adjusted accordingly.
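The generator itself is not reproduced here, but its idea can be sketched as follows; the parent coordinates, file locations and helper name are assumptions, and the real script also fills in the plugin and profile declarations.

# Sketch: write a hardcoded pom.xml for one variation of one library.
# Library name, OS, LabVIEW version and release version come in as Jenkins parameters.
import sys
import xml.etree.ElementTree as ET

def generate_pom(library, os_name, lv_version, release):
    project = ET.Element("project")
    ET.SubElement(project, "modelVersion").text = "4.0.0"
    # Parent POM holding the configuration common to all RADE libraries (coordinates assumed).
    parent = ET.SubElement(project, "parent")
    ET.SubElement(parent, "groupId").text = "RADE"
    ET.SubElement(parent, "artifactId").text = "rade-parent"
    ET.SubElement(parent, "version").text = "1.0"
    ET.SubElement(parent, "relativePath").text = "../rade-parent/pom.xml"
    # Coordinates of this exact variation - hardcoded, no variables.
    ET.SubElement(project, "groupId").text = f"RADE.{os_name}.{lv_version}"
    ET.SubElement(project, "artifactId").text = library
    ET.SubElement(project, "version").text = release
    ET.ElementTree(project).write("pom.xml", encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    # Example: python generate_pom.py RIO Windows 2012 4.2.0
    generate_pom(*sys.argv[1:5])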

Plugins

When using a plugin, the groupId, artifactId and version of the plugin must be provided. In addition,

for most plugins, an id, goals and a phase must also be defined. This is so that Maven knows in which phase of the lifecycle to execute the plugin and which goal of the plugin to run. The plugins used for the

build include Maven-antrun-plugin, Maven-assembly-plugin, Maven-versions-plugin and Maven-

dependency-plugin.

Maven-antrun-plugin: This plugin is used for copying the compiled sources from the LabVIEW

directory to the Maven project directory.

Maven-assembly-plugin: Packages the sources into a zip file.

Maven-versions-plugin: Plugin that updates the dependencies to the latest versions by

checking the Nexus repository.

Maven-dependency-plugin: For downloading the dependencies from Nexus and unzipping

them to the Maven project directory.

Profiles

The plugins are split into two profiles in the POM: Install and Deploy. By splitting the plugins into two

profiles, it is possible to run the profiles separately, and therefore only certain parts of the build job at

a time. This is useful because the library needs to be compiled and tested in-between the different

plugins. The install profile runs the plugins to update the dependency versions and downloads them

from Nexus. The deploy profile gets the compiled sources, zips them and deploys them to Nexus. This

profile should be run after the library has been compiled and tested with LabVIEW, and only if the tests

succeed.


3.5.3 Nexus

Nexus is used for storing the binary artifacts from Maven. It can also be reached directly from Jenkins

with a Maven-plugin. Nexus comes with a few default repositories including releases and snapshots.

The Nexus web interface is pictured in Figure 58 with a list of all the default repositories to deploy

artifacts to.

Setup

Nexus was installed on rade-nexus-lin-20 and set up using the web interface. With Nexus installed and started, the repository address must be set in the Maven settings.xml file in Jenkins. The binaries from Maven are deployed to the Nexus repository “Releases” (Figure 59: Nexus repository, Releases). The authentication profiles in Nexus must match the authentication section in settings.xml for Maven in order to access the repositories. This is configured in the web interface for Nexus.

Figure 58: Nexus repository

Figure 59: Nexus repository, Releases


3.5.4 Jenkins

In the automatic deployment system, Jenkins is the tool that ties SVN, Maven, LabVIEW and Nexus

together by calling them from build scripts executed by Jenkins. The web interface for Jenkins is shown in Figure 60 and allows the user to set up and configure slaves and build jobs. Jenkins, the master node, essentially does management work and executes jobs on the different slaves. The slaves then report back the outcome of the builds, which can be viewed on the master. Users should only have to interact with Jenkins, without needing to touch the slaves where the jobs are actually performed.

Three jobs are set up on Jenkins for testing, building and distributing the RADE framework

automatically using SVN, Maven, LabVIEW and Nexus. The jobs can be triggered manually in the Jenkins UI (Figure 60) or by other events such as a timer, SVN updates or other build jobs in Jenkins.

I created an SVN repository that is accessible from Jenkins. It contains a parent POM, an assembly descriptor, a Python script for creating child POMs and all build scripts used from Jenkins. The LabVIEW

Unit Tester, menu generator and mass-compiler are also a part of the repository.

Figure 60: Jenkins interface


Setup

There was no need to install Jenkins as the MTA team has a central Jenkins instance running. The virtual

machines and build jobs are set up on this instance.

Slaves

Slaves are a mechanism to offload work from the Jenkins master when build jobs start increasing in size and number. Different slaves can also serve different purposes, such as OS-specific tasks. For our purpose, slaves are necessary because they represent the different environments for the RADE builds.

The slave machines and Jenkins need to establish a bi-directional communication link, such as a

Transmission Control Protocol/Internet Protocol (TCP/IP) socket. There are different ways of setting

up this communication link, but Secure Shell (SSH) is the preferred option for Linux machines. The slave only needs the master's public SSH key, and the rest of the work is done by the master, including starting and stopping the slaves. The virtual machines Rade-Win-20 and Rade-Lin-21 are configured as slaves on the Jenkins instance using SSH as the launch method. The Linux machine needs no extra configuration, while the Windows slave needs Cygwin installed in order to connect through SSH. It was also necessary to configure the firewall settings to enable the communication.

Maven integration

Maven is integrated with Jenkins by default because it is run from the command line. Jenkins can execute Maven builds on a slave machine through the build scripts in Jenkins. In addition, there are plugins in Jenkins for simplifying this, including the config file provider plugin. This plugin stores the local settings.xml in Jenkins so that it can be used in the build jobs. Having the settings.xml in Jenkins instead of locally on the slave node makes it easier to change the settings and to make sure all slaves have the same settings. When the settings file is passed from Jenkins, it overrides the Maven settings on the slaves. A name must be set for the file in order to reference it in the build job. In the example in Figure 61, the argument -s $MY_SETTINGS tells Maven which settings file to use and -f $WORKSPACE where the POM is located (in this case the root of the Jenkins workspace).

Figure 61: Maven call from Jenkins
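Since Jenkins build steps can also be Python scripts (see 3.5.5), the same call can be expressed in a small platform-independent helper; the profile and phase names below are assumptions for illustration, not the exact commands used in the build jobs.

# Sketch: invoke Maven from a Jenkins build step using the managed settings file.
import os
import subprocess

def run_maven(profile, phase):
    cmd = [
        "mvn",
        "-s", os.environ["MY_SETTINGS"],   # settings.xml provided by the config file provider plugin
        "-f", os.environ["WORKSPACE"],     # POM located at the root of the Jenkins workspace
        "-P", profile,                     # e.g. the Install or Deploy profile from the POM
        phase,
    ]
    subprocess.run(cmd, check=True)        # a non-zero exit code fails the Jenkins build

# Example usage (assumed phases): fetch dependencies first, then deploy after compiling and testing.
# run_maven("install", "package")
# run_maven("deploy", "deploy")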


SVN integration

A plugin for Source Control Management (an SCM plugin) is useful for connecting to the SVN repository. This plugin makes it possible to run jobs triggered by changes in an SVN repository, to tag in SVN and to set up an environment in the Jenkins workspace for the build by checking out from SVN repositories.

Nexus integration

Although it is possible to download and deploy binaries to Nexus directly from Jenkins, it is not

necessary with this configuration. Communication with the Nexus repository happens through Maven.

3.5.5 Jenkins Jobs

Each job in Jenkins can be executed on one or multiple slave machines. The jobs consist of configurations and shell scripts that get executed on the slaves. Since the scripting language is different for Linux and Windows machines, separate jobs must be created for the two. Possible build steps include executing Windows batch commands, shell scripts and Python scripts, and setting environment variables.

Figure 62: Jenkins job - execute shell script


Workspace

The workspace in Jenkins is defined for each slave. The default location for Windows slaves is in

C:\Jenkins\. This location is used in the jobs with the variable WORKSPACE.

Choice parameters

The jobs can be configured to be parameterized. This means that the user sets variables when the job is started, and these variables are used in the job. This is very useful for the RADE releases, as the user can choose which library to build; the jobs are configured to be generic enough to handle all libraries.

Post-build Actions

Post-build actions are actions that can be executed after a job has finished. A useful option is to send an e-mail notification to the developers if the build fails.

Figure 63: Jenkins build job with parameters


RADE-create-tag

This job is intended to run once, when a developer has made a change to a library in RADE and wants

to create a tag. Figure 64 illustrates the steps for the job. The developer should manually start the job

by running it in Jenkins. When starting the build, the

user must choose which RADE library to tag. The

LabVIEW version is not a choice since the RADE libraries

always should be tagged in LabVIEW 2010. The trunk of

the chosen library is checked out to the LabVIEW home

directory, in user.lib. A pom.xml file is generated using

the python script. Maven is called to update the

dependencies in the pom to the newest versions

available in Nexus. The dependencies are then

downloaded from Nexus, unpacked and moved to the

LabVIEW home directory in user.lib. With both the

chosen library and the dependencies present, the

library is compiled, tested and menus are generated for

LabVIEW 2010. LabVIEW VIs are called from Jenkins to

execute these tasks. If any of these VIs fail, the job will

fail. If all the steps succeed, the library is zipped and

deployed to Nexus and a new tag created in the SVN

repository.

Figure 65 shows one of the shell scripts in the build job. This script calls “compile-all.vi” to compile the library (whose path is given by “$compile_path”). Variables are used to make the job generic enough that it can be used for all libraries in RADE.

Figure 64: Build steps RADE-create-tag job

Figure 65: Compiling library with Jenkins
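To summarise the flow of the whole job, a condensed sketch is given below; the command strings and helper names are assumptions that stand in for the actual build scripts and VIs called by Jenkins.

# Sketch: the RADE-create-tag job as one linear script (simplified).
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)  # abort the job on the first failing step

def create_tag(library, release):
    # 1. Check out the trunk (LabVIEW 2010 sources) into user.lib on the slave.
    sh(f"svn checkout $REPO/{library}/trunk $LV2010_HOME/user.lib/{library}")
    # 2. Generate a hardcoded POM for this library and variation.
    sh(f"python generate_pom.py {library} $NODE_OS 2010 {release}")
    # 3. Let Maven update the dependency versions and fetch them from Nexus.
    sh("mvn -s $MY_SETTINGS versions:use-latest-releases")
    sh("mvn -s $MY_SETTINGS -P install package")
    # 4. Compile, test and generate menus in LabVIEW 2010 (failures propagate via the exit flag).
    sh("run-labview compile-all.vi")
    sh("run-labview unit-tester.vi")
    sh("run-labview generate-mnu.vi")
    # 5. On success: zip and deploy the binaries to Nexus, then tag the sources in SVN.
    sh("mvn -s $MY_SETTINGS -P deploy deploy")
    sh(f"svn copy $REPO/{library}/trunk $REPO/{library}/tags/{release} -m 'Tag {release}'")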


Create-artifacts

This job is created in two different versions, for Linux

and Windows. The two jobs will run in parallel on the

slaves creating artifacts for all the variations not

covered by the first job. The job can be triggered to start

after the first job (RADE-create-tag), by SVN or manually

in Jenkins. The build is parameterized and needs the

library and LabVIEW version as input parameters.

The latest tag from SVN is copied to the LabVIEW

directory on the slave machine. A pom-file is created for

the exact version and variation of the library. Maven

downloads the dependencies from Nexus to the slave.

The library is then compiled for the chosen LabVIEW

version. At this step the unit tests should already have been run. However, configuration tests can be added

here to ensure that the version is working for this

LabVIEW version and OS. Finally, the library is packaged

and deployed to Nexus.

This should normally be run through a build pipeline in Jenkins (3.5.5.7 Build pipeline). With the build

pipeline, the job is executed multiple times in series with different input parameters to produce the

different variations. It is still possible to run the job individually, but it will then only create artifacts for one variation per execution.

Figure 66: Build steps RADE-create-artifacts


Create-installer

For releasing the RADE framework, all libraries are downloaded from Nexus and installers are created

and distributed. A pom-file is created in order to download the correct versions and variations of the

libraries from Nexus. The user can choose to build the latest version or the latest stable version.

The most stable version will be the one where the

internal dependencies have already been compiled

together. The latest version includes all latest

versions of the libraries. The option is set by the user

as a parameter when starting the job (Figure 67).

The libraries are downloaded to the slave and unit

tests run. If tests pass, an installer bundle is created.

The installers are created using Inno Setup 15 for

Windows and RPM16 for Linux. The installation files

are then copied to the CERN shared folders available

for all users. The workflow for the job is shown in

Figure 68. The POM describing the build is deployed

to Nexus. In this way all previous versions of the

release can be reached and recreated.

This job exists in two versions, one for Linux and one

for Windows. It should be run as a part of a build

pipeline in Jenkins so that it is run automatically for

every LabVIEW version. Running the job separately

will only generate one installer for a particular

version.

15 Inno Setup is an open source installer for Windows.
16 RPM is a package management system for Linux distributions.

Figure 68: Build steps: RADE-create-installer

Figure 67: RADE-create-installer parameters


Latest stable version: The latest stable version is found by

following the chain of transitive dependencies in the POMs,

Maven does this by default. The latest versions of all the “top-level” libraries are used. These are the libraries that no other

RADE libraries depend on, the “top of the hierarchy” (See Figure

44). The transitive dependencies are used to get the correct

versions of the rest of the libraries. This is demonstrated in

Figure 69 where the latest version of Alarm is used and the

transitive dependencies are followed. This version will be the

most stable because it is ensured that all the libraries used have

already been compiled and tested together (in jobs 1 and 2). This might result in a RADE release where the most recent changes are not included (like the latest RBAC version in Figure 69). However, it is

ensured that all the libraries in this release are compiled and tested together and that the version is

stable. In order to include the latest changes from RBAC in a stable release, the developer must create

a new tag of RIO and then Alarm. For only minor changes in RBAC, it is often not desired to have to

create a new tag in Alarm and RIO in order to include the latest change in RBAC. Since there is no “new” functionality in these libraries, doing so might violate the intention of the tags in SVN (and create a lot of them). For that reason, the “Latest version” option is often a good choice even though some

libraries have not been tested together in Job 1 and 2.

Latest version: The latest versions of all the RADE libraries are

used with this option (Figure 70). This does not take into

account the versions of the dependencies the library has been

tested with previously. For that reason the tests should be re-

run before the release. This option is preferred if only small

changes are made and the developer wants the changes to be

included immediately without having to update other libraries.

This option will include all the most recent changes in the

release.

Figure 69: Latest stable version

Figure 70: Latest version
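A toy sketch of the difference between the two options is given below; the library names follow Figure 69, while the version numbers and the resolver itself are made up for the illustration (in reality Maven performs this resolution through the transitive dependencies recorded in the POMs).

# Sketch: "latest stable" follows the versions pinned by the top-level library,
# while "latest" simply takes the newest tag of every library.
pinned = {                     # dependency versions recorded when each tag was built
    ("Alarm", "2.0"): {"RIO": "1.3"},
    ("RIO", "1.3"): {"RBAC": "1.1"},
}
latest = {"Alarm": "2.0", "RIO": "1.3", "RBAC": "1.2"}   # newest tag of each library

def resolve_stable(top_level):
    versions = {top_level: latest[top_level]}
    queue = [(top_level, latest[top_level])]
    while queue:
        lib = queue.pop()
        for dep, ver in pinned.get(lib, {}).items():
            versions[dep] = ver            # keep the version the library was tested with
            queue.append((dep, ver))
    return versions

print(resolve_stable("Alarm"))   # {'Alarm': '2.0', 'RIO': '1.3', 'RBAC': '1.1'}
print(latest)                    # the "latest" option would instead pick up RBAC 1.2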


Build pipeline

Build pipeline is a job type in Jenkins that becomes available after installing the Build Pipeline Plugin. It allows multiple jobs to be triggered in series or in parallel. Variables can be passed between the different

jobs. Figure 71 illustrates a build pipeline executing RADE-create-tag and RADE-create-artifacts with

parameters, in series and parallel. The parallel jobs are executed on different slaves.

Figure 73 shows the configuration of the same build pipeline. By default, a failed job fails the build pipeline and stops the remaining build chain. While this can be configured per job, the default behaviour is wanted when building the RADE framework. The installers should not be created and

distributed if some of the artifacts have not been created. Figure 72 shows the Build Graph in the

Jenkins UI for a failed build pipeline.

Figure 73: Jenkins build pipeline configuration

Figure 72: Jenkins failed build pipeline

Figure 71: Jenkins build pipeline


3.5.6 LabVIEW and Jenkins

Jenkins executes LabVIEW VIs on the slaves from build scripts configured in the jobs. If the VIs fail, an

exit flag is set from the LabVIEW environment to a test-result script in the workspace (Figure 74). This

script is called by Jenkins and will fail the Jenkins build if the exit flag is set to greater than zero. The

Jenkins interface listens to standard input/output. Pipes are used in LabVIEW to communicate test

results and outputs from the application back to the Jenkins interface. All the LabVIEW VIs used in

Jenkins are checked out to the workspace before the build job starts. They are compiled to the correct

LabVIEW version before being called in the build scripts.
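The exact contents of the test-results script are not shown here, but the mechanism can be sketched as follows; the file name and format are assumptions made for the illustration.

# Sketch: fail the Jenkins build if LabVIEW recorded a non-zero exit flag.
# LabVIEW writes the flag (0 = success, >0 = failure) to a small file in the workspace,
# and Jenkins calls this check as the last build step.
import pathlib
import sys

flag_file = pathlib.Path("test-results.flag")      # hypothetical file name
exit_flag = int(flag_file.read_text().strip()) if flag_file.exists() else 1

sys.exit(exit_flag)  # any value greater than zero marks the build as failed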

Mass Compiler

The Mass Compiler is the application that compiles the RADE libraries for the given LabVIEW version and platform. It takes the form of a VI named “compile-all.vi”. The LabVIEW version in which compile-all.vi is run decides the version of the VIs in the library.

The Mass Compiler opens and saves all VIs in a given path. The path is set in Jenkins through the environment variable “compile_path”. This path is read by the LabVIEW application and decides

which folder to compile. The VI is called from the command line, does not need a UI to run and will shut

down LabVIEW when the compilation has finished. It reports back to Jenkins using pipes and sets an

exit flag on error, in order to notify Jenkins of the failure.

Figure 74: Set exit flag in test-results script file


Build menus

The menu files for the framework have to be generated in LabVIEW. The generate-mnu.vi is called from the Jenkins build scripts. It reads environment variables set in Jenkins in order to know which library to build menus for. If the menu files exist, the menu icon is loaded and the palette updated. The VI communicates back to the Jenkins interface using pipes. It reports errors by setting the exit flag in the test-results script, and the application quits LabVIEW when finished.

Unit tester

The Unit Tester reads the environment variable “LIB” set from Jenkins in order to decide which test

objects to launch. If the tester is run from Jenkins, it will run without a GUI and report back to the

Jenkins interface using pipes. If it is opened outside Jenkins, the library can be chosen manually. The

unit tester finds all tests located in the test folder of the library. All tests are launched in parallel to shorten the execution time. Failed tests are marked with an exit flag in the test-results script in the

Jenkins workspace.

If the library is RIO, test servers are launched for each test in order to set up an environment for the

test. This server is killed when the test is finished.


3.6 Discussion

The automatic deployment system has the potential to deliver stable, reproducible builds while still saving time in the process. By integrating Maven and Nexus with the continuous integration server, doors

were opened in terms of managing the dependencies in RADE in a better way. In the original

deployment system, the RADE dependencies were only managed by build scripts without a system for

keeping track of the different versions, making it impossible to reproduce builds in an automatic

manner. By compiling and testing every new tag of a library for all possible variations and storing the

binaries in Nexus, the release of the framework only involves downloading the libraries from the

server. This solution saves a considerable amount of time. Automatic testing gives developers

immediate feedback on introduced bugs and broken modules. Testing the versions on all platforms

immediately also reveals platform-dependent errors in an early phase. All information related to the builds is stored in POM files on the Nexus server, functioning as history logs for the releases. In that way, any build can be reproduced simply by downloading and rebuilding from the same pom-file.

The system requires a predictable structure in the SVN repositories in order for the automatic builds

to work. Any changes in the structure, dependency hierarchy in RADE or certificates would require jobs

and scripts to be updated in Jenkins.


4 Conclusion

A proof of concept for a new automatic deployment tool for the RADE framework has successfully

been developed. By integrating the Jenkins continuous integration server with Maven and Nexus, a

new and improved system for managing the dependencies in RADE was included in the deployment.

The improved dependency management has brought great advantages to the system, saving build time

and enabling build reproducibility through a full versioning history of the releases. Compiling and

testing libraries on all platforms immediately after committing the new version allows for earlier

discovery of platform-dependent errors. Automatic Unit testing at every level promotes early feedback

to the developers on errors and broken modules. The work done for the automatic deployment system

has revealed new options for releases of LabVIEW frameworks and applications at CERN, and some of the concepts of this project have already been implemented for the RADE deployment. The solution

reveals future use cases for the Nexus repository, allowing downloads of single libraries (and

dependencies) from RADE.

The first version of RADAR 2.0 has been developed for the operators in CCC with positive feedback.

The application simplifies the connection to the CERN controls infrastructure by automatically

generating the required code based on a simple drag and drop action by the user. This gives a common

LabVIEW interface to get, set and subscribe to data from the accelerator domain without requiring

LabVIEW programming knowledge from the user. The panel is monitored and the code generated using

VI scripting. User functionality includes the options to delete, recreate and edit the generated objects.

The application is developed using an object oriented approach and the actor framework, making the

application expandable for future functionality. The development of the application is planned to be

continued at CERN and new functionality will be added, such as SQL queries, custom CMW servers,

and the option for inserting custom code by the user.


Appendix A Bibliography

[1] O. Ø. Andreassen, P. Bestmann, C. Charrondière, T. Feniet, J. Kuczerowski, M. Nybø and A. Rijllart, "Turn-Key applications for accelerators with LabVIEW-RADE", ICALEPCS 2011. Available: http://cds.cern.ch/record/1401876/files/WEMAU007.PDF?version=1 [Found 11.04.2016]

[2] EN-STI-ECE-MTA, "RADE" [Online]. https://rade-drupal.web.cern.ch/ [Found 03.2016]

[3] R. Sorokoletov, "CMW User Manual", CERN EDMS. Available: https://edms.cern.ch/ui/file/1054268/1/CMW-UserManual.pdf [Found 11.04.2016]

[4] CERN, "The accelerator Complex" [Online]. http://home.cern/about/accelerators [Found 04.2016]

[5] National Instruments, "Capabilities of the VI Server" [Online]. http://zone.ni.com/reference/en-XX/help/371361M-01/lvconcepts/capabilities_of_the_vi_server/ [Found 04.2016]

[6] National Instruments, "LabVIEW Object-Oriented Programming: The decisions behind the design", 21.05.2014. Available: http://www.ni.com/white-paper/3574/en/ [Found 04.2016]

[7] A.C. Smith, S.R. Mercer, "Using the Actor Framework in LabVIEW"

[8] CERN, "About CERN" [Online]. http://home.cern/about [Found 03.2016]

[9] CERN BE department, "OASIS" [Online]. https://espace.cern.ch/be-dep/CO/APs_Section/SitePages/OASIS.aspx [Found 04.2016]

[10] CERN AB-CO-IN (2007, July), "cmw-rda" [Online]. http://cmw.web.cern.ch/CMW/products/cmw-rda/cmw-rda.html [Found 05.2016]

[11] O. Ø. Andreassen and A. Tarasenko, "Continuous integration using LabVIEW, SVN and Hudson", ICALEPCS 2013. Available: http://epaper.kek.jp/ICALEPCS2013/papers/momib08.pdf [Found 22.04.2016]

[12] CERN IT, "CERN openstack private cloud guide" [Online]. https://clouddocs.web.cern.ch/clouddocs/ [Found 04.2016]

[13] Redis, "Redis" [Online]. http://redis.io/ [Found 04.2016]

[14] CERN EN-STI, "EN-STI" [Online]. http://en.web.cern.ch/en-sti-group [Found 03.2016]

[15] A. Rijllart, "MTA" [Online]. https://wikis.web.cern.ch/wikis/display/MTA/MTA+Mandate [Found 05.2016]

[16] NI, "VI scripting" [Online]. http://sine.ni.com/nips/cds/view/p/lang/en/nid/209110 [Found 04.2016]

[17] Source: http://zone.ni.com/reference/en-XX/help/371361K-01/lvhowto/mass_compiling_vis/


Appendix B Abbreviations
ALICE A Large Ion Collider Experiment

API Application Programming Interface

ATLAS A Toroidal LHC Apparatus

CCC CERN Control Centre

CCDB CERN Controls configurations Database

CERN Conseil Européen pour la Recherche Nucléaire

CI Continuous Integration

CMS Compact Muon Solenoid

CMW Controls Middleware

CPU Central Processing Unit

DLL Dynamic-Link Library

ECE Equipment, Controls and Electronics

EN Engineering

FEC Front End Computer

FESA Front End Software Architecture

FPGA Field-Programmable Gate Array

GUI Graphical User Interface

IT Information technology

JAPC Java API for Parameter Control

LHC Large Hadron Collider

LHCb Large Hadron Collider Beauty

MD5 Message Digest algorithm 5

MTA Measurement, Test and Analysis

MTA-lib Measurement, Test and Analysis library

NI National Instruments

OASIS Open Analogue Signal Information System

OOP Object Oriented Programming

OS Operating System

POM Project Object Model

PS Proton Synchrotron

QDSM Queue-Driven State Machine

RADE Rapid Application Development Environment

RBAC Role Based Access Control

RDA Remote Device Access

RIO RADE Input Output

RPM RPM Package Manager

SCM Source Control Management

SLC Scientific Linux CERN

SPS Super Proton Synchrotron

SSH Secure Shell

STI Sources, Targets and Interactions

SVN Subversion

SQL Structured Query Language

TDD Test Driven Development

TI Technical Infrastructure

UI User Interface

URL Uniform Resource Locator

UUT Unit Under Test

VCS Version Control System

VI Virtual Instrument

VNC Virtual Network Computing


Appendix C Project management
The project is split into two parts: RADAR 2.0 is a cooperation between Rebekka Mork Knudsen and

Jakub Rachuki. The Automatic deployment system is fully developed by Rebekka Mork Knudsen.

C.1 Organization
The RADAR 2.0 project was completed by Rebekka Mork Knudsen and Jakub Rachuki over the course of five months; both were involved in all parts of the development of the application.

C.2 Project form
Scrum was chosen as the preferred project form for the RADAR 2.0 project because of its flexibility.

Weekly meetings and individual goals and tasks were set. With the use of SVN and Jira, this was a

successful form as it led us to both focus on individual tasks and parts of the application but with a

high degree of cooperation.

For the Automatic deployment system, no particular project form was chosen.

C.3 Timetable
The MTA team was moved into another section during my stay at CERN, and this caused the projects

to change together with the team’s goals and priorities. For this reason, no timetable for the project

has been followed.


Appendix D Attachments

Attachments include the RADAR 2.0 user guide and configuration of the automatic deployment system

from the CERN wiki. The source code for RADAR 2.0 is enclosed in pdf format, a separate attachment

for the Main Program and the Configuration Program. The documentation for the automatic

deployment system includes all of the scripts and files called from Jenkins and the job-configurations

in pdf format.

D.1 Annex List
Attachment 1 – Configuring Jenkins, CERN wiki.pdf

Attachment 2 - RADAR 2.0 Userguide.pdf

D.2 Source code documentation
Attachment 3 - RADAR 2.0 Documentation - Main Program.pdf

Attachment 4 - RADAR 2.0 Documentation - Configuration Program.pdf

Attachment 5 – UnitTester Documentation.pdf

Attachment 6 – Jenkins Configuration.pdf

Attachment 7 - Installer-Tools Documentation.pdf

Attachment 8 – Jenkins Scripts (folder)

Attachment 9 – Maven Tools (folder)


Appendix E List of figures
Figure 1: CERN logo ................................................................................................................................. 9

Figure 2: CERN Accelerator Complex .................................................................................................... 10

Figure 3: CMS Detector ......................................................................................................................... 11

Figure 4: CERN Control Centre .............................................................................................................. 12

Figure 5: RADE palette for LabVIEW ...................................................................................................... 15

Figure 6: CMW Get operation ............................................................................................................... 16

Figure 7: RBAC authentication with CMW Set operation ..................................................................... 17

Figure 8: Example VI server functions ................................................................................................... 19

Figure 9: Example VI scripting functions ............................................................................................... 19

Figure 10: Example Producer/Consumer design pattern ...................................................................... 20

Figure 11: Example Dynamic Dispatch .................................................................................................. 21

Figure 12: Actor framework .................................................................................................................. 22

Figure 13: Configuration program and Main programs ........................................................................ 27

Figure 14: Architecture Main Program .................................................................................................. 28

Figure 15: User events ........................................................................................................................... 29

Figure 16: Class hierarchy Main Program .............................................................................................. 30

Figure 17: Inheritance hierarchy actors ................................................................................................ 33

Figure 18: Launch chain ......................................................................................................................... 34

Figure 19: TopLevelGUI loading subpanel ............................................................................................. 37

Figure 20: Login panel ........................................................................................................................... 38

Figure 21: Tree panel ............................................................................................................................. 39

Figure 22: Folder Browser with "New" option ...................................................................................... 40

Figure 23: URL tree ................................................................................................................................ 41

Figure 24: Executable version Main Program ....................................................................................... 43

Figure 25: Config Panel .......................................................................................................................... 44

Figure 26: Set button and cluster .......................................................................................................... 45

Figure 27: Get button and cluster ......................................................................................................... 45

Figure 28: Explicit RBAC login panel ...................................................................................................... 46

Figure 29: Graph and table control types.............................................................................................. 46

Figure 30: Monitor undo state .............................................................................................................. 47

Figure 31: Select URLs ........................................................................................................................... 48

Figure 32: Created control..................................................................................................................... 49

Figure 33: Create new control from template ...................................................................................... 49

Figure 34: Generated event case for Set action .................................................................................... 50

Figure 35: Wire event structure terminals and insert SubVI ................................................................. 50

Figure 36: Main program - block diagram ............................................................................................. 51

Figure 37: Main program - front panel .................................................................................................. 51

Figure 38: Creation, initialization and connection to objects ............................................................... 52

Figure 39: Open connection with RBAC for CMW Set ........................................................................... 52

Figure 40: Launch subscribing daemon ................................................................................................. 53

Figure 41: Convert, clear and send error .............................................................................................. 55

Figure 42: Error handler ........................................................................................................................ 56

Figure 43: RADE release process ........................................................................................................... 61

Figure 44: Dependencies in RADE ......................................................................................................... 63


Figure 45: RADE release - only latest versions ...................................................................................... 67

Figure 46: one RIO build generates 18 variations ................................................................................. 68

Figure 47: Automatic build .................................................................................................................... 71

Figure 48: Generate POM file ................................................................................................................ 72

Figure 49: Set up environment on slave ................................................................................................ 73

Figure 50: Maven and Nexus ................................................................................................................. 74

Figure 51: Structure Nexus repository .................................................................................................. 75

Figure 52: Class hierarchy Unit Tester ................................................................................................... 76

Figure 53: Virtual machines setup ......................................................................................................... 78

Figure 54: Maven lifecycle ..................................................................................................................... 79

Figure 55: Maven and LabVIEW workflow ............................................................................................ 79

Figure 56: Default POM ......................................................................................................................... 80

Figure 57: Inheritance configuration in POM ........................................................................................ 81

Figure 58: Nexus repository .................................................................................................................. 83

Figure 59: Nexus repository, Releases .................................................................................................. 83

Figure 60: Jenkins interface ................................................................................................................... 84

Figure 61: Maven call from Jenkins ....................................................................................................... 85

Figure 62: Jenkins job - execute shell script .......................................................................................... 86

Figure 63: Jenkins build job with parameters ....................................................................................... 87

Figure 64: Build steps RADE-create-tag job ........................................................................................... 88

Figure 65: Compiling library with Jenkins .............................................................................................. 88

Figure 66: Build steps RADE-create-artifacts ........................................................................................ 89

Figure 67: RADE-create-installer parameters ........................................................................................ 90

Figure 68: Build steps: RADE-create-installer ........................................................................................ 90

Figure 69: Latest stable version ............................................................................................................. 91

Figure 70: Latest version ....................................................................................................................... 91

Figure 71: Jenkins build pipeline ........................................................................................................... 92

Figure 73: Jenkins build pipeline configuration ..................................................................................... 92

Figure 72: Jenkins failed build pipeline ................................................................................................. 92

Figure 74: Set exit flag in test-results script file .................................................................................... 93