
Deliverable D2.2

Test generation strategies for large-scale IoT security testing – v1

Version 1.0

Lead Partner Easy Global Market

Date 22/08/2016

Project Name ARMOUR – Large-Scale Experiments of IoT Security Trust

Ref. Ares(2016)4739903 - 23/08/2016


Call Identifier H2020-ICT-2015

Topic ICT-12-2015 Integrating experiments and facilities in FIRE+

Project Reference 688237

Type of Action RIA – Research and Innovation Action

Start date of the project February 1st, 2016

Duration 24 Months

Dissemination Level PU (Public)

Abstract D2.2 builds upon D1.1, which identified vulnerability patterns, and D2.1, which identified Test Patterns, to propose a methodology for refining the test description under the format of test patterns. These Test Patterns are presented, and automation approaches to generate them, as well as TTCN-3 test cases for execution, through model-based testing are proposed. Test Patterns derived following the proposed methodology were described within D1.2 (‘Experimentation approach and plans’), building upon results from the tools set within WP2, and will be further detailed within D2.3 (‘Testing framework for large-scale IoT security testing’).

Disclaimer This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688237, but this document only reflects the consortium’s view. The European Commission is not responsible for any use that may be made of the information it contains.


Revision History

Revision Date Description Organisation

0.1 04/07/2016 Creation. First outline proposal EGM

0.2 13/07/2016 Added contributions on Sections 3 (SMA), 6 (ODINS, JRC) and 7 (SMA, ODINS) ODINS, JRC, SMA

0.3 05/08/2016 General contribution to the document EGM

0.4 12/08/2016 Added contributions to Sections 1 (EGM, SMA), 3 (SMA, EGM) and 7 (SMA, EGM) EGM, SMA

0.6 16/08/2016 Proof-reading of the document EGM, SMA, ODINS

0.7 18/08/2016 Partners validation and remarks SYN, UPMC

1.0 22/08/2016 Final review EGM, SMA, UPMC


Table of Contents

1 Introduction
2 Best practices for testing IoT Systems
  2.1 Introduction
  2.2 Test system configurations
    2.2.1 Server side testing (SST)
    2.2.2 Server side testing – Multiple roles
    2.2.3 Client side testing (CST)
    2.2.4 Test configuration synthesis
  2.3 Expressing test scenarios from Test Patterns
3 Test generation
  3.1 Using Smartesting Test Purposes for Model-Based Security Functional and Vulnerability Testing
    3.1.1 Test Generation from Smartesting Test Purposes
    3.1.2 Model-Based Security Functional Testing
    3.1.3 Model-Based Vulnerability Testing
  3.2 Model-Based Robustness Testing
  3.3 MBT Drawbacks and Pitfalls not neglected in ARMOUR
4 Test implementation
  4.1 Test platform components
  4.2 Test publication
  4.3 ATS Compilation
  4.4 System Adapter and CoDec
5 Test execution
  5.1 Process overview
  5.2 Configuration
  5.3 Management
  5.4 Reporting
6 Steps towards benchmarking and metrics
7 Illustration
  7.1 Proof of concept scope – Exp1
  7.2 Pattern-Based Testing – Exp1
  7.3 Test Case Execution – Exp1
  7.4 Discussion and cost-efforts
8 Conclusion
9 References
10 Annexes
  10.1 Refined Test Patterns
    10.1.1 Experiment 1
    10.1.2 Experiment 2
    10.1.3 Experiment 5
    10.1.4 Experiment 6
    10.1.5 Experiment 7


1 Introduction

While deliverables D1.1 and D2.1 framed the global approach to the identification of vulnerability patterns and related test patterns, these patterns still do not define detailed tests to be executed. This approach is based on market best practices for security testing and is summarised in Figure 1:

1. A risk analysis is performed: in the ARMOUR case, we relied on the analysis done in the context of the oneM2M standardisation activities, which covers the whole IoT/oneM2M domain. Moreover, in relation with labelling activities, discussions are ongoing in the context of AIOTI-WG4 (trust label) as well as within EC services, and a contribution from the ARMOUR consortium is expected, mainly in the framework of WP5 activities.

2. Vulnerability patterns are produced: these have been presented in D1.1, which is the reference security framework for ARMOUR. These vulnerability patterns have been used to guide ARMOUR experimenters in defining the scope of their experimentations within the project context and in expressing them in the form of test patterns.

In addition to the steps described above, which have been treated in the previous deliverables, this deliverable defines the formal methodology for the following steps:

3. Test generation: this includes automated activities for deriving test-relevant artefacts such as test cases, test data, test oracles and test code. In the ARMOUR context, partners will spend effort towards expressing the generated tests as Test Purposes and TTCN-3 abstract test suites. The methodology is further described within sections 3 and 4.

4. Test implementation: this describes the steps to be performed to transform the test cases, often described in an abstract form, into test suites executed on the system under test. This requires, in particular, the development of adaptation layers between the test system (the Tester) and the System Under Test, as described in more detail within section 5.

5. Test execution: this last step is about configuring, executing in the targeted environment and reporting the results of the experiments. Results can be stored and exploited in contexts such as labelling and certification. Execution will be proposed at two levels:

o In an isolated environment containing only the test system and the system under test

o In a realistic environment provided by FIT-IoT Lab, in which the system under test is under operation, interacting with other nodes/devices, while also being tested by the test system.

Reporting will be ensured thanks to the respective dedicated components of the FIESTA testbed.

1 D1.1, “ARMOUR Experiments and Requirements”
2 D2.1, “Generic test patterns and test models for IoT security testing”


Figure 1 - ARMOUR progresses toward test generation and execution


2 Best practices for testing IoT Systems

This section provides a set of best practices, as seen from our experience in testing IoT platforms [7] and in the oneM2M testing workgroup, and including the long experience developed by ETSI in the telecom sector and successfully deployed in the mobile sector through 3GPP.

2.1 Introduction

When developing tests, it is critical to identify and share a vision of:

• What is the system under test, its boundaries and the observation points? In the present case, a black-box approach is chosen, meaning that the test design strategy has to identify which elements are inside the black box and which are outside. In this respect, only interactions taking place outside of the box are of interest.

• What is the purpose of the test? The purpose of a test is to verify that a system behaves as expected. This expected behaviour has to be formalised in an unambiguous way.

This section provides an approach to describing tests following best practices from standards development organisations such as ETSI.

2.2 Test system configurations

To execute a test, two parts are basically needed:

• The System Under Test (SUT), which is the part we want to test and which is seen as a black box

• The Tester (human or an automated service or process) which executes tests by interacting with the SUT

An IoT deployment usually consists of the following types of nodes:

• Server node (SN): most often referred to as Server. It provides advanced services and is most of the time located in the cloud

• Client node (CN): these include in particular all deployed sensors and actuators. They can have more or less communication and computing capabilities but are in general rather constrained. They could also be applications, such as mobile applications.

• Middle node (MN): this includes all gateways, routers, etc. used

Nodes can have different roles as they can be:

• Requestor: they initiate the connection by sending a request
• Responder: they respond to a request.

These roles are not immutable, as nodes can play both. However, for the sake of simplicity, we make the following assumption in the remainder of this section:


• Server node is a responder
• Client node is a requestor
• Middle Node can play both roles

Based on this decision, two test configurations can be identified: server side testing and client side testing.

2.2.1 Server side testing (SST)

The SUT is the Server Node. The test system plays the role of the Client Node. This is the simplest approach, as the test system initiates the test sequence by sending a request to the SUT (Figure 2).


Figure 2 - server side testing: (a) Considered IoT deployment. (b) Test configuration.

In the case of a more complicated system that includes intermediate nodes, the approach remains the same; only the interface may differ, depending on whether the Middle Node is outside (Figure 3-a) or inside (Figure 3-b) the black box.


Figure 3 - server side testing with a middle node being (a) outside or (b) inside the black box

2.2.2 Server side testing – Multiple roles

There is a specific configuration used in testing, where the test system plays multiple roles at the same time. As usual, the SUT is the Server Node, and the test system plays both the role of a client and the role of a server (Figure 4). In this case, after receiving information from the test system acting as client, the SUT should route the response through the second, server role; since the two roles are simulated in the test system, we can directly establish the verdict.

Figure 4 - Server side testing with multiple roles of the test system


2.2.3 Client side testing (CST)

The SUT is the Client Node. The test system plays the role of the Server Node. In this case, the test system cannot initiate the test sequence and has to wait for the SUT to start the communication. This is usually done by an operator, as depicted in Figure 5.

Figure 5 - client side testing: (a) Considered IoT deployment. (b) Test configuration waiting for test sequence initialisation.

However, automation can also be brought to that stage by defining an additional layer, the Upper Tester, which plays the role of the operator (Figure 6). When the test system sends such a request to the Upper Tester, the SUT initiates the connection. This set-up is however more complex, as it requires not only the addition of the Upper Tester but also the modification of the SUT to connect to the Upper Tester.

Figure 6 - Use of an Upper Tester to initiate test sequence

2.2.4 Test configuration synthesis

Table 1 provides a synthesis of the different test configurations that will be encountered in the ARMOUR experiments:

Table 1 - Test configurations

Config IDs: SST_01, SST_02, SST_MR_01, SST_MR_02, CST_01, CST_02

2.3 Expressing test scenarios from Test Patterns

A test purpose in TPLan [1] (hereafter simply TPLan) is written in a standardized, structured notation developed by ETSI for the concise and unambiguous description of what is to be tested, rather than how the testing is performed. It is an abstract test scenario derived from one or more requirements, defining the steps needed to check whether the SUT respects the corresponding requirement.

More concretely, TPLan is a structured natural language that defines a minimal set of keywords which can be combined with natural language or even graphical representations. Although the keyword set is minimal, it is generic, and if needed the user can define his own extensions to the notation, specific to his application domain. In its simplest form, a test purpose consists of a few headers, like “TP id”, “Summary”, “REQ ref.” and other details specified by the user. Its main part is then composed of three sections: precondition, stimuli and response. The precondition (preamble) starts with “with {…}”, and the test body starts with “ensure that {…}”. The test body is itself composed of two parts: “when {…}” for the stimuli, and “then {…}” for the expected response to the stimuli. The postcondition is usually omitted because its function is simply to revert the SUT to the conditions that held before the test case. Inside the braces, all conditions and information needed for the test purpose are described in prose.
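The skeleton below illustrates this structure in the document's own TPLan style; the identifier, summary text and the prose inside the braces are placeholders for illustration only and are not an actual ARMOUR test purpose:

TP id   : TP_EXAMPLE_001
Summary : 'check that the SUT answers a valid request with an OK status'
REQ ref : REQ_X
with {
    the SUT being in the initial state
}
ensure that {
    when {
        the SUT receives a valid request from the Tester
    }
    then {
        the SUT sends a response containing an OK status code
    }
}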

TPLan, which helps to write test scenarios in a consistent way, brings considerable benefits; for instance:

• it removes possibilities for free interpretation of the test procedure,
• it makes it easy to differentiate the precondition, the stimuli and the response, and
• it is a basis for a representation in the test tools.

In the standardisation process of oneM2M, the Test Working Group adopted the use of TPLan for its test scenarios, but with minor modifications. Namely, the “when” and “then” sections are extended with a notation of the data flow, indicating from which entity to which entity the data is transferred. Also, the test scenario is put in a table, so every block and every header is separated in a different cell, which eases the visual representation for the users. The postamble is defined neither in ETSI TPLan nor in oneM2M TPLans because, in the majority of test cases, it should simply revert the system to its initial state, which means deleting added elements and reverting modified variables back to their initial values.

TP Id: TP/oneM2M/CSE/DMR/BV/001
Test objective: Check that the IUT returns successfully an empty content of TARGET_RESOURCE_ADDRESS resource when ResultContent is set to 0 (Nothing).
Reference: TS-0001 10.1.3 & TS-0004 6.3.3.2.7
Config Id: CF01
PICS Selection: PICS_CSE

Initial conditions:
with {
    the IUT being in the "initial state" and
    the IUT having registered the AE and
    the IUT having created a resource TARGET_RESOURCE_ADDRESS of type RESOURCE_TYPE and
    the AE having privileges to perform RETRIEVE operation on the resource TARGET_RESOURCE_ADDRESS
}

Expected behaviour:
when {
    the IUT receives a valid RETRIEVE request from AE containing
    To set to TARGET_RESOURCE_ADDRESS and
    From set to AE_IDENTIFIER and
    ResultContent set to 0 (Nothing)
}   (direction: AE -> IUT)
then {
    the IUT sends a Response message containing
    Response Status Code set to 2000 (OK) and
    no Content attribute
}   (direction: IUT -> AE)

Figure 7 - oneM2M TPLan example

An example of a TPLan test purpose, taken from the oneM2M document “oneM2M-TS-0018 Test Suite Structure and Test Purposes”, is given in Figure 7. From the example, we can easily understand the goal of this TPLan, the requirements it is derived from, and the test details: in which state the SUT should be in order to execute the test, the content of the test body, and the correct response for the test to pass.

We have to note that in the figures below we changed some of the header fields used by oneM2M to adapt them to the ARMOUR context. For example, instead of using a requirement reference, as described in oneM2M, we changed the field to refer to the Test Pattern from which the test purpose is derived. Also, we renamed the “PICS Selection” header to “Stage”, in order to define the exact environment for the execution of the TPLan.

Two of the initial Test Patterns contributed by the ARMOUR partners are added in Annex 1, Refined Test Patterns. Unlike the initial Test Patterns, these are augmented with a communication diagram and more details on how the test should be rolled out. Test Pattern 6 describes communication between a Sensor and a Firmware Manager: the Sensor requests a new firmware from the Firmware Manager and verifies the signature of the received firmware. If the signature is valid, the firmware is installed; otherwise it is rejected.

In order to facilitate and unify the experiments in this project, we propose using TPLan as described here for all experiments we have to define. Using the same template as the oneM2M TST WG, Figure 8 and Figure 9 show the Test Pattern ID6, contributed for Exp1, expressed in the oneM2M TPLan. Compared to the initial test pattern, it is clearer and more concise, and the entities and the communication between them are well defined.


While these TPLans help to disambiguate the testing intentions, they cannot be used directly for automated test generation. Based on this property of the TPLans, in the ARMOUR approach we propose, on the one hand, to use the TPLan's knowledge for the generation of executable scripts. On the other hand, if TPLans are not available, we propose to generate them based on the knowledge represented in the models, at the same time that the executable scripts are generated. Thus the benefits are twofold: communication with the domain experts familiar with TPLan is made possible, and the scripts are ready for automated execution. In the next section we describe the MBT approach and how we formally express the TPLans towards the generation of executable scripts. It is indeed expected that generating TPLan-like test purposes from MBT will greatly enhance the testing approach by extending test coverage and easing overall traceability.

Test Purpose Id: EXP1_ID6_01
Test objective: Check that the Sensor successfully installs a firmware with a valid signature from the Firmware Manager (FM).
Test Pattern Reference: TP_ID6
Config Id: CST_01
Stage: Bootstrapping

Initial conditions:
with {
    the Sensor being in the "initial state"
}

Expected behaviour:
when {
    the Sensor sends a valid RETRIEVE_FIRMWARE_UPDATE request to FM
}   (direction: Sensor -> FM)
then {
    the Sensor receives a NEW_FIRMWARE response from FM containing new Firmware with valid signature and
    the Sensor succeeds to validate the firmware signature and
    the Sensor sends a Response message containing Response Status Code set to INSTALLED_FIRMWARE
}   (directions: FM -> Sensor, then Sensor -> FM)

Figure 8 - Test Pattern ID6 contributed by ODINS


TP Id: EXP1_ID6_02
Test objective: Check that the Sensor successfully discards a firmware with an invalid signature from the Firmware Manager (FM).
Test Pattern Reference: TP_ID6
Config Id: CST_01
Stage: Bootstrapping

Initial conditions:
with {
    the Sensor being in the "initial state"
}

Expected behaviour:
when {
    the Sensor sends a valid RETRIEVE_FIRMWARE_UPDATE request to FM
}   (direction: Sensor -> FM)
then {
    the Sensor receives a NEW_FIRMWARE response from FM containing new Firmware with invalid signature and
    the Sensor fails to validate the firmware signature and
    the Sensor sends a Response message containing Response Status Code set to FAILED_SIGNATURE
}   (directions: FM -> Sensor, then Sensor -> FM)

Figure 9 - Test Pattern ID6 contributed by ODINS


3 Test generation

Existing M2M standards, as well as emerging ones such as oneM2M, attach great importance to the definition of security requirements related to security functions, in addition to functional requirements. Moreover, our experience in security testing and analysis of IoT systems showed that an IoT system's Security and Trust will depend on the resistance of the system with respect to:

• misuses of the security functions,
• security threats and vulnerabilities,
• intensive interactions with other users/systems.

In ARMOUR we have identified four IoT segments and seven experiments that cover different aspects of these segments (see D1.1). Based on the work performed in D1.1 and D2.1 and on the testing needs of each experiment, we have identified a set of security requirements and vulnerabilities that must be fulfilled by the developed systems. In order to validate the systems with respect to this set of requirements, we have identified three test strategies:

1. Security functional testing (compliance with agreed standards/specifications): aims to verify that the system behaviour complies with the targeted specification, which makes it possible to detect possible security misuses and to check that the security functions are implemented correctly.

2. Vulnerability testing (pattern driven): aims to identify and discover potential vulnerabilities based on risk and threat analysis. Security test patterns are used as a starting point, which makes it possible to derive accurate test cases focused on the security threats formalized by the targeted test pattern.

3. Security robustness testing (behavioural fuzzing): computes invalid message sequences by generating (weighted) random test steps. It makes it possible to tackle the unexpected behaviour, with respect to security, of large and heterogeneous IoT systems.

These test strategies may be applied in combination or individually. Model-Based Testing (MBT) approaches have shown their benefits and usefulness for the systematic compliance testing of systems that must conform to specific standards defining the functional and security requirements of the system. In ARMOUR, we propose a tailored, automated MBT approach based on standards and specifications that combines the three above-mentioned test strategies, built upon the existing CertifyIt technology [5] and TTCN-3 [4] for test execution on the system under test (SUT), into one MBT IoT Security Testing Framework. On the one hand, the CertifyIt technology has already proven its usefulness for standard compliance testing of critical systems, for instance on GlobalPlatform smartcards. Building the ARMOUR approaches upon CertifyIt will thus allow us to benefit from a technology proven for conformance testing while introducing and improving it in the domain of IoT. On the other hand, the Testing and Test Control Notation version 3 (TTCN-3) is a test scripting language widely known in the telecommunication sector. It is used by the 3rd Generation Partnership Project (3GPP) for interoperability and certification testing, including the prestigious test suite for Long Term Evolution (LTE)/4G terminals. Also, the European Telecommunications Standards Institute (ETSI), the language's maintainer, is using it in all of its projects and standards initiatives, like oneM2M. Finally, this testing framework will be deployed within the ARMOUR test beds (FIT IoT lab and FIESTA) for large-scale testing and data analysis.

Based on the evaluation of the testing needs in T1.1 and T2.1, we have identified three possible levels of automation: the ARMOUR MBT approach, with automated test conception and execution based on TPLan tests and TTCN-3 scripts; manual TPLan and TTCN-3 conception with automated execution on the SUT; and finally in-house approaches for testing. We summarize the approach and illustrate the ARMOUR MBT Security Testing Framework in Figure 10. It depicts the three kinds of approaches considered in ARMOUR based on the experiments' needs, as discussed previously: the tailored MBT approach, manual conception and in-house approaches.

Figure 10 - ARMOUR MBT Security Testing Framework

As illustrated in the figure, the MBT approach relies on MBT models, which represent the structural and the behavioural parts of the system. The structure of the system is modelled by UML class diagrams, while the system's behaviour is expressed in Object Constraint Language (OCL) pre- and postconditions. Functional tests are obtained by applying a structural coverage of the OCL code describing the operations of the SUT (functional requirements). In the context of security testing, this approach is complemented by dynamic test selection criteria called Test Purposes, which make it possible to generate additional tests that would not be produced by a structural test selection criterion, for instance misuses of the system (Model-Based Security Functional Testing) and vulnerability tests trying to bypass existing security mechanisms (Model-Based Vulnerability Testing). These two approaches generate a set of test cases that is stored in a database and then executed on the system. In contrast, robustness testing in our context, based on the same MBT model, randomly generates test steps, exercising different and unusual corner cases on the system in a highly intensive way, thus potentially activating unexpected behaviour in the system.
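To make this concrete, the sketch below shows what such an OCL pre-/postcondition could look like for a hypothetical resource-retrieval operation. The class, operation, attribute and tag names are illustrative only and are not taken from the ARMOUR models, and the exact tag-comment syntax expected by CertifyIt may differ from what is shown here:

context RegistrarCSE::retrieveResource(ae : AE, target : Resource) : StatusCode
pre:  self.registeredAEs->includes(ae)
post: if ae.hasPrivilege(target, OperationType::RETRIEVE) then
          -- @REQ: ACCESS_CONTROL  @AIM: RETRIEVE_ALLOWED
          result = StatusCode::OK
      else
          -- @REQ: ACCESS_CONTROL  @AIM: RETRIEVE_DENIED
          result = StatusCode::ACCESS_DENIED
      endif

Each branch of such a postcondition corresponds to one behaviour of the operation; the tags attached to a branch are what the structural coverage criterion, and later the robustness tool described in Section 3.2, use as test objectives.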

In the following sections we describe in more detail the ARMOUR testing approaches using the CertifyIt technology.

3.1 Using Smartesting Test Purposes for Model-Based Security Functional and Vulnerability Testing

Within the ARMOUR approach for Model-Based Security Functional Testing, we propose to use the CertifyIt technology for security functional testing and to extend it, if needed, to cover the security testing objectives of the ARMOUR experiments. In particular, a dedicated language called the Smartesting Test Purpose Language (hereafter denoted as TP) can be used to express functional security requirements and security test patterns. The consortium will focus on adopting the tool and approach where possible, according to the requirements and specificities of each experiment within the project.

More concretely, the TP language is based on regular expressions and allows the test engineer to express scenarios in terms of states to be reached and operations to be called. The language relies on combining keywords to produce expressions that are both powerful and easy to read by a test engineer.

We define the grammar of the language below:

Test purpose       : (quantifier_list COMMA)? seq EOF;
quantifier_list    : quantifier (COMMA quantifier)*;
quantifier         : FOR_EACH BEHAVIOR var FROM behaviour_choice
                   | FOR_EACH OPERATION var FROM op_choice
                   | FOR_EACH LITERAL var FROM literal_choice
                   | FOR_EACH INSTANCE var FROM instance_choice
                   | FOR_EACH INTEGER var FROM integer_choice
                   | FOR_EACH CALL var FROM call_choice;
op_choice          : ANY_OPERATION | ANY_OPERATION_BUT op_list | op_list;
call_choice        : call_list;
behaviour_choice   : ANY_BEHAVIOR_TO_COVER | ANY_BEHAVIOR_TO_COVER_BUT behaviour_list | behaviour_list;
literal_choice     : IDENTIFIER (OR IDENTIFIER)* | keyword;
instance_choice    : instance (OR instance)* | state | keyword;
integer_choice     : CURLY_OPEN INT (COMMA INT)+ CURLY_CLOSE | keyword;
var                : DOLLAR IDENTIFIER;
keyword            : SHARP IDENTIFIER;
state              : ocl_constraint ON_INSTANCE instance | keyword;
instance           : IDENTIFIER;
ocl_constraint     : STRING_LITERAL;
seq                : bloc (THEN bloc)*;
block              : USE control restriction? target?;
restriction        : AT_LEAST_ONCE | ANY_NUMBER_OF_TIMES | INT TIMES | var TIMES;
target             : TO_REACH state | TO_ACTIVATE behaviour | TO_ACTIVATE var;
control            : op_choice | behaviour_choice | var | call_choice;
call_list          : call (OR call)* | keyword;
op_list            : operation (OR operation)* | keyword;
operation          : IDENTIFIER;
call               : instance '.' operation parameters;
parameters         : PARENTHESIS_OPEN (parameter (COMMA parameter)*)? PARENTHESIS_CLOSE;
parameter          : FREE_VALUE | IDENTIFIER | var | INT;
behaviour_list     : behaviour (OR behaviour)* | keyword;
behaviour          : BEHAVIOR_WITH_TAGS tag_list | BEHAVIOR_WITHOUT_TAGS tag_list;
tag_list           : CURLY_OPEN tag (COMMA tag)* CURLY_CLOSE;
tag                : REQ COLON IDENTIFIER | AIM COLON IDENTIFIER;

TIMES : 'times' ;   FOR_EACH : 'for_each' ;   BEHAVIOR : 'behavior' ;
OPERATION : 'operation' ;   INTEGER : 'integer' ;   CALL : 'call' ;
INSTANCE : 'instance' ;   LITERAL : 'literal' ;   FROM : 'from' ;
THEN : 'then' ;   USE : 'use' ;   TO_REACH : 'to_reach' ;
TO_ACTIVATE : 'to_activate' ;   ON_INSTANCE : 'on_instance' ;
ANY_OPERATION : 'any_operation' ;   ANY_OPERATION_BUT : 'any_operation_but' ;
OR : 'or' ;
ANY_BEHAVIOR_TO_COVER : 'any_behavior_to_cover' ;
ANY_BEHAVIOR_TO_COVER_BUT : 'any_behavior_to_cover_but' ;
BEHAVIOR_WITH_TAGS : 'behavior_with_tags' ;
BEHAVIOR_WITHOUT_TAGS : 'behavior_without_tags' ;
AT_LEAST_ONCE : 'at_least_once' ;
ANY_NUMBER_OF_TIMES : 'any_number_of_times' ;
COMMA : ',' ;   CURLY_OPEN : '{' ;   CURLY_CLOSE : '}' ;
BRACKET_OPEN : '[' ;   BRACKET_CLOSE : ']' ;
PARENTHESIS_OPEN : '(' ;   PARENTHESIS_CLOSE : ')' ;
COLON : ':' ;   DOLLAR : '$' ;   SHARP : '#' ;
REQ : 'REQ' ;   AIM : 'AIM' ;
FREE_VALUE : '_' ;   DOT : '.' ;   DOUBLE_DOT : '..';
fragment DIGIT : '0'..'9' ;
fragment IDENTIFIER_FIRST : 'a'..'z' | 'A'..'Z' | '_' ;
fragment IDENTIFIER_BODY : 'a'..'z' | 'A'..'Z' | DIGIT | '_' | '/';
IDENTIFIER : IDENTIFIER_FIRST IDENTIFIER_BODY* ;
INT : DIGIT+;
STRING_LITERAL : '"'(~('\\'|'"'))*'"';
WHITESPACE : (' ' | '\t' | '\r' | '\n')+ { skip(); };
COMMENT : '/*' .* '*/' {$channel=HIDDEN;};
LINE_COMMENT : '//' ~('\n'|'\r')* '\r'? '\n' {$channel=HIDDEN;};

The syntax of the language makes it possible to design test purposes as a sequence of quantifiers or blocks, each block being composed of a set of operations (possibly iterated at least once, or many times) and aiming at reaching a given target (a specific state, the activation of a given operation, etc.).
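As a concrete illustration, here is a small test purpose written against this grammar. The operation, instance, state and tag names (ae, authenticate, createResource, updateResource, retrieveResource, ACCESS_CONTROL_DECISION) are purely illustrative and are not taken from the ARMOUR models:

for_each operation $op from createResource or updateResource,
use ae.authenticate(_) at_least_once to_reach "self.isAuthenticated = true" on_instance ae
then use $op any_number_of_times
then use ae.retrieveResource(_) to_activate behavior_with_tags {REQ:ACCESS_CONTROL_DECISION}

Read it as: for each of the two quantified operations, call authenticate until the authenticated state is reached, exercise the quantified operation any number of times, then activate the behaviour tagged with the requirement ACCESS_CONTROL_DECISION. The generation engine unfolds such an expression into one test objective per quantified operation.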

3.1.1 Test Generation from Smartesting Test Purposes

The generation process is guided by the Test Purposes. More precisely, the Smartesting CertifyIt tool makes use of the behavioural model in UML and of the test purposes. Each test purpose then produces test objectives, seen as a sequence of intermediate goals given to the test generation engine. We can define the test generation process as follows:

• an internal process extracts the test objectives, which represent a sequence of intermediate goals for the test generation engine;

• the symbolic test generator engine tries to cover each test objective to produce one test.

A dedicated Smartesting Test Purpose editor, part of the CertifyIt tool, has been improved to make TP writing more user friendly. Its aim is to provide auto-completion and syntax colouring usability features to express the security functional requirements and the security test patterns in a textual but formal representation in the shape of a TP. In the context of the ARMOUR project, its usability features have been improved in order to facilitate the expression of the security testing objectives by users who are not experienced with the technology.

In the sub-sections below we illustrate the usage of the Smartesting TP Language with CertifyIt in the context of security functional and vulnerability testing.

3.1.2 Model-Based Security Functional Testing

To illustrate this approach with Model-Based Testing, we will use an example from the oneM2M initiative. In oneM2M, manually drafted test purposes in TPLan are used for compliance testing. Within the scope of ARMOUR, we have created an MBT model based on the available functional and security related documents to generate TPLan-based descriptions and TTCN-3.

The model in Figure 11 represents one view of the class diagram for the oneM2M scope that has been used for the generation of security functional compliance Test Purposes and TTCN-3 test cases. This view represents an application entity (AE), for instance a sensor, that communicates with a registrar (RegistrarCSE) representing the oneM2M platform installed on a server.

Figure 11 - oneM2M class diagram – compliance testing simplified view

To guide the generation of TPLans and their corresponding TTCN-3 test scripts, TPs are defined in the model to drive the generation engine. From each abstract test case, either a TPLan or a TTCN-3 test case can be produced. To create the TPLans, structured information is added as descriptions of the related documents, thus allowing the automated creation of the required elements of a TPLan. To create TTCN-3 test cases, a TTCN-3 publisher specific to the CertifyIt tooling and the ARMOUR experiments' needs is being developed, in a way that is generic for any model based on the CertifyIt tooling.


The generation is guided by the security testing requirements defined in the document “TS-0003_Security_Solutions”. For illustration purposes, consider a security requirement derived from Section 7.1.4, related to an access control decision, whose corresponding algorithm is described in Section 7.1.5 of the document. The requirement states that if an entity is to authenticate another entity using a certificate, then the entity shall perform basic path validation as part of verifying the other entity.

Based on this analysis, we define the test purpose shown in Figure 12, which complements the model.

Figure 12 - Sample for Test Purpose for oneM2M (compliance)

In Figure 13 we illustrate the abstract test case generated from the previous test purpose, whose goal is to test an impersonation error.


Figure 13 - Generated abstract test case (compliance)

The oneM2M TPLan generated from the model is shown in Figure 14. It precisely describes the conditions and the results, and thus it is very easy to conclude on the final verdict. The TTCN-3 example is discussed in Section 4, Test implementation.

TP Id: TP/oneM2M/CSE/SEC/BI/002
Test objective: Check that the IUT responds to the originator with an error when the originator sends a request including a different AE identifier.
Reference: TP_SEC_7_2, TS-0003-22
Config Id: CF01
PICS Selection: PICS_CSE

Initial conditions:
with {
    the IUT being in the "initial state" and
    the IUT having registered the AE with AE_ID set to valid AE_IDENTIFIER
}

Expected behaviour:
when {
    the IUT receives a valid #REQUEST request from AE containing
    From not set to AE_IDENTIFIER
}   (direction: AE -> IUT)
then {
    the IUT sends a Response message containing
    Response Status Code not set to 2000 (OK)
}   (direction: IUT -> AE)

Figure 14 - oneM2M TPLan generated with MBT


To conclude, to ensure standard compliance of security functional requirements, we illustrated the generation of abstract test cases and their automated publication as TPLans and TTCN-3. Moreover, as demonstrated within the oneM2M interoperability event [6] in Seoul (South Korea), it is possible to use this same model for functional requirements testing, based on the formalization of existing TPLans covering functional requirements.

3.1.3 Model-Based Vulnerability Testing

Complementary to compliance testing using the ARMOUR MBT approach, for model-based vulnerability testing of IoT systems we use the ARMOUR security framework defined in D1.1 and the security test patterns defined in D2.1.

For our illustrative example on Exp7 and oneM2M platform testing, the MBT model for compliance testing is reused and completed with vulnerability-related information.

Thus, for instance the model in Figure 15 illustrates the view including vulnerability information, such as SQL Injection.

Figure 15 - oneM2M class diagram – security pattern-based testing view

For example, consider the security test pattern tackling the injection issue (identifier TP_ID10, D2.1). To generate abstract injection test cases, in order to create TPLan and TTCN-3 test scripts, we defined a test purpose, as illustrated in Figure 16.


Figure 16 - Formalizing the security test pattern TP_ID10 for test generation

This TP makes it possible to generate ten test cases related to SQL injection. An example of an abstract test case is given in Figure 17.

Figure 17 - SQL Injection abstract test case – example

3.2 Model-Based Robustness Testing

An independent tool based on the CertifyIt technology is used for robustness testing of security components. Based on the same MBT models used for security functional and vulnerability testing, it generates test cases using behavioural fuzzing. Contrary to the previous two approaches, where the test case generation aims to cover the test objectives produced from the test purposes, the tool generates weighted random test cases to cover test objectives based on the expected behaviour of the system.


The tool relies on the principle of rapidly generating as many fuzzed tests as possible, with a high number of steps, in a given period of time, using a weighted random algorithm. The generated tests are valid with respect to the constraints in the MBT model. Thus, contrary to most fuzzers, the produced test cases are, on the one hand, syntactically correct with respect to the system's inputs. On the other hand, as the tool uses a weighted random algorithm and measures the coverage of the behaviours, it avoids duplication of the generated tests, which makes test evaluation and assessment easier.

Another advantage of this tool is its rapidity: contrary to a classical functional testing approach, it explores the system states in various uncommon contexts, potentially putting the system into a failure state.

Figure 18 depicts the test generation algorithm at a high level. Each step covers a test objective, which corresponds to one behaviour of the system identified in the model, in the OCL expression of an operation, by specific pre-existing tags @REQ/@AIM (illustrated in the figure).

A step is generated by randomly choosing, from a given context of the system, the available states that may possibly lead to an error (for instance, start exploring the system's behaviour when administrator operations are called from a normal user connection). More specifically, a test step is selected only if it activates a new behaviour (tag); otherwise the algorithm continues the exploration in order to conceive a test step that activates a new tag. The test generation stops either when the generation time allocated to the engine has elapsed or when the user conditions are fulfilled (for instance, all tags in the model are covered, or a stop signal is received).

Figure 18 - Behavioural Fuzzing Test Step Selection

It is a complementary approach to functional and security (functional and vulnerability) testing, offering multi-fold benefits:

• it activates behaviours in uncommon contexts that potentially lead to system failures unknown to domain experts,

• it rapidly generates tests; the algorithms, being random, are extremely quick and feedback is returned to the user rapidly (hundreds of tests in a few minutes),

• it re-uses the same MBT model used for functional and security testing,


• it re-uses the same adaptation layer, thus no extra effort is needed for executing the test cases.

Nevertheless, the random algorithms lack power in constructing complex contexts of the system; this is bypassed by using existing valid test sequences, produced by functional testing, as the preamble of the test case, thus lowering the impact on the quality of the test cases.

Within the ARMOUR project, we will further work on a smoother integration of the tool with the CertifyIt technology. In addition, its test generation strategy may be adapted based on the robustness testing needs of the ARMOUR experiments.

3.3 MBT Drawbacks and Pitfalls not neglected in ARMOUR

As pointed out in the MBT User Survey from 2014 [2] and the MBT ISTQB Syllabus [3], the successful introduction of MBT into an organisation depends on paying attention to typical misleading expectations and pitfalls of MBT, for instance: (1) MBT solves all problems, (2) MBT is just a matter of tooling, (3) MBT models are always correct and (4) the possibility of test case explosion.

For a successful definition and introduction of the ARMOUR MBT approach for the Security & Trust of IoT systems, we evaluated the seven ARMOUR experiments and their need for each type of testing, that is security functional, pattern-driven and behavioural fuzzing (with an MBT, manual or in-house approach). The analysis of each experiment helped to identify its testing needs and objectives, based on which we propose an initial mapping to the ARMOUR testing approaches that can be applied, as illustrated in Table 2. This tackles pitfalls (1), (2) and (4). In addition, to avoid pitfalls (1), (3) and (4), the MBT models for the experiments, as already started for EXP1 and EXP7, and the MBT output artefacts (for instance the generated test cases) are validated incrementally and iteratively.

Table 2 - Initial foreseen test generation strategies for ARMOUR Experiments

EXP | Security-Functional (compliance) | Pattern-driven | Behavioral fuzzing | Manual / In-house | MBT

EXP 1. Bootstrapping and group sharing procedures X X X X

EXP 2. Sensor node code hashing X X X X

EXP 3. Secured bootstrapping/join for the IoT X X X

EXP 4. Secured OS / Over the air updates X X X

EXP 5. Trust aware wireless sensors networks routing X X X

EXP 6. Secure IoT Service Discovery X X X

EXP 7. Secure IoT platform X X X X X


4 Test implementation

The Testing and Test Control Notation version 3 (TTCN-3) is a standardized language designed for testing and certification. It is a widely used language, well established in the telecommunication, automotive and medical domains, with intentions to expand into the banking sector soon [4].

A TTCN-3 test suite consists of one or more modules. A module is identified by a name and may contain type, port and component declarations and definitions, templates, functions, test cases and a control part for executing those test cases. The control part can be seen as the main function of other programming languages such as C or Java. All code must be inside a module. A module can import definitions (types, templates, ports, components, functions, test cases and altsteps) from other modules and use them in its own definitions or control part.

Compared to "normal" programming languages, TTCN-3 has a much larger range of data types. In this way, TTCN-3 allows creating a close correspondence between the data types in the System Under Test (SUT) and the testbed. The standard data types that we can find in other programming languages are present (integer, float, boolean, (universal) charstring). There are also data types that are unique to TTCN-3, which allow managing the test case result (verdicttype) and which reflect its use as a test scripting language with a protocol testing background (bitstring, hexstring, octetstring).

TTCN-3 functions are virtually the same as functions in other languages. We can specify the function's name, its input parameters and its return type/value. Alternatives (alt) are structures used inside a function or test case. When we use operations like timeout or receive, which are blocking operations, the execution will not proceed before a matching message is received or the timer has expired. The alt statement allows several blocking statements to be regrouped: the first blocking operation that finishes its execution unblocks the whole group, and the test or function can continue its execution.

A test case is the main structure in TTCN-3. Its syntax is the same as that of a function, with the only difference that, after the parameters, we have to specify which component the test case runs on, with the clause "runs on". Inside, we assign a verdict (pass, inconc, fail) depending on the state of the test case.
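To make these notions concrete, the sketch below shows a minimal, self-contained TTCN-3 module with a record type, a message port, a component, templates, a test case using a timer and an alt statement, and a control part. All names (ArmourExample, Request, TC_EXAMPLE, etc.) are illustrative and do not come from the ARMOUR abstract test suites; a real suite additionally needs a Codec and a System Adapter (Sections 4.1 and 4.4) before it can actually exchange messages with a SUT.

module ArmourExample {

  // Abstract request/response structures; a Codec maps them to a concrete protocol
  type record Request  { charstring method, charstring uri }
  type record Response { integer statusCode }

  // Message-based port and the component the test case runs on
  type port ItfPort message { out Request; in Response }
  type component TesterComp { port ItfPort pt; timer t_resp := 2.0 }

  // Templates: concrete stimulus and expected (matching) response
  template Request  m_retrieve := { method := "RETRIEVE", uri := "/target" }
  template Response mw_ok      := { statusCode := 2000 }

  testcase TC_EXAMPLE() runs on TesterComp system TesterComp {
    map(self:pt, system:pt);   // connect the abstract port to the System Adapter
    pt.send(m_retrieve);
    t_resp.start;
    alt {
      [] pt.receive(mw_ok) { setverdict(pass); }    // expected response
      [] pt.receive        { setverdict(fail); }    // any other message
      [] t_resp.timeout    { setverdict(inconc); }  // no answer at all
    }
    unmap(self:pt, system:pt);
  }

  control {
    execute(TC_EXAMPLE());
  }
}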

4.1 Test platform components

The test case produced by TTCN-3 is abstract: it does not know how or where to send the content, it only knows that it has to send it. This is where the TTCN-3 Control Interface (TCI) and the TTCN-3 Runtime Interface (TRI) enter the process, as shown in Figure 19. The TCI is composed of the Test Management (TM) module, the Test Logging (TL) module, the Coding and Decoding (CD/Codec) module and the Component Handling (CH) module. The TRI consists of the System Adapter (SA) and Platform Adapter (PA) modules, as shown in the same figure. All of these modules are already provided by most TTCN-3 tools, except the Codec and the SA. The former is needed to convert TTCN-3 structures to a binary, text, XML, JSON or some other serializable format, and the latter is needed in order to communicate with the SUT, because it implements the protocols through which the data converted by the codec can be sent to or received from the SUT (for example, converting the structure to an HTTP request or response, if the protocol used is HTTP).

Figure 19 - TTCN-3 architecture

4.2 Test publication

Once we have successfully generated all possible tests in CertifyIt, we have to convert them (or, in the CertifyIt terminology, publish them) into a target language. The default publishers in CertifyIt are XML, JUnit and HTML. The XML publisher is the base publisher that converts all test cases and other model data into an XML file, which can then be used by the other publishers to generate tests in other formats. The JUnit publisher generates tests that can be executed in Java with glue code, i.e. code that acts as an adapter between the SUT and the JUnit tests, converting the abstract values into concrete ones. The HTML publisher serves mostly as documentation, because its output is simply a web page where all steps of every test case are shown.

The TTCN-3 publisher works on the same principle: the generated TTCN-3 tests are joined with an adapter file where all types, templates, functions and components are defined. Then we can proceed to the next phase, test compilation.

The TTCN-3 example generated from the model discussed in the previous sections is shown in Figure 20. It follows the same structure as the oneM2M TP generated from the model. The TTCN-3 tests start with the variables' declaration. After that comes the static part of the preamble, which configures the component. The second part of the preamble and the test body are derived from the abstract test case. The send/receive instructions are part of the test body; by naming convention, they are detected as being in fact templates, not functions. The messages used in the receive part are taken from "observer" functions in the model. At the end, as stated in Section 2.3, the postamble is generated in a static manner.

Figure 20 - TTCN-3 example

4.3 ATS Compilation

Once we have our Abstract Test Suite (ATS) in TTCN-3 format, we have to execute it. To do that, we need a compiler that transforms the TTCN-3 code into an intermediate language, such as C++ or Java. This is intentional: TTCN-3 code is abstract in the sense that it does not define the way to communicate with the SUT, or how the TTCN-3 structures are transformed into a real wire format. That is the task of the System Adapter and the Codec, which are described in detail in the next section.


There are a few commercial and non-commercial compilers available for download from the official TTCN-3 web page (http://www.ttcn-3.org/index.php/tools). Usually, all of the compilers follow this procedure:

1. compile TTCN-3 to a target language (Titan uses C++, TTWorkbench uses Java),
2. add the Codec and SA written in the target language.

In the scope of this project, we are using Titan because it is an open-source project.
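As an illustration, a typical Eclipse Titan build could look like the following; the file and executable names are purely illustrative and the exact options depend on the Titan version and on how the Test Port and Codec sources are organised:

# generate a Makefile covering the TTCN-3 modules and the C++ Test Port / Codec sources
ttcn3_makefilegen -f -e armour_ats ArmourExample.ttcn IoTTestPort.cc IoTTestPort.hh
# compile the TTCN-3 to C++ and build the test executable
make

The resulting Test Executable is then run under Titan's control with a configuration file, as described in Section 5.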

4.4 System Adapter and CoDec

As we said earlier, all the elements of the TTCN-3 architecture described in Figure 19 are provided by the compiler except the Codec and the SA, because their role in the test system is to convert ATS data (TTCN-3 structures to XML, JSON or binary strings, and vice versa) and to transfer it through the interfaces (IP, serial port, etc.).

In the case of Eclipse Titan, the SA and the Codec are added as additional C++ files that are compiled together with the tests, whereas in Spirent's TTWorkbench the SA and the Codec use an API and are compiled separately from the tests; they are then added to the test executable as plugins and executed.

Titan provides a C++ API that eases the process of writing a Codec and an SA. The Codec has a simple function in the system: converting a TTCN-3 structure into a native message and vice versa. As per the TTCN-3 specification, Titan is able to automatically convert TTCN-3 structures to binary form, simple textual form, XML and JSON. Titan also already provides codecs for structures of protocols such as HTTP, CoAP and MQTT; in Titan terminology, these are called Protocol Modules.
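For instance, asking Titan to derive a JSON encoder/decoder for a structure only requires an encoding attribute on the type definition; the type below is a hypothetical example, not one of the ARMOUR types.

    // Titan generates the JSON encoder/decoder for this type automatically
    type record FirmwareStatusReport {
      charstring deviceId,
      charstring status
    } with { encode "JSON" }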

The SA, or in Titan's terminology a Test Port, has the role of sending the messages prepared by TTCN-3 and the Codec to the SUT and, conversely, of receiving the messages from the SUT and passing them to the Codec, which decodes them into TTCN-3 structures that can later be compared against a template. The SA can be used to send and receive messages over the IP stack (UDP/TCP), but also over serial or USB ports. In order to execute the TTCN-3 tests planned in the experiments, Titan should implement an IPv6 test port that can communicate via UDP and transfer the CoAP messages prepared by the Codec.
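On the TTCN-3 side such a Test Port is declared as an ordinary message port, while the actual sending and receiving over UDP/IPv6 is implemented in the C++ Test Port skeleton generated by Titan. A hedged sketch, with a hypothetical CoAP_Message type:

    // Messages prepared by the Codec travel through this port to the SUT
    type port CoAP_UDP_Port message {
      out CoAP_Message;   // requests encoded by the Codec, sent by the SA
      in  CoAP_Message    // responses received by the SA, decoded by the Codec
    }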

3 http://www.ttcn-3.org/index.php/tools


5 Test execution

5.1 Process overview

The goal of the testing process, as described in Figure 1 (ARMOUR progresses towards test generation and execution), is to execute the MBT-generated tests against the SUT. In order to do that, we need to follow a certain procedure.

Figure 21 - Test execution

As described in Figure 21, after the publication of our tests in TTCN-3, we use the Titan compiler to compile them to C/C++ code. To this code, we add the C/C++ code of the SA and the Codec. Using a makefile generated by Titan, we can then compile the complete suite into a Test Executable. Its execution can be controlled by Titan via a configuration file, explained in the next section.

If the test configuration is of type CST (01 or 02), before starting the actual test case the test system should notify the Upper Tester that the corresponding test case has started, so that the client can start communicating with the server, which in this environment is the test system.

At the end of the execution, we obtain a log of the execution, with a verdict for every test case executed. The test cases' results can be traced back to their Test Purposes and Test Patterns.

5.2 Configuration

Module parameters, or "modulepars" for short, are a feature of TTCN-3 described in its specification. They are similar to constants and are used to define the parameters of a test suite. Those parameters can be, for example, the address of the SUT, the timeout period, or details of the SUT that depend on the particular implementation. Such parameters are also called Protocol Implementation eXtra Information for Testing (PIXIT).

Titan can set these module parameters in its configuration file, which makes it possible to configure the ATS without recompiling it. In the same file, the user can configure the SA, for example specifying, when a UDP test port is used, through which system port of the test system data should be sent or received.
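A minimal sketch of such PIXIT-style module parameters is given below; the names and default values are hypothetical.

    // PIXIT-style parameters; defaults can be overridden without recompilation
    modulepar {
      charstring PX_SUT_ADDRESS := "2001:db8::1";  // address of the SUT
      integer    PX_SUT_PORT    := 5683;           // CoAP default port
      float      PX_TC_TIMEOUT  := 10.0            // guard timer, in seconds
    }
    // In the Titan configuration file these values would be overridden under the
    // [MODULE_PARAMETERS] section, e.g.  PX_SUT_ADDRESS := "fe80::1"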

5.3 Management

TTCN-3 provides a dedicated structure inside a module, the control part, to control the order in which test cases are executed. Inside the control part, the tester has full control over the execution: one or more components can be created and started to run tests in parallel, and conditions and loops can be used to tailor the execution of the test suite to the needs of the SUT or of the tester.

Titan extends this mechanism with a higher level of control in the configuration file: when more than one module has a control part, the user can specify which module is executed first, which last, the exact order of the test cases, etc.
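A hedged sketch of such a control part is shown below; the test case names and the boolean module parameter PX_RUN_NEGATIVE_TESTS are hypothetical.

    control {
      // fixed execution order, with an optional guard on a module parameter
      execute(TC_TP_ID6_validSignature());
      execute(TC_TP_ID6_invalidSignature());
      if (PX_RUN_NEGATIVE_TESTS) {
        // the second argument is the maximum execution time, in seconds
        execute(TC_TP_ID6_noFirmwareToInstall(), 60.0)
      }
    }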

5.4 Reporting

Every TTCN-3 compiler is free to implement its own way of storing the test execution results in a log file. The open-source compiler Titan uses a proprietary Ericsson log format by default, but its logging interface is rich enough that new logging formats adapted to the user's needs can be created without much difficulty.

The logging facility in Titan can produce logs with different levels of detail, to deal with different requirements for the console log and the log file. The logging parameters can be configured in Titan's configuration file by setting the ConsoleMask and FileMask options, respectively for the level of information shown on the console and the level saved in the log file.

TTCN-3 keeps strict control over the test case verdict. The five possible verdict values are "none", "pass", "inconc", "fail" and "error". Every time a test case is started, its verdict is automatically set to "none". Within the test case, the tester can change it to "pass", "inconc" or "fail" depending on the conditions defined in the test case. To prevent a failed test from appearing valid, once the verdict has been set to "fail" it cannot be reverted to "pass" or "inconc"; similarly, "inconc" cannot revert to "pass". The special verdict "error" is reserved for runtime errors, in order to distinguish them from SUT failures.
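The overwriting rules can be summarised with a small sketch of statements inside a test case:

    setverdict(pass);                    // verdict: pass
    setverdict(inconc);                  // verdict: inconc (pass may be downgraded)
    setverdict(fail, "bad signature");   // verdict: fail, with an optional reason string
    setverdict(pass);                    // no effect: fail can never revert to pass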


6 Steps towards benchmarking and metrics

One of ARMOUR's main objectives is to define an approach for benchmarking Security & Trust technologies for experimentation on large-scale IoT deployments. In this respect, it is essential to provide IoT stakeholders with tools to evaluate how well their systems are prepared against IoT security threats. Benchmarking is the typical approach to this, and ARMOUR will establish a security benchmark for end-to-end security in the large-scale IoT, building upon the ARMOUR large-scale testing framework and process. Additionally, the ARMOUR experiments will be benchmarked using the ARMOUR benchmarking methodology.

Several dimensions will be considered, including:

• security attacks detection,
• defence against attacks and misbehaviour,
• ability to use trusted sources and channels,
• levels of security & trust conformity, etc.

Benchmarks of the Large-Scale IoT Security & Trust solution experiments will be performed and their datasets will be made available via the FIESTA testbed. Additional benchmarks of reference secure & trusted IoT solutions will be performed in order to establish a baseline for the ARMOUR experiments, but also to start building a proper benchmarking database of secure & trusted solutions suited to the large-scale Internet of Things.

In order to define this methodology and the different dimensions to be considered, ARMOUR will:

• take into account existing security frameworks such as PCI DSS v3, the SANS Top 20, relevant IETF WGs and the NIST Cyber Security Framework;

• consider the methodology defined in ARMOUR for generic test patterns and test models, based on the threats and vulnerabilities identified for IoT, and the procedure for generating tests from them;

• identify metrics per functional block (authentication, data security, etc.) in order to perform various micro- and macro-benchmarking scenarios on the target deployments;

• collect metrics from the evaluation, categorise them (taxonomy) according to the different functional aspects and, on this basis, provide a label-based approach to security assessment;

• associate the definition of threats, vulnerabilities and metrics with components of Protection Profiles for the type of IoT device under test, as defined in the Common Criteria methodology: threats and vulnerabilities can be mapped to the Security Problem Definition (ASE_SPD) of the Protection Profile, while the evaluation metrics can be mapped to its Security Objectives (ASE_OBJ).

Micro-benchmarks provide useful information for understanding the performance of the subsystems associated with a smart object. They help identify possible performance bottlenecks at architecture level and allow embedded hardware and software engineers to compare and assess the various design trade-offs associated with component-level design. Macro-benchmarks provide statistics relating to a specific function at application level and may involve multiple component-level elements working together to perform an application-layer task.

The final objective is to provide security benchmarking and assurance that includes a measure of the level of confidence in the conclusion. As an example of such a measure of confidence used to evaluate the security of an IT product or system, we mention the Evaluation Assurance Levels (EAL)4 of the Common Criteria. An EAL is a discrete numerical grade (from EAL1 through EAL7) assigned to an IT product or system following the completion of a Common Criteria security evaluation, an international standard in effect since 1999. These increasing levels of assurance reflect the fact that incremental assurance requirements must be met to achieve Common Criteria certification. The intent of the higher assurance levels is a greater confidence that the system's security features have been reliably implemented. The EAL does not measure the security of the system; it simply states at what level the system was tested.

• EAL 1: Functionally Tested
• EAL 2: Structurally Tested
• EAL 3: Methodically Tested and Checked
• EAL 4: Methodically Designed, Tested and Reviewed
• EAL 5: Semiformally Designed and Tested
• EAL 6: Semiformally Verified Design and Tested
• EAL 7: Formally Verified Design and Tested

Progress made within WP5 on the definition of the benchmark and label will be closely followed within WP2 in order to adjust the reporting level of the test execution platform so that it fulfils the WP5 needs.

4 Common Criteria for Information Technology Security Evaluation, Version 3.1, revision 4, September 2012. Part 3: Security assurance requirements, CCMB-2012-09-003.


7 Illustration

This section demonstrates a proof of concept of the process described in the previous chapters. The illustration further identifies implementation issues that have been tackled, highlights positive points and open issues, and suggests directions for improvement. Finally, we quantify the effort needed to deploy the tool chain.

7.1 Proof of concept scope – Exp1

To test the bootstrapping procedure as described for Experiment 1 (see D1.1), we identified that we can easily produce security test cases using the pattern-based approach with models. In this experiment we create the MBT models based on the experiment's input artefacts: the security test patterns and the experiment's refinement documents. As a proof of concept we concentrate on the security test pattern TP_ID6. Based on the experiment's needs, we will further extend the model to produce additional test cases using the ARMOUR pattern-based MBT approach.

Table 3 lists the entities that are part of the system architecture in Experiment 1. In the scope linked to TP_ID6, the elements on which we focus are the Sensors, the Firmware Manager and the Firmware.

Sensors: Reprogrammable embedded devices called RexLAB that can act as PANA Clients (PaC).

Gateways: In the tests of the bootstrapping stage, they can be implemented by the following devices:
• a server running software that acts as PANA Authentication Agent (PAA);
• a RexLAB device that can act as PAA.
In the tests of the group sharing stage, they can be implemented by the following devices:
• a RexLAB device plus an assistance device:
  o the RexLAB device can act as publisher and/or consumer of information;
  o the assistance device is necessary to delegate to it the operations of the CP-ABE cryptographic scheme (encryption and decryption), due to their cost in terms of required resources; this device must be reliable and not constrained.

AAA server, Capability Manager, Attribute Authority, PDP and Pub/Sub server: Servers that have the corresponding software to perform the specific function in each case.

Firmware Manager: Server that is accessible to the sensors and gateways for downloading firmware.

Sniffer: Device that has sniffer software.

Table 3 - Experiment 1 architecture entities

7.2 Pattern-Based Testing – Exp1

Based on an iterative and incremental collaboration, we defined the test objectives in the form of two test scenarios written in TPLan (as depicted in Figure 8 and Figure 9).

Based on these input elements, i.e. the security test patterns and the TPLans that formalise the testing objectives, we created the MBT model that is used for test case generation.

The structural part of the MBT model, depicted by the class diagram in Figure 22, represents the Sensor together with the functions it is able to perform (send a request, receive a response, send a response). The class Firmware_Manager represents the firmware manager and its available operations: receive a request, send a response and receive a response from another entity. The class Request represents the sensor's firmware update request to the firmware manager, while the class Response may represent either the firmware manager's response to the sensor's request or the sensor's response to the information sent by the firmware manager.

Figure 22 - Class Diagram - Exp1
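To indicate how these model classes later map onto concrete test data, the Request and Response classes could correspond to TTCN-3 structures along the following lines; this is a hypothetical sketch, not the actually published types.

    // possible sensor reactions captured by the Response class
    type enumerated ResponseStatus {
      INSTALLED_FIRMWARE, FAILED_SIGNATURE, NO_FIRMWARE_TO_INSTALL
    }
    type record Request {
      charstring sensorId,
      charstring requestedVersion
    }
    type record Response {
      ResponseStatus status
    }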

We further formalised the TPLans into a machine-readable language – Smartesting Test Purposes – to generate abstract test cases, which are later exported to TTCN-3 using the CertifyIt Publisher for TTCN-3.

Figure 23 depicts the test purpose formalising TP_ID6. In more detail, the test purpose iterates over the three possible responses of the sensor when it receives a firmware: with a valid signature (INSTALLED_FIRMWARE), with an invalid signature (FAILED_SIGNATURE), and when there is no firmware to install (NO_FIRMWARE_TO_INSTALL).

Figure 23 - Smartesting Test Purpose for TP_ID6 – Exp1

The Smartesting test generation engine produces three test generation targets corresponding to the test cases we would like to produce. For instance, to cover TP_ID6 the tool generates three test cases, activating each of the three sensor responses when receiving a firmware to install from the firmware manager.

Figure 24 - Abstract Generated Test Cases - Exp1

These test cases are then exported to TTCN-3 (see Figure 24, the TTCN-3 Publisher button at the top). They can be executed on FIT IoT Lab using the TITAN test execution tool and the system adapter. We discuss test case execution in the following section.


Figure 25 - MBT generated TTCN-3 test case

Figure 25 shows the exported TTCN-3 test case. It follows the structure of the abstract test. As explained in Section 4.2, the port configuration and the postamble are static, while the preamble and the test body are derived from the abstract test. The only difference with the example in Section 4.2 is that here the send and receive parts are wrapped in a function, in order to reduce redundant code and make the test easier to read.
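A hedged sketch of what such a wrapping function may look like is given below (hypothetical names, reusing the component and templates of the earlier sketches); the test body then reduces to one call per expected response, e.g. f_requestAndCheck(mw_installedFirmware).

    // sends the firmware request and checks the answer against the expected template
    function f_requestAndCheck(template FirmwareResponse p_expected) runs on MTC_Type {
      pt.send(m_firmwareRequest);
      alt {
        [] pt.receive(p_expected) { setverdict(pass) }
        [] pt.receive             { setverdict(fail) }
      }
    }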

7.3 Test Case Execution – Exp1

Having detailed how the design of our tests is specified, we now show the mapping between the entities listed in the experiments for the bootstrapping and group sharing stages (defined in D1.1) and the physical devices that implement them. In addition, new entities that appear in the tests are included. The devices implementing these entities constitute the architecture of our test environment (already introduced in Table 3). Moreover, the execution of the different tests will be conducted through the EPICURE5 platform, which allows both managing the reservation of the different testbeds it exposes and planning their use. The platform has an experimentation component made up of two modules: the OMF module, which deals with the execution of experiments on testbeds, and the SFA module, which is responsible for publishing those testbeds so that they can be discovered by the users performing the experiments. The entities constituting both modules are described next.

• OMF module. This module uses the OMF standard to allow users to conduct experiments on the different testbeds. The standard defines three entities: the OMF Experiment Controller (EC), one or more OMF Resource Controllers (RC) and an XMPP publish/subscribe server. The EC and the different RCs communicate through the publish/subscribe server.

5 http://epicure.inf.um.es/

Regarding the resources available in the testbeds, OMF allows different types of resources to be offered to users. Specifically, for our tests of the bootstrapping and group sharing experiments, we will use a set of reprogrammable sensors called RexLAB. Finally, it should be noted that the test data will be published in a location accessible to users so that they can consult it.

• SFA module. This module uses the SFA standard to allow users to plan and book testbeds for carrying out their experiments. To facilitate this task, it includes the MySlice submodule, which offers users a web portal through which they can perform the following actions:

- Select the testbeds they wish to use in the experiment.
- Check the status of the testbeds to know their availability at a specific time.
- Book testbeds to carry out the experiment.

To conclude, note the possibility of federating our two experiments through their Phase 3 (data publication). That is, once the entity that initiates the bootstrapping stage has gained access to the network and obtained the capability token (phases 1 and 2 of the bootstrapping experiment), it publishes a new encrypted value on a given topic in the Pub/Sub server. This new encrypted value is then notified to the entities subscribed to that topic (phase 3 of the group sharing experiment).

The process for executing any of our tests through the EPICURE platform is described next:

1. The user logs in to the web portal to book and schedule the testbed with RexLAB reprogrammable sensors.

2. After booking and scheduling the corresponding testbed, the user receives information on how to access the EC to carry out the experiments, usually the IP address of the EC and a protocol to connect to it (generally SSH).


3. The user runs one of the tests included in the EC or defines a new test by means of the OMF experiment description language (OEDL).

4. During a test's execution, the EC communicates with the Resource Controller (RC) of the testbed with reprogrammable sensors, using the OMF protocol, to execute scripts (.sh) which perform certain actions (e.g. download and install a firmware, boot a sensor, etc.).

5. The RC communicates with the different RexLAB sensors over the CoAP protocol so that they execute certain commands (start, status and stop), each of which performs a given action on a specific sensor.

6. The RC publishes the execution data of the test in a location accessible to the user so that it can be consulted.

7. Once the test has been completed, the user verifies the test data, which might in the future also be stored in the ARMOUR data testbed FIESTA.

Figure 26 - TTCN-3 proof of concept execution

In this milestone we succeeded in creating a proof of concept in which a DTLS message is sent by TITAN, acting as the test system, and received on a node at FIT IoT Lab, as shown in Figure 26. After reception, the SUT responds with a corresponding message. If the correct message is received by TITAN, the verdict is set to pass. If something else is received, the verdict is set to fail, and if no message arrives within 10 seconds, the verdict is set to inconc, because we cannot tell what caused the delivery failure. In the next project milestone, the abstract test cases will be made available for execution on FIT IoT Lab; since the system adapter for TITAN is almost ready, no difficulties are foreseen in concretising the test cases.
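The behaviour described above corresponds to a classic guarded alt statement. A minimal sketch is given below; the component, port and the two DTLS templates are hypothetical placeholders for the ones actually used in the proof of concept.

    testcase TC_DTLS_PoC() runs on MTC_Type system MTC_Type {
      timer t_guard := 10.0;                  // 10-second guard
      map(self:pt, system:pt);
      pt.send(m_dtlsClientHello);             // message prepared by the Codec, sent via the SA
      t_guard.start;
      alt {
        [] pt.receive(mw_expectedDtlsAnswer) { setverdict(pass) }    // correct answer received
        [] pt.receive                        { setverdict(fail) }    // anything else
        [] t_guard.timeout                   { setverdict(inconc) }  // cause of failure unknown
      }
      unmap(self:pt, system:pt);
    }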


7.4 Discussion and cost-efforts

In order to evaluate the cost effectiveness of the approach, the table below gives the effort spent on test objective analysis, model conception, test generation and publication, adapter conception, testbed configuration, and test execution time for Experiment 1 (Exp1).

Note that the metrics related to objective analysis, testbed configuration and test execution time are independent of the model-based approach and apply equally to the other testing approaches (semi-automated, in-house).

Type of Metric                           #hours
Test objective analysis                  2
MBT model conception                     1
Test case generation                     0.0017
TTCN-3 Publisher and Adapter             None (simple empty request on FIT IoT Lab)
Test case publication in TTCN-3          0.0005
System adapter conception                3 for initial set-up, non-recurring
Testbed configuration                    Currently in progress
Average test execution per test case     ≤ 1 second

Table 4 - Exp1 proof of concept metrics

Table 4 shows that the Exp1 model is simple; it will grow as new test objectives are analysed. For the TTCN-3 publisher and adapter of Exp1, no additional effort was required. Nevertheless, an initial effort was spent on Exp7 (Secure IoT Platforms – oneM2M standard) to create a general TTCN-3 publisher, an effort that is amortised by its re-use on other IoT segments, as shown for instance on Exp1. Test execution for Exp1 is not time-consuming, and we believe that, once the FIT IoT Lab testbed configuration is available, it will not exceed the already measured test execution time.


To conclude, this proof of concept on Exp1 showed that the model-based approach can be introduced for the other experiments, with respect to their needs, and integrated within the FIT IoT Lab Testbed.


8 Conclusion

Deliverable D2.2 proposes a strategy based on MBT to generate tests within the ARMOUR context. Firstly, a traditional, yet robust, approach based on the Test Purpose Language used in standardisation contexts is described. Then, a proposal is made to extend the state of the art on the topic by adding model-based generation of the TPLan. A second level of innovation is introduced with the automatic generation of TTCN-3 test cases from the MBT model, thus allowing full traceability from the requirements down to the TPLans and the test execution.

Initial trials have been made on a few samples of the planned experiments to evaluate the proposed concepts. These trials have been successful and the methodology will be proposed to WP1 for the further description of the experiments within D1.2. The trials also covered the testing of the experiment deployments that will be done within WP3.

Finally, initial thoughts on the exploitation of test results within a benchmarking programme have been shared and will be further discussed within WP5.


9 References

[1] S. Schulz, A. Wiles, S. Randall, "TPLan – A Notation for Expressing Test Purposes", Testing of Software and Communicating Systems, 19th IFIP TC6/WG6.1 International Conference, TestCom 2007, pp. 292-304.
[2] R. V. Binder, B. Legeard, A. Kramer, "Model-Based Testing: Where Does It Stand?", ACM Queue, 13, 2014, pp. 40-48.
[3] ISTQB, ISTQB Foundation Level Certified Model-Based Tester Syllabus, 2015.
[4] C. Willcock et al., An Introduction to TTCN-3, 2011.
[5] F. Bouquet, C. Grandpierre, B. Legeard, F. Peureux, N. Vacelet, M. Utting, "A subset of precise UML for model-based testing", 3rd Int. Workshop on Advances in Model Based Testing, 2007, pp. 95-104.
[6] oneM2M Interop 2, 10-13 May 2016, Seongnam-City, South Korea, http://www.etsi.org/news-events/events/1045-onem2m-interop-2
[7] F. Le Gall, D. G. Jimenez, L. Artusio, T. Nagellen, J. Bernard, L. Gruber, E. Jaffuel, B. Legeard, "Testing a webservices based ecosystem using MBT: the case of the Future Internet Public Private Partnership (FI-PPP)", UCAAT 2014, Munich, Germany.


10 Annexes

10.1 Refined Test Patterns

From the test pattern library introduced in D2.1, a set of patterns was selected and customised to be applied to our experiments in the bootstrapping and group sharing stages. The selection was carried out taking into account the vulnerabilities that can occur in both stages and that are verified by the selected patterns. The test design that we will perform is therefore based on this set of customised test patterns.

Next, some examples of our test designs are detailed in order to clarify how we will specify them. For each experiment, we show the test pattern ID on which the test is based, the stage in which it is applied, its UML sequence diagram and a brief description.

10.1.1 Experiment 1

Test pattern ID TP_ID6

Stage Bootstrapping

Test diagram

Test description

Entities:

• Firmware Manager. Server that stores signed security firmware to carry out the bootstrapping stage. Firmware can be downloaded by the sensors.

• Sensor. Entity which accesses the Firmware Manager to download and install a firmware.

Steps:

1. The sensor downloads a firmware from the Firmware Manager.

2. The sensor performs a signature verification process. If the verification is valid, the sensor must install the firmware for the test to be satisfactory. If the verification is invalid, the firmware must be discarded for the test to be satisfactory. Otherwise, the test is not satisfactory.

Test pattern ID TP_ID8

Stage Group sharing

Test diagram

Test description

Entities:

• Attribute Authority. Server that generates and manages CP-ABE private keys.

• Gateway. Entity which contacts the Attribute Authority to obtain its CP-ABE private key.

• Sniffer. Entity which attempts to observe and interpret the communication between the gateway and the Attribute Authority in order to obtain the CP-ABE private key.

Steps:

Note: The whole communication between the gateway and the Attribute Authority is carried out using the CoAP-DTLS (COAPS) protocol.

1. The gateway requests its CP-ABE private key from the Attribute Authority, including its certificate.
2. The Attribute Authority extracts the set of attributes of the entity from the certificate and generates the CP-ABE private key associated with that set.
3. The Attribute Authority sends the gateway its CP-ABE private key.
4. The sniffer intercepts the messages exchanged between the gateway and the Attribute Authority and tries to read the CP-ABE private key.

If the sniffer cannot read the CP-ABE private key, the test will be satisfactory. Otherwise, the test will not be satisfactory.

Test pattern ID TP_ID13

Stage Group sharing

Test diagram

Test description

Entities:

• Pub/Sub Server. Server that stores and manages publications and subscriptions on different topics.

• Gateway (Producer). Entity which publishes values encrypted by the CP-ABE cryptographic scheme in the Pub/Sub Server.

• Gateway (Consumer). Entity that is subscribed to the Pub/Sub Server to receive notifications when the value of a certain topic is updated.

Steps:

Note: The consumer gateway CA has a CP-ABE private key that satisfies the policy used by the producer gateway to encrypt the information, while the consumer gateway CB has a CP-ABE private key that does not satisfy this encryption policy.

1. The consumer gateways CA and CB are subscribed to the same topic (e.g. "room's temperature") in the Pub/Sub Server.

2. The producer gateway publishes a new encrypted value of the topic in the Pub/Sub Server.

3. The consumer gateways receive the corresponding notifications from the Pub/Sub Server with the new value of the topic and they try to decipher it using their CP-ABE private key.

If CA gets back the value of the topic and CB does not, the test will be satisfactory. Otherwise, the test will not be satisfactory.

10.1.2 Experiment 2

Test Pattern ID TP_ID1 and TP_ID9

Stage Entities Authentication

Test diagram

Test Description

Entities:

• Malicious Device. 3rd party device that discovered Master Keys from a legitimate device and tries to access a software update

• Software Provisioning Server. Server storing software versions to update legitimate devices

Steps:

1. Malicious device discovers master keys – Hardware ID (TP_ID9), Software Fingerprint (TP_ID1), etc. – from legitimate devices;

2. Malicious device starts authentication attempt by sending a Software Access Request message to the Provisioning Server;

3. Provisioning server sends an authentication challenge;
4. Malicious device applies a derivation algorithm over the master keys in order to produce a valid reply to the challenge;
5. Provisioning server accepts or rejects the access request.

Steps 2, 3, 4 and 5 are executed repeatedly until the malicious device successfully authenticates with the Provisioning Server and gains access to a software update, or until all the key derivation algorithms have been tested.

The test is considered satisfactory if the authentication of the malicious device was rejected, and not satisfactory otherwise.

Test Pattern ID TP_ID3 and TP_ID13

Stage Software update distribution

Test diagram

Test Description

Entities:

• Legitimate Sensor. Device already authenticated with the Software Provisioning Server

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Entity. Entity able to eavesdrop the communication between a Software Provisioning Server and a legitimate sensor and wants to have access to a software image

Steps:

1. Legitimate sensor sends a software request to the software provisioning server;


2. Both the software provisioning server and the malicious entity receive the software request;

3. Malicious entity tries to read information from the request;
4. The software provisioning server encrypts the requested software image and sends it to the legitimate sensor;
5. Both the legitimate sensor and the malicious entity receive the encrypted software image;
6. The malicious entity tries to decrypt the software, potentially using information acquired from the software request message.

These steps will be executed multiple times to test the performance and energy consumption on the sensor for different encryption algorithms.

The test is considered satisfactory if the Malicious Device was not able to decrypt the software, and not satisfactory otherwise.

Test Pattern ID TP_ID4

Stage Entities Authentication

Test diagram

Test Description

Entities:

• Legitimate Sensor. Device attempting to authenticate to have access to software updates

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Entity. Entity able to eavesdrop the communication between a Software Provisioning Server and a legitimate sensor and wants to have access to a software image

Steps:

1. Legitimate Sensor sends a software request to a software provisioning server;

2. Malicious Device intercepts the request from the Legitimate Sensor, changes the Device ID and/or Software ID (to a version for which the software fingerprint was previously known) and sends it to the Provisioning Server;
3. Malicious Device sends the altered message to a software provisioning server to start the authentication process;
4. Software provisioning server sends the corresponding challenge;
5. Malicious device attempts to successfully reply to the challenge.

The test is considered satisfactory if the authentication of the malicious device was rejected, and not satisfactory otherwise.

Test Pattern ID TP_ID5

Stage Entities Authentication

Test diagram

Test Description

Entities:

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Device. Entity that tries to replay messages sniffed during the authentication phase of a legitimate device in an attempt to successfully authenticate itself against a Software Provisioning Server

• Legitimate Sensor. Device attempting to have access to software updates

Steps:

1. Legitimate sensor performs the messaging exchange to authenticate against a software provisioning server, while a malicious entity sniffs all the messages sent by the legitimate sensor;

2. After the authentication of the legitimate sensor, the malicious device uses the messages sniffed to start the authentication process (with the same or a different Software provisioning server) and to reply to the authentication challenge;


The test is considered satisfactory if the authentication of the malicious device was rejected, and not satisfactory otherwise.

Test Pattern ID TP_ID6

Stage Authentication and software distribution

Test diagram

Test Description

Entities:

• Malicious Provisioning Server. Malicious entity that impersonates a legitimate provisioning server and tries to mislead legitimate sensors into installing malicious software

• Legitimate Sensor. Device attempting to have access to software updates

Steps:

1. The Malicious Provisioning Server announces the existence of a new software in order to lure Legitimate Sensors to initiate the software update process;

2. Malicious Provisioning Server tries to mutually authenticate with the legitimate sensor through the challenge exchange;

3. Malicious Provisioning Server accepts the authentication and sends a malicious software image to the Legitimate Sensor.

The test is considered satisfactory if the authentication of the malicious device was rejected or if the legitimate sensor detects that the software image is not legitimate, and not satisfactory otherwise.


Test Pattern ID TP_ID8

Stage New software announcement and authentication

Test diagram

Test Description

Entities:

• Software Provisioning Server. Server storing software versions to update legitimate devices

• Malicious Device. Acts as man in the middle between a Software Provisioning server and a Legitimate Sensor

• Legitimate Sensor. Device attempting to have access to software updates

Steps:

1. The Malicious Device announces the existence of a new software version in order to lure Legitimate Sensors;

2. A legitimate sensor sends a software access request to the Malicious device, which uses this message to start an authentication process with a Software Provisioning Server;

3. The Malicious device uses the Challenge messaging sent by the Software Provisioning Server to authenticate against the Legitimate Sensor;

4. After a successful authentication, the Malicious Device sends a malicious software image to the Legitimate Sensor.

The test is considered satisfactory if the authentication of the malicious device was rejected or if the legitimate sensor detects that the software image is not legitimate, and not satisfactory otherwise.


10.1.3 Experiment 5

Test Pattern ID TP_ID4

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that modifies the data sent by the Server node before forwarding them to the Client node.
• Intermediate node - Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-Trusted) for Server node to follow towards the RPL tree root.

1. Client node sends Request 1 to Server node.
2. Server node replies to Request 1 with Message 1 using the Malicious node as the next hop.
3. Malicious node modifies Message 1 and forwards it to the Client node.
4. Server node overhears the channel, detects Message 1 forgery by the Malicious node, and adjusts its routing table.
5. Client node sends Request 2 to Server node.
6. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.


The test is considered successful if Server node adjusts its routing table so that after the first request and reply to Client node it detects the Malicious node and excludes it from its routing path.

Test Pattern ID TP_ID5

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node and sends them to the Client node to perform a replay attack.

• Intermediate node - Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.

• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-Trusted) for Server node to follow towards the RPL tree root.

1. Client node sends Request 1 to Server node.
2. Server node replies to Request 1 with Message 1 using the Malicious node as the next hop.
3. After some time, Malicious node retransmits Message 1 to Client node.


4. Server node overhears the channel, detects Message 1 retransmission by the Malicious node, and adjusts its routing table accordingly.

5. Client node sends Request 2 to Server node.
6. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.

The test is considered successful if Server node adjusts its routing table so that after the retransmission of an older message to Client node it detects the Malicious node and excludes it from its routing path.

Test Pattern ID TP_ID6

Stage Bootstrapping and under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that receives messages sent by the Server node to the Client node and discards them instead of forwarding them (black-hole, grey-hole attack).

• Intermediate node - Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.

• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-Trusted) for Server node to follow towards the RPL tree root.

1. Client node sends Request 1 to Server node.
2. Server node replies to Request 1 with Message 1 using the Malicious node as the next hop.
3. Malicious node receives Message 1 and discards it.
4. Server node overhears the channel, detects the Message 1 forwarding failure and adjusts its routing table.
5. Client node sends Request 2 to Server node.
6. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.

The test is considered successful if Server node adjusts its routing table so that the failed forwarding of a message to Client node by the Malicious node results in its exclusion from the routing path towards the Client node.

Test Pattern ID TP_ID7

Stage Under normal network operation


Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that receives messages sent by the Server node to the Client node and discards them instead of forwarding them (black-hole, grey-hole attack).

• Intermediate node - Legitimate node that replaces the malicious node in the routing path once it is detected by the Server node.

• Client node - Device that requests data from the Server node.

Steps:

Note: Contexts A and B are two different setups where the metrics with which the RPL tree is constructed differ. In both setups, the Malicious node has taken part in the RPL routing procedure and was identified as the next hop (most-Trusted) for Server node to follow towards the RPL tree root.

1. Context A is set up.
2. Client node sends Request 1 to Server node.
3. Server node sends Message 1 to the Malicious node.
4. Malicious node discards Message 1.
5. Server node overhears the channel, detects the discarding of Message 1 by the Malicious node, and adjusts its routing table.
6. Client node sends Request 2 to Server node.
7. Server node replies to Request 2 with Message 2 using the Intermediate node as next hop.
8. Context B is set up.
9. Client node sends Request 1 to Server node.
10. Server node sends Message 1 to the Malicious node.
11. Malicious node discards Message 1.
12. Server node overhears the channel but ignores the discarding of Message 1 by the Malicious node.
13. Client node sends Request 2 to Server node.
14. Server node sends Message 2 to the Malicious node.
15. Malicious node discards Message 2.
16. Server node overhears the channel, detects the discarding of Message 2 by the Malicious node, and adjusts its routing table.
17. Client node sends Request 3 to Server node.
18. Server node replies to Request 3 with Message 3 using the Intermediate node as next hop.

The test is considered successful if Server node adjusts its routing table to exclude the Malicious node from the path to the root depending on the Context in which it operates, i.e. within Context A the Malicious node is immediately detected and excluded from future message transmissions, while within Context B there is some tolerance for the Malicious node's behaviour.

Test Pattern ID TP_ID8

Stage Under normal network operation


Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that had acquired valid credentials and sends stored data requests to Server node.

Steps:

Note: The Malicious node had in some way acquired valid credentials to authenticate itself as a valid node and is within transmission range of Server node.

1. Server node encrypts and stores data.
2. Malicious node requests stored data from Server node.
3. Server node initiates the authentication handshake.
4. Malicious node replies with valid authentication credentials.
5. Server node validates the authentication credentials and responds with the stored data.
6. Malicious node receives the stored data and attempts to decrypt them.

The test is considered successful if the Malicious node fails to decrypt and read the data even though the authentication handshake was successful.


10.1.4 Experiment 6

Test Pattern 1 describes the communication between a Server node and a Malicious node. The malicious node tries to authenticate itself to the server with generated credentials. If everything works correctly, the server should reject the request.

Test Pattern ID TP_ID1

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that issues key deletion/replacement requests to Server node.

Steps:

Note: The Malicious node is within transmission range of the Server node.

1. Malicious node sends a key deletion/replacement request to Server node.
2. Server node receives the request and initiates the authentication handshake procedure.
3. Server node requests authentication credentials from the Malicious node.
4. Malicious node replies with generated credentials.
5. Server node validates the authentication credentials.

The test is considered successful if the Malicious node fails to pass the authentication procedure.


In Figure 27, TP_ID1 is written in the oneM2M TPLan format.

TP Id: EXP6_ID1_01
Test objective: Check that the Server successfully discards a request with an invalid authentication from a Malicious node.
Test Pattern Reference: TP_ID1
Config Id: SST_01
Stage: Normal network operation
Initial conditions:
with {
  the Server being in the "initial state"
  and the Server received a Key deletion request
  and the Server responded with an authentication request
}
Expected behaviour:
  Test events                                                     Direction
  when { the Server receives a CREDENTIAL from Malicious node }   Server - Malicious node
  then { the Server fails to validate the signature }             Server - Malicious node

Figure 27 - TPLan_ID1 contributed by SYNELIXIS

Test Pattern ID TP_ID4

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node.


• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node is overhearing the channel and is within transmission range of Server and Client node.

1. Malicious node starts overhearing the channel.
2. Client node sends Request 1 to Server node.
3. Server node replies to Request 1 with Message 1.
4. Malicious node overhears Message 1 and modifies it.
5. Malicious node forwards the modified Message 1 to the Client node.
6. Client node initiates the authentication procedure.

The test is considered successful if the Malicious node fails to pass the authentication procedure.

Test Pattern ID TP_ID5

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node is overhearing the channel and is within transmission range of Server and Client node.

1. Malicious node starts overhearing the channel.
2. Client node sends Request 1 to Server node.
3. Server node replies to Request 1 with Message 1.
4. Malicious node overhears Message 1 and retransmits it to Client node.
5. Client node initiates the authentication procedure.

The test is considered successful if the Malicious node fails to pass the authentication procedure.

Test Pattern ID TP_ID6

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node – Device that receives and replies to requests.
• Malicious node - Device that issues software update requests to Server node.

Steps:

Note: The Malicious node is within transmission range of Server node.

1. Malicious node sends a software request to Server node.
2. Server node receives the request and initiates the authentication handshake procedure.
3. Server node requests authentication credentials from the Malicious node.
4. Malicious node replies with generated credentials.
5. Server node processes the authentication credentials.

The test is considered successful if the Malicious node fails to pass the authentication procedure.

Test Pattern ID TP_ID8

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that overhears messages sent by the Server node and attempts to decrypt them.
• Client node - Device that requests data from the Server node.

Steps:

Note: The Malicious node is overhearing the channel and is within transmission range of Server and Client node.

1. Malicious node starts overhearing the channel.
2. Client node sends Request 1 to Server node.
3. Server node replies to Request 1 with Message 1.
4. Malicious node overhears Message 1.
5. Malicious node attempts to decrypt Message 1.

The test is considered successful if the Malicious node cannot read the contents of Message 1.

Test Pattern ID TP_ID13

Stage Under normal network operation

Test diagram

Test Description

Entities:

• Server node - Device that receives and replies to requests.
• Malicious node - Device that had acquired valid credentials and sends stored data requests to Server node.

Steps:

Note: The Malicious node had in some way acquired valid credentials to authenticate (DTLS) itself as a valid node and is within transmission range of Server node.

1. Server node encrypts and stores data.
2. Malicious node requests stored data from Server node.
3. Server node initiates the authentication handshake.
4. Malicious node replies with valid authentication credentials.
5. Server node validates the authentication credentials and responds with the stored data.
6. Malicious node receives the stored data and attempts to decrypt them.

The test is considered successful if the Malicious node fails to decrypt and read the data even though the DTLS authentication handshake was successful.


10.1.5 Experiment 7

Experiment 7 is oriented towards the verification of the security functions of oneM2M and FIWARE, and intends to use the oneM2M Test Purpose description approach. Because of this way of working, we do not define Test Patterns in order to obtain the Test Purposes.

TP Id: TP/oneM2M/CSE/SEC/BI/002
Test objective: Check that the IUT responds to the originator with an error when the originator sends a request including a different AE identifier.
Reference: TP_SEC_7_2, TS-0003-22
Config Id: CF01
PICS Selection: PICS_CSE
Initial conditions:
with {
  the IUT being in the "initial state"
  and the IUT having registered the AE with AE_ID set to a valid AE_IDENTIFIER
}
Expected behaviour:
  Test events                                                              Direction
  when { the IUT receives a valid #REQUEST request from AE
         containing From not set to AE_IDENTIFIER }                        IUT <- AE
  then { the IUT sends a Response message
         containing Response Status Code not set to 2000 (OK) }            IUT -> AE

TP Id Reference REQUEST

TP/oneM2M/CSE/SEC/BI/002_01 RETRIEVE

TP/oneM2M/CSE/SEC/BI/002_02 CREATE

TP/oneM2M/CSE/SEC/BI/002_03 DELETE

TP/oneM2M/CSE/SEC/BI/002_04 UPDATE

TP/oneM2M/CSE/SEC/BI/002_05 NOTIFY



TP Id: TP/oneM2M/CSE/SEC/BI/003
Test objective: Check that the IUT accepts an AE registration (allowed App-ID, S-AE-ID-STEM not provided by AE).
Reference: TS-0003-8
Config Id: CF02
PICS Selection: PICS_CSE
Initial conditions:
with {
  the IUT being in the "initial state"
}
Expected behaviour:
  Test events                                                              Direction
  when {
    the IUT having a CSEBase resource containing a ServiceSubscribedProfile resource containing a ServiceSubscribedNode resource containing RuleLinks attribute pointing to a ServiceSubscribedAppRule resource containing applicableCredIDs attribute set to None and allowedApp-IDs attribute indicating APP-ID and allowedAEs attribute indicating a value starting with 'S' and containing a wildcard
    and the IUT receives a valid CREATE request from AE containing Resource-Type set to 2 (AE) and From not set to empty and Content <AE> containing App-ID attribute set to APP-ID
  }                                                                        IUT <- AE
  then {
    the IUT sends a CREATE request to the IN-CSE containing Resource-Type set to 10002 (AEAnnc) and From not set to '/S' and Content <AEAnnc> containing App-ID attribute set to APP-ID and nodeLink attribute set to ??? and Labels indicating 'Credential-ID:None'
  }                                                                        IUT -> IN-CSE