Last updated: 2017-04-14

Software Dependability Lab Summer Term 2017

P2P Live Streaming OSSim/OMNeT++ Module Analysis/Extension

Background P2P-based online streaming and Video on Demand (VoD) applications attract millions of users due to the benefits of their distributed nature, e.g., off-loading the video content from the source and avoiding bottlenecks and single points of failure. Nonetheless, the absence of a controlling and auditing authority inevitably results in various security threats. Such attacks drastically impact the streaming quality due to the real-time constraints of P2P streaming applications.

Accordingly, a thorough analysis of the attacks' impact is needed in order to gain a clear picture of suitable countermeasures. For this, we make use of the OMNeT++ tool. OMNeT++ is a widely used simulator that provides various P2P protocol modules. OSSim is a very useful extension for analyzing security threats in various P2P online streaming platforms.

Objectives Our goal is three-fold:

Evaluating the impact of various attack scenarios using OSSim over OMNeT++.

o OMNeT++: https://omnetpp.org
o OSSim: Nguyen, G., et al. "OSSim: A generic simulation framework for overlay streaming." In Proc. SCSC, 2013, pp. 30-38.

Implementing a new attack strategy.

Implementing simple countermeasures while minimizing the resulting overhead.

Prerequisites Knowledge of P2P protocols (having attended one of the following TUD courses: a) P2P, b) Communication Networks II, or c) Introduction to Network Centric Systems).

Very good knowledge of C++.

Contacts Hatem Ismail [email protected] S2|02 E206


Cache Timing Analysis

Background Side-channel attacks represent a serious threat to various complex systems, e.g., the Cloud. This class of attacks stems from the usage of shared resources between the attacker and the victim (e.g., the cache), and such attacks are hardly detectable by a normal intrusion detection system. One reason is that the side-channel attacker does not employ the traditional communication channels to compromise the confidentiality of the victim, but instead uses the shared resources. Cache-based side-channel attacks exploit the cache as a side channel and usually rely on profiling the cache during the victim's operations to extract confidential information.

Objectives A prerequisite for conducting cache-based side-channel attacks is the analysis of the cache usage. The objective of this lab is the design and implementation of a tool for analysing cache accesses and measuring hit and miss ratios for selectively chosen workloads.
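
To make the measurement task concrete, the following is a minimal C sketch of a timing-based cache probe of the kind such a tool could automate. It is purely illustrative and not part of the lab materials: it assumes an x86-64 machine with GCC or Clang, and the cycle threshold separating hits from misses is an arbitrary placeholder that would have to be calibrated per machine.

/* Minimal sketch of a timing-based cache probe (x86-64, GCC/Clang).
 * The threshold and the probed buffer are illustrative choices only. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence */

/* Time a single load of *addr in CPU cycles. */
static uint64_t time_access(const volatile char *addr)
{
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                      /* the probed memory access */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    static char buf[4096];
    const uint64_t threshold = 150;   /* placeholder hit/miss boundary */
    unsigned hits = 0, misses = 0;

    for (int i = 0; i < 1000; i++) {
        /* Alternate between a flushed (likely miss) and a warm (likely hit) access. */
        if (i % 2 == 0)
            _mm_clflush(buf);         /* evict the cache line -> expect a miss */
        uint64_t cycles = time_access(buf);
        if (cycles < threshold)
            hits++;
        else
            misses++;
    }
    printf("hits: %u, misses: %u\n", hits, misses);
    return 0;
}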

Prerequisites C/C++ programming skills

Contacts Tsvetoslava Vateva-Gurova [email protected] +49 6151 16 25236 S2|02 E207


Enhancing Model Checking Tools With Program Slicing

Background Program slicing is a well-established technique that can be used for a wide range of purposes such as debugging or software partitioning. Given a so-called slicing criterion, a program slicer computes the portion of the code that is relevant to the criterion. There exist many techniques to achieve this goal with different degrees of precision and efficiency. One of the goals that can be achieved using this technique is reducing the code size for model checking by filtering out code that is irrelevant to the specification under consideration.
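
As a small illustration of the idea (not taken from the lab materials), consider the C program below: a backward slice with respect to the criterion "value of sum at the printf" keeps only the lines marked [slice], while the product computation is irrelevant to the criterion and can be filtered out before model checking.

/* Illustrative slicing example; the [slice] markers show the backward
 * slice for the criterion (printf line, variable sum). */
#include <stdio.h>

int main(void)
{
    int n = 10;                    /* [slice] n influences sum            */
    int sum = 0;                   /* [slice]                             */
    int product = 1;               /*         irrelevant to the criterion */

    for (int i = 1; i <= n; i++) { /* [slice] the loop controls sum       */
        sum += i;                  /* [slice]                             */
        product *= i;              /*         irrelevant to the criterion */
    }

    printf("%d\n", sum);           /* [slice] slicing criterion: (here, sum) */
    return 0;
}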

Objectives In this Lab, the student is required to extend an existing model checker with program slicing capability, test the implementation and analyze its performance.

Prerequisites C/C++ programming experience
Experience with Model Checking tools and basic knowledge of bounded model checking are beneficial but not mandatory.

Contacts Habib Saissi [email protected] +49 6151 16 25234 S2|02 E221


Race detection for LLVM based on the maximal causality model

Background Despite the numerous static and dynamic race detection techniques proposed in the literature, data races remain among the most common bugs found in modern concurrent software. Recently, a promising dynamic approach has been proposed that provides automatic and sound support for data race detection. Given an execution trace of a program, the approach is able to find the maximal number of data races that can be found using a single trace. For that purpose, a causality model is built that describes the maximal number of feasible traces that can be inferred from the original trace without any knowledge about the program. This model contains the different possible interleavings of the trace that might potentially lead to data races. The model is encoded as a logical formula, which is passed to an SMT solver to check for the existence of data races.
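
For illustration only (the program below is not part of the lab materials), the following C program contains a classic data race on a shared counter. From a single recorded trace of this program, a maximal-causality analysis would infer that the two unsynchronized increments can also be reordered in other feasible traces and would report the race.

/* Two threads increment a shared counter without synchronization. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;           /* shared, unprotected */

static void *worker(void *arg)
{
    (void)arg;
    counter++;                    /* racy read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* may print 1 or 2 */
    return 0;
}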

Objectives The goal of this project is to port the race detection approach, originally developed for Java, to LLVM. This involves reading through the paper describing the approach and implementing it in the LLVM framework, using the original Java implementation for guidance.

Prerequisites C/C++ Programming skills.

Experience with LLVM and the Clang compiler is beneficial but not mandatory.

Experience with SAT/SMT solving tools such as Z3 is beneficial but not mandatory.

Contacts Habib Saissi [email protected] +49 6151 16 25234 S2|02 E221


Linux Driver Debugging with Static Analysis and Dynamic Tracing

Background Due to its simplicity and effectiveness, driver developers often use the kernel's printk facility for debugging rather than complicated debugger-based techniques, i.e., they insert print statements into the code in order to get deeper insights into the execution of a piece of code. Unfortunately, it is a cumbersome and error-prone task to instrument a device driver using this technique. Especially for larger debugging tasks with function and call parameter tracing, many structurally similar printks have to be inserted. If the code or the debugging needs change, a good part of the manual instrumentation work has to be changed or redone as well. A tool that analyses the driver code and automatically instruments it based on the current debugging needs would therefore be very helpful.

Objectives The objective of this lab is to refine and improve our existing tool chain for automating driver debugging. Our tool can already instrument driver code for function call tracing: function calls are wrapped with logging statements that record the function name as well as the function arguments. This functionality needs further refinement for capturing call contexts. Furthermore, the tool is still missing the ability to watch changes in data structures. A log post-processing and visualization tool is also still missing. Please note that it is open for discussion which aspects of our tool chain are to be extended in this lab.
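
The following user-space C sketch illustrates the wrapping pattern described above. It is a hand-written analogue, not output of our tool chain; all names in it (TRACE_CALL_INT, read_register) are made up for illustration, and the real instrumentation targets kernel code and uses printk instead of printf.

/* User-space illustration of wrapping a call with logging statements. */
#include <stdio.h>

/* Log entry/exit and the return value of a traced call. */
#define TRACE_CALL_INT(ret, fn, ...)                          \
    do {                                                      \
        printf("enter %s\n", #fn);                            \
        ret = fn(__VA_ARGS__);                                \
        printf("exit  %s -> %d\n", #fn, ret);                 \
    } while (0)

static int read_register(int reg)
{
    printf("  read_register(reg=%d)\n", reg);  /* stands in for real driver work */
    return reg * 2;
}

int main(void)
{
    int value;
    /* Instead of calling read_register(3) directly, the call is wrapped: */
    TRACE_CALL_INT(value, read_register, 3);
    printf("value = %d\n", value);
    return 0;
}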

Prerequisites Our tool chain relies on Clang/LLVM [1] for analyzing C source code and Coccinelle [2] for performing the actual code instrumentation. Hence, a good knowledge of the C and C++ programming languages, including preprocessor macros, is very useful, especially as Linux drivers are written in C and make use of advanced macros. Ideally, students also have experience with the GNU make tool, the GCC and Clang compilers, standard command-line tools, and shell scripting.

Contacts Oliver Schwahn [email protected] +49 6151 16 25234 S2|02 E221

[1] http://www.llvm.org
[2] http://coccinelle.lip6.fr


Testing & Debugging of Linux Drivers in User Space

Background It has often been shown that drivers are major contributors to reliability issues in contemporary commodity OSs such as Linux. The development, testing, and debugging of device drivers is difficult. One of the reasons is the complex in-kernel development environment, which is hard to cope with as there is a lack of tool support and documentation. For user-space software development, there exist many tools that support developers in programming, finding documentation, testing, and debugging. This is not the case for Linux driver development. If the plethora of user-space tools and techniques could be applied to Linux drivers, the driver development process could be drastically improved.

Objectives The objective of this lab is to develop a prototype tool that allows for the automated moving of Linux drivers to user space in order to make them more easily accessible to the testing and debugging tools and techniques that are commonly in use for user program development. The driver code has to be compiled separately from the kernel to allow execution in a user-space process. The kernel interface as well as the hardware interface needs to be abstracted in a suitable way that allows exercising code paths in the driver. As an initial approach, function calls into the kernel could be replaced by mock code, and hardware interactions could be simulated by a simple model, or real hardware could be passed through. Relying on User-mode Linux [1] for certain functionalities may be a good approach. However, we do not need a whole functioning Linux system in user space.
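
As a rough sketch of the mocking idea (all names below, such as mock_kmalloc and driver_init_buffer, are invented for illustration, and the kernel interfaces are heavily simplified), driver code could be recompiled against user-space stand-ins like the following:

/* User-space stand-ins for kernel services plus a tiny test harness. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* --- mocks for kernel services the driver expects --- */
static void *mock_kmalloc(size_t size) { return malloc(size); }
static void  mock_kfree(void *p)       { free(p); }
#define mock_printk(...) printf(__VA_ARGS__)

/* --- "driver" code, compiled unchanged except that kernel calls are
 *     redirected to the mocks (e.g., via #define kmalloc mock_kmalloc) --- */
static char *driver_init_buffer(size_t len)
{
    char *buf = mock_kmalloc(len);
    if (!buf) {
        mock_printk("driver: allocation failed\n");
        return NULL;
    }
    memset(buf, 0, len);
    mock_printk("driver: buffer of %zu bytes initialized\n", len);
    return buf;
}

/* --- user-space test harness exercising a driver code path --- */
int main(void)
{
    char *buf = driver_init_buffer(64);
    if (buf)
        mock_kfree(buf);
    return 0;
}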

Prerequisites Good knowledge of the C programming language is very useful as Linux is written in C. Ideally, students have experience with the Linux kernel tool chain and have built their own Linux kernels in the past. Students should also have experience with the GNU make tool, the GCC compiler, standard command-line tools, and shell scripting.

Contacts Oliver Schwahn [email protected] +49 6151 16 25234 S2|02 E221

[1] http://user-mode-linux.sourceforge.net


Automated Robustness Testing Against Library Errors

Background In order to cope with the complexity of software systems, functionality is often encapsulated in libraries that can be reused across different programs. In order to properly use library functions, it is important to closely inspect their return values, as error conditions are often reflected by certain indicator values. Unfortunately, it is easy to miss corner cases for complex library functions, especially as the error conditions and corresponding return codes are often not even properly documented in the libraries' API specifications. For this purpose, a group of researchers has developed LFI, the library fault injector. LFI performs a static analysis on library binaries that are loaded by a program to find all possible errors that can be signaled by library functions. By using static analysis on the actual library implementation, LFI is independent of the completeness or accuracy of the library's API specification.
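
The following C fragment (illustrative only, not from the lab materials) shows the kind of easily missed corner case that library-level fault injection exposes: the obvious fopen() failure is handled, but a short or failing fread() goes unchecked, so an injected read error would silently yield truncated data.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("config.txt", "rb");
    if (!f) {                          /* the obvious error case is handled */
        perror("fopen");
        return EXIT_FAILURE;
    }

    char buf[256];
    size_t n = fread(buf, 1, sizeof(buf) - 1, f);
    buf[n] = '\0';
    /* Missing check: if fread() fails, n may be 0 or short; ferror(f) would
     * report it, but this code proceeds with possibly truncated data. */
    printf("read %zu bytes\n", n);

    fclose(f);
    return 0;
}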

Objectives The goal of this project is to integrate LFI with GRINDER, a test automation framework developed at our group. GRINDER has been designed with the goal of being a generic testing tool for fault injection experiments, such as those performed by LFI. We have successfully used GRINDER for robustness testing the Android OS kernel and the AUTOSAR automotive OS. Porting LFI to GRINDER will enhance the coverage of test types that GRINDER supports and help us improve GRINDER's architecture and implementation.

Prerequisites GRINDER is written in Java, but the interface classes for the different target systems often require glue code to be written in other languages, such as Python, C, or shell scripts. Basic knowledge of databases is beneficial, as GRINDER uses a MySQL DB for managing test data, but not mandatory. LFI is implemented in C++. Theoretically, that code should not require changes, but you never know…

Contacts Stefan Winter [email protected] +49 6151 16 25226 S2|02 E221


Mining POSIX Specifications For Better OS Testing

Background Any software test case has two important components: test inputs (stimuli) and a check for the expected output (oracle). While the possible test inputs can be derived from the interface implementation of the software under test, the oracle is much more difficult: how the software is supposed to react in response to the stimuli is defined by the software's specification, and this is often ambiguous, incomplete, and written in natural language, which makes deriving oracles a tedious manual task. We have developed a robustness testing tool for the Linux Standard Base (LSB), which is the Linux-specific extension of the POSIX API, in other words: the interface for programmers to the Linux OS. We circumvent the oracle problem for our tests by testing different versions of Linux against each other. If the LSB specification for the different versions is the same, but the test output differs, one of the versions must be incorrect.
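
As a small illustration of a specification-derived oracle (this test is not part of our tool's test suite), the C program below uses a stimulus whose expected outcome is stated directly in the POSIX specification: close() on a descriptor that is not open shall return -1 with errno set to EBADF.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    errno = 0;
    int ret = close(-1);              /* stimulus: invalid file descriptor */

    /* oracle derived from the specification instead of a second Linux version */
    if (ret == -1 && errno == EBADF) {
        printf("PASS: close(-1) returned -1 with errno == EBADF\n");
        return 0;
    }
    printf("FAIL: ret=%d errno=%d\n", ret, errno);
    return 1;
}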

Objectives Our strategy to cope with the oracle problem has a drawback: the test effort is doubled. Moreover, if two versions have the same bug, our oracle is not able to detect it. Luckily, the Open Group (the standards committee for POSIX, on which the LSB is based) has published the POSIX specification in a structured HTML format online. The goal of this project is to automatically extract oracle information from the HTML POSIX specifications. For this and similar purposes, a wide variety of tools and libraries exist that can be leveraged. Ideally, the test specification format that we currently use is extended by the obtained oracle information, and the impact of using this information is validated.

Prerequisites Any prior exposure to web mining is beneficial. Basic knowledge of XML benefits the integration with the existing test specification format. For running the tests, basic Linux system usage and configuration is required. It is essential to run the tests in a virtual environment so as not to threaten the host system! Python (2) and MySQL knowledge is beneficial in case the test tool needs any tweaks.

Contacts Stefan Winter [email protected] +49 6151 16 25226 S2|02 E221