TRANSCRIPT · 2018-10-30
V1.1 | 2018-03-05
Detecting and Addressing Cybersecurity Issues
Putting the dynamic into software security testing
Slide 2: Code ahead! (Automotive Software Security Testing)
Slide 3: Automated vulnerability detection and triage
Slide 4: How did we get here?
- Vector was engaged with a large US Tier 1, and we were addressing "software quality"
- They acknowledged they had software quality issues
- The concern was how these quality issues could affect security
- Project goals morphed into low-hanging "security fruit" (for both the customer and the attacker)!
- Our goal was more along the lines of "robustness"!
Slide 5: VectorCAST test automation platform
Slide 6: Vector testing solution
[V-model diagram: Software Implementation → Software Unit Test → Software Integration Test → System Integration Test → System Validation, with a link to requirements, white-box testing on host / on target, and VectorCAST Analytics spanning the levels. Tooling per level: VectorCAST/C++ (software unit test); VectorCAST/C++ and /QA (SW integration testing + code coverage on PC); CANoe, vTESTstudio, vVIRTUALtarget (system integration test); VectorCAST/QA with CANoe, vTESTstudio, VT System (system validation + code coverage on ECU); plus Change-Based Testing.]
Benefits:
- Full support in the development process, from software unit test to system validation
- Uniform test management, test automation (CI), result analysis and traceability
Slide 7: Vulnerability detection via dynamic analysis
The approach
- Automatically interrogate the code and identify possible weaknesses (a la static analysis)
- Once a potential CWE is found, generate a test exploiting the identified issue and execute it (dynamic execution)
- After execution, analyse the execution trace and decide whether the potential CWE is a genuine threat
The idea
- Identify and automatically test for undiagnosed security vulnerabilities
- Utilises MITRE's classification of CWEs (Common Weakness Enumeration)
- Once an instance of a generic CWE is found in the software, that issue is then classed as a CVE (Common Vulnerabilities and Exposures)
Pipeline: Code → CWEs → Tests → Execution → Analysis → CVEs
Slide 8: Vulnerability detection via dynamic analysis
Weaknesses identified
- Via analysis of open-source projects, a number of API-usage-related issues have been identified
- A large US automotive Tier 1 has used it to find security-specific reuse issues on their software platform
- Able to automatically find issues such as NULL pointer dereference (CWE-476), classic buffer overflow (CWE-120) and improper resource shutdown/release (CWE-404)
The benefits
- Unlike static analysis, this method only flags an issue when we can generate an exploit, eliminating the false positives that plague static analysis
- The generated test artefacts can be re-executed later, to demonstrate the mitigation of a potential issue after software re-design
- Can be used for both on-host and on-target execution (think security validation for embedded systems)
Automated Validation

Slide 9: Two technical approaches
Mutational (test-suite) fuzz testing
- Take an existing test-suite
- Modify the values to be "randomly" erroneous
- Run it with coverage
- Does it crash? If yes: potential weakness!
Directed ("intelligent") security testing
- Identify an expression of interest
  > E.g., pointer dereference, divide by zero
- Generate a test reaching that line with erroneous values
- Run it with coverage
- Does it crash? If yes: potential weakness!
Slide 10: Example from lighttpd
- Not detected: CppCheck, Facebook's Infer, Uno
- Possible error: Lint, CodeHawk
- Programmatic error detected (SIGSEGV): VectorCAST

int buffer_copy_string_buffer(buffer *b, const buffer *src)
{
    if (!src) return -1;        /* src is checked for NULL... */
    if (src->used == 0)
    {
        b->used = 0;            /* ...but b is dereferenced unchecked */
        return 0;
    }
    return buffer_copy_string_len(b, src->ptr, src->used - 1);
}
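For illustration, a hardened variant of the function above would reject a NULL `b` as well as a NULL `src`. The `buffer` type and the `buffer_copy_string_len` stub below are simplified stand-ins for this sketch, not lighttpd's real definitions.

```c
#include <stddef.h>
#include <string.h>

/* Minimal stand-ins for the lighttpd types, for illustration only. */
typedef struct { char *ptr; size_t used; } buffer;

static int buffer_copy_string_len(buffer *b, const char *s, size_t len) {
    memcpy(b->ptr, s, len);     /* simplified stub: ignores capacity */
    b->used = len + 1;
    return 0;
}

int buffer_copy_string_buffer_safe(buffer *b, const buffer *src)
{
    if (!b || !src) return -1;  /* the original only rejects NULL src */
    if (src->used == 0)
    {
        b->used = 0;
        return 0;
    }
    return buffer_copy_string_len(b, src->ptr, src->used - 1);
}
```

With this guard in place, the generated exploit test (b = NULL) returns an error code instead of raising SIGSEGV, and re-running that stored test demonstrates the mitigation.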
Slide 11: Security weaknesses of interest
The approach focuses on automatically generating tests for a number of MITRE's classifications of vulnerabilities.
At the highest level, we look to address the general banner CWE-398 ("indication of poor code quality").
Some examples of issues we aim to detect:
- "Hard" errors
  > Use of a NULL pointer (CWE-476)
  > Buffer {under,over}flow (stack corruption) (CWE-124)
  > Divide by zero (CWE-369)
- Mismatched calls: malloc/free, fopen/fclose, pthread_mutex_lock/pthread_mutex_unlock (CWE-401/404/413/415/590)
- Bad arguments: memcpy (CWE-120/130)
- Unchecked return: malloc (CWE-252/690)
Slide 12: Technical Approach
Slide 13: Automated (mutational) fuzz testing for unit testing
Existing test-case:
> TEST.VALUE:buffer.buffer_copy_string_buffer.src:<<malloc 1>>
> TEST.VALUE:buffer.buffer_copy_string_buffer.src[0].used:0
> TEST.VALUE:buffer.buffer_copy_string_buffer.b:<<malloc 1>>
Manipulate the values:
> TEST.VALUE:buffer.buffer_copy_string_buffer.src:<<malloc 1>>
> TEST.VALUE:buffer.buffer_copy_string_buffer.src[0].used:0
> TEST.VALUE:buffer.buffer_copy_string_buffer.b:<<null>>
Execute!
Slide 14: From software to mathematics
- Finding a weakness is solving an equation: replace "x" with "code" and "0" with "null pointer dereference"
Slide 15: Directed test-case generation for weaknesses
We combine in-depth static analysis with "constraint solving" to identify more complex weaknesses:
> param_2->x += 3;
> param_3->y += 2;
> return param_1->z / (param_2->x - param_3->y);
Fuzz testing has to "get lucky" here, but using test-case generation we can directly generate a test such that, in terms of the input values:
> (param_2->x + 3) − (param_3->y + 2) ≡ 0
This constraint gets fed to a "black box" oracle that can provide the answer!
Slide 16: Real World Examples
Slide 17: Real examples found (divide by zero) – automotive
There are a number of paths through the code where Thou_MPH is left unassigned (so reading it is undefined behaviour) or is assigned zero.
What is surprising is that Thou_MPH is checked against 0 and then used in a divide at the same scope level:
- No corrective action is taken, even though the corrective condition is already detected!

extern VEHICLE_T Vehicle;
void check_speed(uint8_t speed_ThouMPH)
{
    uint32_t temp, Thou_MPH, Thou_RPM;   /* none initialised */
    if (Vehicle.WHEELSPEED < 150) {}
    else
    {
        if (speed_ThouMPH > 1000) {}     /* never true for a uint8_t */
        else
        {
            Thou_MPH = 0;
        }
    }
    if (Thou_MPH == 0)
    {
        // no change to Thou_MPH!
    }
    else {}
    temp = (100 * Thou_RPM) / Thou_MPH;  /* divides by zero (or garbage) */
}
Slide 18: Real examples found (NULL pointer) – medical
A lot of the code provided in this project was extremely judicious in checking all parameters.
- Their style of coding made this crash stand out, as ptrTaskData is never checked for NULL!

STATUS process_lamp_event(LAMP_PATTERN_t patternId,
                          LAMP_TASK_DATA_t *ptrTaskData)
{
    DRV_RET_CODE_t drvRetCode = DRV_RC_ERROR;
    STATUS retCode = OK;
    LAMP_PATTERN_t tmpPatternId = patternId;
    if (((LAMP_FAST_BLINKING == ptrTaskData->previousPatternId)  /* first unchecked dereference */
         || (LAMP_SLOW_BLINKING == ptrTaskData->previousPatternId))
        && (LAMP_PATTERN_NONE == tmpPatternId))
    {
        tmpPatternId = ptrTaskData->previousPatternId;
    }
    ptrTaskData->counter += ptrTaskData->timeout_ms;
}
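A sketch of the missing guard, using simplified stand-ins for the project's types (the real enums and struct fields are not public, so the definitions below are illustrative only):

```c
#include <stddef.h>

/* Minimal stand-ins for the project's types, for illustration only. */
typedef enum { OK, ERROR } STATUS;
typedef enum {
    LAMP_PATTERN_NONE, LAMP_SLOW_BLINKING, LAMP_FAST_BLINKING
} LAMP_PATTERN_t;
typedef struct {
    LAMP_PATTERN_t previousPatternId;
    unsigned counter, timeout_ms;
} LAMP_TASK_DATA_t;

/* The one check the original omits: reject NULL before the first
 * ptrTaskData dereference, matching the project's defensive style. */
STATUS process_lamp_event_safe(LAMP_PATTERN_t patternId,
                               LAMP_TASK_DATA_t *ptrTaskData)
{
    if (ptrTaskData == NULL)
        return ERROR;           /* the original never checks this */
    LAMP_PATTERN_t tmpPatternId = patternId;
    if ((ptrTaskData->previousPatternId == LAMP_FAST_BLINKING
         || ptrTaskData->previousPatternId == LAMP_SLOW_BLINKING)
        && tmpPatternId == LAMP_PATTERN_NONE)
    {
        tmpPatternId = ptrTaskData->previousPatternId;
    }
    (void)tmpPatternId;         /* pattern handling elided in the slide */
    ptrTaskData->counter += ptrTaskData->timeout_ms;
    return OK;
}
```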
Slide 19: "Actionable Intelligence"
Slide 20: Software metrics (Actionable Intelligence)
An approach to quickly ascertaining Chess' "Morningstar for Software Security" rating:
- ☆☆ – "absence of obvious reliability issues"
- This is similar to CWE-398 for poor code quality
The easy ones
- Defect density
  > Defects/SLoC
- Lines free from obvious issues (via code coverage)
  > Confidence of "defect freedom" (but not guaranteed!)
- Ratio of security tests free of defects
  > Higher ratio => more secure
Slide 21: Open Source analysis (Actionable Intelligence)

Metric                              | LIGHTTPD | ZLIB  | LIBXML2
------------------------------------|----------|-------|--------
Version                             | 1.4.20   | 1.2.8 | 2.9.4
# files                             | 89       | 16    | 84
SLoC⁶                               | 36,605   | 6,726 | 184,179
Unique # issues                     | 709      | 113   | 2,926
Defect density (defects/line)       | 1/52     | 1/60  | 1/63
Avg. # of tests per defect          | 11       | 7     | 12
Tests hitting defects               | 69%      | 28%   | 40%
Functions with defects              | 44%      | 44%   | 29%
Functions with vg ≥ 20 and defects⁷ | 51%      | 55%   | 66%

⁶ measured with cloc
⁷ Jones'08: "[complexity] levels greater than 20 are considered hazardous"
Slide 22: Take home
Process:
- Identify portfolio
- Assess vulnerabilities
- Manage risk
Some of the issues we find you might consider "non-issues", or they are mitigated as part of your software architecture.
- That's great…
- …but be wary about software re-use across projects!
Mainly: there is no "one size fits all" solution – use multiple tools!
- Dynamic execution can find certain vulnerabilities more definitively
- Always consider the DP-E ratio (damage potential vs. effort)
Slide 23
© 2018 Vector Informatik GmbH. All rights reserved. Any distribution or copying is subject to prior written approval by Vector. V1.1 | 2018-03-05
For more information about Vector and our products please visit www.vector.com