
CSCE 548 Secure Software Development

Test 1 - Review

CSCE 548 - Farkas 2

Reading

Test 1:
– McGraw: Software Security: Chapters 1-9
– Software reliability: John C. Knight, Nancy G. Leveson, "An Experimental Evaluation of the Assumption of Independence in Multi-Version Programming," http://citeseer.ist.psu.edu/knight86experimental.html
– B. Littlewood, P. Popov, L. Strigini, "Modelling software design diversity - a review," ACM Computing Surveys, Vol. 33, No. 2, June 2001, pp. 177-208, http://portal.acm.org/citation.cfm?doid=384192.384195

CSCE 548 - Farkas 3

Test 1

Closed book
From the reading materials on the previous slide
1 hour and 15 minutes
No multiple choice questions

CSCE 548 - Farkas 4

Software Security

NOT security software!
Engineering software so that it continues to function correctly under malicious attack
– Functional requirements
– Non-functional requirements (e.g., security)

CSCE 548 - Farkas 5

Why Software?

Increased complexity of software products
Increased connectivity
Increased extensibility

Increased risk of security violations!

CSCE 548 - Farkas 6

Security Problems

Defects: implementation and design vulnerabilities
Bug: an implementation-level vulnerability (low-level or mid-level)
– Static analysis tools
Flaw: a subtle, not-so-easy-to-detect problem
– Manual analysis
– Automated tools (for some, but not design-level, flaws)
Risk: probability x impact
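The risk formula is simple enough to compute directly. A minimal sketch in Python (the threat names and numbers below are invented for illustration):

    # Risk scoring sketch: risk = probability x impact.
    # All entries are hypothetical.
    threats = [
        ("SQL injection in login form", 0.30, 9.0),  # (name, probability, impact)
        ("Weak session tokens",         0.10, 7.0),
        ("Verbose error messages",      0.60, 2.0),
    ]

    # Rank threats by risk score, highest first.
    for name, prob, impact in sorted(threats, key=lambda t: t[1] * t[2], reverse=True):
        print(f"{name:30s} risk = {prob * impact:.2f}")

Ranking by this score is what lets the later touchpoints spend effort on the highest risks first.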

CSCE 548 - Farkas 7

Application vs. Software Security

Application security usually refers to security after the software is built
– Adding more code does not make faulty software correct
– Sandboxing
– Network-centric approach
Application security testing: a badness-ometer

[Badness-ometer scale from the slide, ranging from "Deep Trouble" to "Who Knows"]

CSCE 548 - Farkas 8

Three Pillars of Software Security

Risk Management
Software Security Touchpoints
Knowledge

CSCE 548 - Farkas 9

Risk Management

CSCE 548 - Farkas 10

Risk Assessment

[Diagram: risk arises from the combination of threats, vulnerabilities, and consequences]

CSCE 548 - Farkas 11

Risk Management Framework (Business Context)

1. Understand business context
2. Identify business and technical risks
3. Synthesize and rank risks
4. Define risk mitigation strategy
5. Carry out fixes and validate
Measurement and reporting (across all stages)

CSCE 548 - Farkas 12

Assets-Threat Model (1)

Threats compromise assets
Threats have a probability of occurrence and a severity of effect
Assets have values
Assets are vulnerable to threats

[Diagram: threats acting on assets]

CSCE 548 - Farkas 13

Risk Acceptance

Certification: how well the system meets the security requirements (technical)

Accreditation: management's approval of an automated system (administrative)

CSCE 548 - Farkas 14

Building It Secure

1960s: US Department of Defense (DoD) recognized the risk posed by unsecured information systems
1970s:
– 1977: DoD Computer Security Initiative
– US Government and private concerns
– National Bureau of Standards (NBS, now NIST): responsible for standards for the acquisition and use of federal computing systems, including the Federal Information Processing Standards (FIPS PUBs)

CSCE 548 - Farkas 15

Software Security Touchpoints

CSCE 548 - Farkas 16

Application of Touchpoints

[Diagram: touchpoints applied to software artifacts - requirements and use cases, architecture and design, test plans, code, tests and test results, and feedback from the field]

1. Code review (tools)
2. Risk analysis (at both the requirements and the design stage)
3. Penetration testing
4. Risk-based security tests
5. Abuse cases
6. Security requirements
7. Security operations
External review accompanies the touchpoints

CSCE 548 - Farkas 17

When to Apply Security?

Economic consideration: earlier is better
Effectiveness of touchpoints depends on:
– Economics
– Which software artifacts are available
– Which tools are available
– Cultural changes
Bad: a reactive strategy; needed: secure development
See the slides of Kromholz: Assurance - A Case for the V-Model, https://syst.eui.upm.es/conference/sv03/papers/V-Chart%20200309Kromholz08.ppt

CSCE 548 - Farkas 18

Best Practices

The earlier the better
Change the "operational" view to secure software
Best practices: expounded by experts and adopted by practitioners

CSCE 548 - Farkas 19

Who Should Care?

Developers
Architects
Other builders
Operations people

Do not start with security people. Start with software people.

CSCE 548 - Farkas 20

Architectural Risk Analysis

CSCE 548 - Farkas 21

Design Flaws

50% of security problems
Need: explicit identification of risk
Quantify impact: tie technology issues and concerns to the business
Continuous risk management

CSCE 548 - Farkas 22

Security Risk Analysis

Risk analysis: identifying and ranking risks
Risk management: performing risk analysis exercises, tracking risks, mitigating risks
Need: an understanding of business impact

CSCE 548 - Farkas 23

Security Risk Analysis

Learn about the target of analysis
Discuss security issues
Determine probability of compromise
Perform impact analysis
Rank risks
Develop mitigation strategy
Report findings

CSCE 548 - Farkas 24

Knowledge Requirements

Three basic steps:
– Attack resistance analysis: attack patterns and exploit graphs
– Ambiguity analysis: knowledge of design principles
– Weakness analysis: knowledge of security issues

Forest-level view: what does the software do?
– Critical components and the interactions between them
– Identify risks related to flaws

CSCE 548 - Farkas 25

Modern Risk Analysis

Address risk as early as possible, starting at the requirements level
Impact:
– Legal and/or regulatory risk
– Financial or commercial considerations
– Contractual considerations
Requirements: "must-haves," "important-to-haves," and "nice-but-unnecessary-to-haves"

CSCE 548 - Farkas 26

Attack Resistance Analysis

Uses information about known attacks, attack patterns, and vulnerabilities (known problems):
– Identify general flaws using the secure design literature and checklists
– Map attack patterns based on abuse cases and known attack patterns
– Identify risks in the architecture using checklists
– Understand and demonstrate the viability of known attacks

CSCE 548 - Farkas 27

Ambiguity Analysis

Discovers new risks
Parallel activities of team members unify understanding:
– Private lists of possible flaws
– Team members describe together how the system works
Needs a team of experienced analysts

CSCE 548 - Farkas 28

Weakness Analysis

Understanding the impact of external software dependencies:
– Middleware
– Outside libraries
– Distributed code
– Services
– Physical environment
– Etc.

CSCE 548 - Farkas 29

Use Cases and Misuse Cases

CSCE 548 - Farkas 30

SecureUML

Model-driven software development integrated with security

Advantages:
– Security is integrated during software design, at a high level of abstraction
– Modeling information can be used to detect design errors and verify correctness

Limitation: precise semantics of the modeling language are needed for security assurance

CSCE 548 - Farkas 31

SecureUML

Defines a vocabulary for annotating UML-based models with access control information
Metamodel: the abstract syntax of the language
UML profile: notation to enhance UML class models
Host language: another modeling language that uses SecureUML
SecureUML dialect: SecureUML specifications refined in the host language
– E.g., syntactic elements of the modeling language are transformed into constructs of the target platform
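SecureUML annotates models rather than code, but the flavor of attaching roles and permissions to design elements can be suggested in ordinary code. A hypothetical Python analogy (all names invented; this is not the SecureUML notation itself):

    from dataclasses import dataclass, field

    @dataclass
    class Principal:
        name: str
        roles: set = field(default_factory=set)

    # Attach access control metadata to an operation, loosely the way a
    # SecureUML-style annotation attaches it to a model element.
    def requires_role(role):
        def wrap(fn):
            def checked(principal, *args, **kwargs):
                if role not in principal.roles:
                    raise PermissionError(f"{principal.name} lacks role {role!r}")
                return fn(principal, *args, **kwargs)
            return checked
        return wrap

    @requires_role("accountant")
    def post_invoice(principal, amount):
        return f"invoice for {amount} posted by {principal.name}"

    print(post_invoice(Principal("alice", {"accountant"}), 100))

In SecureUML proper, such constraints live in the model and are transformed into target-platform constructs, as the slide notes.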

CSCE 548 - Farkas 32

Misuse Cases

Software development: making software do something
– Describe features and functions
– Everything goes right
Need: security, performance, reliability
– Service level agreement: legally binding
How do we model non-normative behavior in use cases?
– Think like a bad guy
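One way to make "think like a bad guy" concrete is to write the misuse down as a test that the attack must fail. A minimal sketch (the schema and function are hypothetical):

    import sqlite3
    import unittest

    def find_user(conn, username):
        # Parameterized query: attacker input is treated as data, not SQL.
        cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
        return cur.fetchall()

    class MisuseCase(unittest.TestCase):
        def setUp(self):
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE users (name TEXT)")
            self.conn.execute("INSERT INTO users VALUES ('alice')")

        def test_injection_attempt_returns_nothing(self):
            # The misuse case: a classic SQL injection probe must match no rows.
            self.assertEqual(find_user(self.conn, "' OR '1'='1"), [])

    if __name__ == "__main__":
        unittest.main()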

CSCE 548 - Farkas 33

Penetration Testing

CSCE 548 - Farkas 34

CSCE 548 - Farkas 35

Security Testing

Look for unexpected but intentional misuse of the system
Must test for all potential misuse types using:
– Architectural risk analysis results
– Abuse cases
Verify that:
– All intended security features work (white hat)
– Intentional attacks cannot compromise the system (black hat)

CSCE 548 - Farkas 36

Penetration Testing

Testing for a negative: what must not exist in the system
Difficult: how do you prove non-existence?
If penetration testing does not find errors, then:
– We can conclude only that under the given circumstances no security faults occurred
– There is little assurance that the application is immune to attacks
A feel-good exercise

CSCE 548 - Farkas 37

Penetration Testing Today

Often performed
Applied to finished products
Outside-in approach
Late-SDLC activity
Limitation: too little, too late

CSCE 548 - Farkas 38

Late-Lifecycle Testing

Limitations:
– Too late to discover design and coding errors cheaply
– Higher cost than earlier, design-level detection
– Options to remedy discovered flaws are constrained by both time and budget

Advantage: evaluates the system in its final operating environment

CSCE 548 - Farkas 39

Success of Penetration Testing

Depends on the skill, knowledge, and experience of the tester
Important: interpretation of results
Disadvantages of penetration testing:
– Often used as an excuse to declare victory and go home
– Everyone looks good after negative testing results

CSCE 548 - Farkas 40

Risk-Based Security Testing

CSCE 548 - Farkas 41

Quality Assurance

External quality: correctness, reliability, usability, integrity

Interior (engineering) quality: efficiency, testability, documentation, structure

Future (adaptability) quality: flexibility, reusability, maintainability

CSCE 548 - Farkas 42

Correctness Testing

Black box:
– Test data are derived from the specified functional requirements, without regard to the final program structure
– Also called data-driven, input/output-driven, or requirements-based testing
– Functional testing
– No implementation details of the code are considered
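As an illustration, a black-box case is derived purely from a stated requirement. Assume a hypothetical requirement that "a valid PIN is exactly four digits":

    import unittest

    def is_valid_pin(pin):  # implementation under test (hypothetical)
        return len(pin) == 4 and pin.isdigit()

    class BlackBoxPinTests(unittest.TestCase):
        # Cases come from the requirement alone: valid and invalid
        # equivalence classes plus their boundaries. Nothing here
        # depends on how is_valid_pin is written.
        def test_requirement_classes(self):
            self.assertTrue(is_valid_pin("0042"))
            self.assertFalse(is_valid_pin("123"))    # too short
            self.assertFalse(is_valid_pin("12345"))  # too long
            self.assertFalse(is_valid_pin("12a4"))   # non-digit

    if __name__ == "__main__":
        unittest.main()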

CSCE 548 - Farkas 43

Correctness Testing

White box:
– The software under test is visible to the tester
– Testing plans are based on the details of the software implementation
– Test cases are derived from the program structure
– Also called glass-box testing, logic-driven testing, or design-based testing
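A white-box case, by contrast, is chosen by reading the implementation. In this hypothetical sketch, inspection reveals a branch boundary that requirements-only testing could easily miss:

    def bulk_discount(quantity):
        # Reading the code shows the branch boundary sits at 100, so a
        # white-box plan targets 99, 100, and 101 to cover both sides.
        if quantity > 100:
            return 0.10
        return 0.0

    assert bulk_discount(99) == 0.0
    assert bulk_discount(100) == 0.0   # boundary: still no discount
    assert bulk_discount(101) == 0.10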

CSCE 548 - Farkas 44

Performance Testing

Goal: bottleneck identification, performance comparison and evaluation, etc.
Explicit or implicit requirements
"Performance bugs": design problems
Test: usage, throughput, stimulus-response time, queue lengths, etc.
Resources to be tested: network bandwidth requirements, CPU cycles, disk space, disk access operations, memory usage, etc.
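A stimulus-response-time and throughput measurement can be sketched in a few lines (the workload is a stand-in for whatever operation is actually under test):

    import time

    def operation():  # hypothetical workload under test
        sum(i * i for i in range(10_000))

    runs = 1_000
    start = time.perf_counter()
    for _ in range(runs):
        operation()
    elapsed = time.perf_counter() - start

    # Throughput and mean response time over the measured runs.
    print(f"throughput:   {runs / elapsed:,.0f} ops/s")
    print(f"mean latency: {elapsed / runs * 1e6:.1f} us")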

CSCE 548 - Farkas 45

Reliability Testing

Probability of failure-free operation of a system
Dependable software does not fail in unexpected or catastrophic ways
Difficult to test (see the later lecture on software reliability)
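Part of the difficulty is statistical: even many failure-free runs only bound the failure probability. A sketch of a test-based estimate (the system and input profile are hypothetical; 3/n is the standard "rule of three" approximation for a 95% confidence limit):

    import random

    def system_under_test(x):  # hypothetical; treat as a black box
        return x >= 0

    n, failures = 10_000, 0
    for _ in range(n):
        x = random.uniform(0, 1)  # draw from the operational input profile
        if not system_under_test(x):
            failures += 1

    if failures == 0:
        # Zero failures in n runs still only bounds the failure rate:
        # roughly p < 3/n at ~95% confidence (the "rule of three").
        print(f"no failures seen; p_failure < {3 / n:.4%} (approximate)")
    else:
        print(f"estimated p_failure ~ {failures / n:.4%}")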

CSCE 548 - Farkas 46

Security Testing

Test: finding flaws in software that can be exploited by attackers
Quality, reliability, and security are tightly coupled
Software behavior testing
– Need: a risk-based approach using system architecture information and an attacker model

CSCE 548 - Farkas 47

Risk-Based Testing

Identify risks
Create tests to address the identified risks
Security testing vs. penetration testing:
– Level of approach
– Timing of testing

CSCE 548 - Farkas 48

Security Testing

Can be applied before the product is completed
Different levels of testing (e.g., component/unit level vs. system level)
Testing environment
Detailed

CSCE 548 - Farkas 49

Who Should Perform the Test?

Standard testing organizations:
– Functional testing
Software security professionals:
– Risk-based security testing
– Important: expertise and experience

CSCE 548 - Farkas 50

How to Test?

White box analysis:
– Understanding and analyzing source code and design
– Very effective at finding programming errors
– Can be supported by automated static analyzers
– Disadvantage: high rate of false positives

Black box analysis:
– Analyze a running program
– Probe the program with various inputs, including malicious ones
– No need for any code: the program can even be tested remotely
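The black-box "probe with malicious input" idea is essentially fuzzing. A minimal sketch against a hypothetical parser:

    import random
    import string

    def parse_record(text):  # hypothetical program under test
        key, value = text.split("=", 1)
        return {key.strip(): int(value)}

    random.seed(1)
    for _ in range(1_000):
        # Random, hostile-ish input generated with no knowledge of the code.
        probe = "".join(random.choice(string.printable) for _ in range(12))
        try:
            parse_record(probe)
        except ValueError:
            pass  # cleanly rejected input: acceptable behavior
        except Exception as exc:  # any other exception is a finding
            print(f"unexpected {type(exc).__name__} on {probe!r}")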

CSCE 548 - Farkas 51

Security Operations

CSCE 548 - Farkas 52

Don't Stand So Close to Me

Best practices:
– A manageable number of simple activities
– Should be applied throughout the software development process

Problem:
– Software developers lack security domain knowledge: limited to functional security
– Information security professionals lack an understanding of software: limited to reactive security techniques

CSCE 548 - Farkas 53

Deployment and Operations

Configuration and customization of the software application's deployment environment
Activities:
– Network-component level
– Operating-system level
– Application level

CSCE 548 - Farkas 54

SANS: Secure Programming Skills Assessment

Aims to improve secure programming skills and knowledge:
– Allows employers to rate their programmers
– Allows buyers of software and systems vendors to measure the skills of developers
– Allows programmers to identify gaps in their secure programming knowledge
– Allows employers to evaluate job candidates and potential consultants
– Provides an incentive for universities to include secure coding in their curricula

CSCE 548 - Farkas 55

Independence in Multiversion Programming

CSCE 548 - Farkas 56

Multi-Version Programming

N-version programming
Goal: increase fault tolerance
Separate, independent development of multiple versions of a piece of software
Versions are executed in parallel:
– Identical input, identical output?
– Majority vote
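The execution model is straightforward to sketch: run every version on the same input and take a majority vote. A minimal sketch with three hypothetical versions:

    from collections import Counter

    # Three "independently developed" versions of the same function (hypothetical).
    def v1(x): return x * x
    def v2(x): return x ** 2
    def v3(x): return 0 if x == 0 else x * x

    def n_version_run(versions, x):
        outputs = [v(x) for v in versions]  # identical input to all versions
        winner, votes = Counter(outputs).most_common(1)[0]
        if votes <= len(versions) // 2:
            raise RuntimeError("no majority agreement")
        return winner

    print(n_version_run([v1, v2, v3], 7))  # -> 49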

CSCE 548 - Farkas 57

Separate Development

At which point of software development?
– A common form of the system requirements document
– Voting on intermediate data
Ramamoorthy et al.:
– Independent specifications in a formal specification language
– Mathematical techniques to compare specifications
Kelly and Avizienis:
– Separate specifications written by the same person
– Three different specification languages

CSCE 548 - Farkas 58

Difficulties

How to isolate versions
How to design voting algorithms

CSCE 548 - Farkas 59

Advantages of N-Versioning

Improved reliability
Assumption: N different versions will fail independently
Expected outcome: the probability of two or more versions failing on the same input is small

If the assumption holds, the reliability of the system can be higher than the reliability of the individual components
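Under the independence assumption the arithmetic is striking. A sketch for a 2-out-of-3 majority system (the per-version failure probability is made up):

    # If each version fails independently with probability p, the majority
    # system fails only when at least two versions fail on the same input.
    p = 0.01
    p_majority_fails = 3 * p**2 * (1 - p) + p**3

    print(f"single version fails:  {p:.4%}")                 # 1.0000%
    print(f"2-of-3 majority fails: {p_majority_fails:.4%}")  # ~0.0298%

Knight and Leveson's experiment is precisely about whether real versions fail this independently.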

CSCE 548 - Farkas 60

Is the assumption TRUE?

CSCE 548 - Farkas 61

False?

People tend to make the same mistakes
Common design faults
Common failure mode analysis:
– Mechanical systems
– Software systems

CSCE 548 - Farkas 62

How to Achieve Reliability?

Need independence:
– Even small probabilities of coincident errors cause a substantial reduction in reliability
– Otherwise we overestimate reliability

Crucial systems:
– Aircraft
– Nuclear reactors
– Railways

CSCE 548 - Farkas 63

Testing of Critical Software Systems

Dual programming:
– Produce two versions of the software
– Execute them on a large number of test cases
– Output is assumed to be correct if both versions agree
– No manual or independent evaluation of correct output (expensive to do)
– Assumption: it is unlikely that the two versions contain identical faults over a large number of test cases
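Dual programming amounts to differential (back-to-back) testing. A minimal sketch with two hypothetical, separately written implementations:

    import random

    def sort_a(xs):  # version A (hypothetical)
        return sorted(xs)

    def sort_b(xs):  # version B, written separately (hypothetical)
        out = list(xs)
        for i in range(len(out)):
            for j in range(i + 1, len(out)):
                if out[j] < out[i]:
                    out[i], out[j] = out[j], out[i]
        return out

    random.seed(0)
    for _ in range(10_000):
        case = [random.randint(-50, 50) for _ in range(random.randint(0, 8))]
        # Agreement is taken as correctness; disagreement flags a fault in
        # at least one version, with no independent oracle needed.
        assert sort_a(case) == sort_b(case), f"versions disagree on {case}"
    print("all test cases agreed")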

CSCE 548 - Farkas 64

Voting

Individual software versions may have low reliability
Run multiple versions and vote on the "correct" answer
Additional cost: the voting process

CSCE 548 - Farkas 65

Common Assumption:

Low probability of common mode failures

(identical, incorrect output generated from the same input)

CSCE 548 - Farkas 66

Independence

Assumed but not tested:
– Two versions were assumed to be correct if their outputs agreed on the test cases
Tested for common errors, but not for independence:
– Kelly and Avizienis: 21 related faults and 1 common fault (nuclear reactor project)
– Taylor: common faults in European practical systems
Need: evaluation/testing of independence
