Metricon 9: Seeing the Elephant - Real-world secure engineering lifecycle programs


TRANSCRIPT


    Slide 1

    Seeing the Elephant

    Using collected data points to design and roll out software security initiatives

    Geoffrey Hill

    Artis Secure Ltd.

    Feb 28, 2014

     

Seeing the Elephant can mean several things… the reason I picked it: 

    •  Not seeing the need for better tracking via metrics “in the security room” 

    •  In ancient times, if an unprepared army saw an elephant, it ran! The same goes for many people facing operations and metric collection 

    •  Story of blind people describing an elephant… each has a different story (or metrics) 


    Slide 2

    How I plan to run my talk

• First, I’m going to tell you who I am

    • Then I’m going to give a quick background on the Software Security Initiative (SSI) frameworks I used

    • Then… I’m going to talk about what I tried

    • . . . What failed (and how I had to fix it)

    • . . . And what worked

    And I’ll finish with a few questions…

     


    Slide 3

    A bit of a background

    Cigital Artis-Secure

     

I started as a C/C++ developer in the 90s. This morphed into a senior engineer/architect role in the 2000s. I joined the Microsoft Consulting org in 2002 and started making the move to secure development work by 2003. In 2005 I learned about the MS-SDL and started using it in my projects. I had to change quite a few activities to enable the SDL to work in a heterogeneous Agile work environment. By 2009 I was doing governance on around 100 worldwide projects and had integrated the core SDL into the Services Application Lifecycle Management process. 

    I joined Cigital in 2011 and was put in charge of creating an SSI at two major clients. I will refer to these two in my examples later on. 

    My overall goal since 2009 has been to build metrics into the process of building up an SSI. I’ve developed a methodology based on my experiences at clients and on my use of several frameworks/standards/processes (incl. MSFT SDL, BSIMM, CLASP, the ISO 27001 series, SAMM, CMM(I)). 


    Slide 4

    Current Software Security Initiatives


     

SSIs fall into two categories: technical lifecycle frameworks and software maturity models. 

    I do not count ISO/IEC 27001:2005 (an extension of a quality system), COBIT Security (regulatory compliance), CRAMM (risk analysis), SABSA (business operational risk-based architectures), or CMMI (Capability Maturity Model Integration) among these. 

I have used the following SSIs: 

    •  CLASP, Touchpoints, MS-SDL – several specific SSIs for secure development  

      •  Advantage – very technical, very descriptive 

      •  Disadvantage – hardly any business-side guidance (governance, risk, compliance, operations, strategic planning, etc.) 

    •  BSIMM, SAMM – maturity models to measure implementation of the ‘Build Security In’ framework; BSIMM was jointly created by Cigital and Fortify 

      •  Advantage – large pool of participants, descriptive of activities *within* the pool, includes some reference to business practices 

      •  Disadvantage – does not record activities falling outside the BSI framework, owned by one organization, very simple metrics, levels are not a “tech tree” with prerequisites, light on business practices 


    Slide 5

    How did I measure these Initiatives?

• Bug bars… But *which* bugs were meaningful to teams?

    • Measuring Threat Models… CIA? STRIDE/DREAD? What else?

      • Non-measurable models.

      • What IS a complete Threat Model?

    • Code review and testing… many competing taxonomies… confusion

      • Varying criticality measures by group (security, dev, ops)

      • Criticality is not a great apples-to-apples measure

    • Maturity Model issues… interviewer bias

      • Measured only presence of activity, not proficiency

      • Out-of-band activities… what to do?

     

The technical models fall short on explaining how to measure the progress of the initiatives, while the maturity models are made for simple measurement. 

    •  MS-SDL, Touchpoints, CLASP – several specific SSIs for secure development 

      •  Bug bars 

        •  MSFT – could use common MS taxonomies, but… 

        •  (all projects) hard to nail down which bugs were meaningful for the project; some security bugs were questioned by dev leads. 

      •  Threat model measurements 

        •  Early MSFT – each TM was different, with no common components to measure off. 

        •  Later MSFT – consistent, but STRIDE and DREAD are not good for measuring. 

        •  Cigital – non-measurable modelling processes. 

        •  How do we measure a ‘complete’ TM? 

        •  We may have a complete TM, but how do we measure inclusion of the proper threats and correct mitigations? 

      •  Number of code review bugs 

        •  Competing taxonomies to use for metrics; which is correct? 

        •  Categorize by criticality, BUT this varies between organizations!  

        •  Criticality is not a great apples-to-apples measure; different bug types get bundled together. 

      •  Number of security testing issues 

        •  Same problems as code review issues. 

    •  SAMM – declarative Software Assurance Maturity Model owned by OWASP 

      •  Simple yes/no measurements of whether an activity is being done, as per the reviewer’s estimate. 


      •  Some of the BSIMM weaknesses exist here; the assessment may show ‘out of band’ activities, which skew the metrics. 

    •  BSIMM – maturity model to measure implementation of the ‘Build Security In’ framework, jointly created by Cigital and Fortify 

      •  Simple yes/no measurements of whether an activity is spotted by the reviewer. 

      •  Some activities get spotted ‘out of band’, which makes for strange, incomplete maturity tree measurements. 

      •  Metrics become weaker evidence because the model only shows presence of an activity, not proficiency. 


    Slide 6

    What about security taxonomies for measuring?

• OWASP Top 10… not stable; could change year on year

    • DREAD… everyone fought over what rank to give each item

    • STRIDE… overlap of some of the elements, confusing measurements

    • Patterns & Practices Security Frame… 9 common areas of dev security confusion

      • Used across the SDLC to measure apples to apples

      • Easy to teach developers

      • A simple metric to use and to classify security requirements, architecture and bugs by

    • CWE and CAPEC… globally accepted; the taxonomies tie to each other and to CVE

     

    •  OWASP Top 10 

    •  Potentially changes each year; not stable enough. 

    •  DREAD 

•  Everyone fought over what numbers to put up on a scale of 1-10 

    •  STRIDE 

    •  Good start but there is overlap 

    •  SECURITY FRAME 

    •  This breaks down top developer issues better 

    •  CWE 

    •  Globally used and frequently maintained 

    •  CAPEC 

    •  Attack taxonomy that ties directly to CWE and CVE  


    Slide 7

    Lessons I learned (the hard way… pain pain pain)

     

    Lessons 

•  It’s very hard to measure anything in the real world when one cannot do an apples-to-apples comparison. 

    •  Many of the processes lacked a common category 

    •  It gets even harder when one mixes the technical models in with the maturity models. 

    •  Inexperience causes failures in nearly 60% of the cases 

    •  How does one measure competency with a given task?!? 

    •  There may be cultural differences 

    •  Unanticipated process slowdowns 


    Slide 8

    … and the Conclusions that I came to…

• Ensured the technical models fed into the maturity models

    • Made a set of buckets common to all SSIs

      • Mapped both maturity models to common IT business activities

    • Balanced Scorecard

    • Measured efficiency of work

      • Security Frame – across the SDLC

      • CAPEC – for Threat Models

    • Modified Black-Scholes options pricing model for the cost of fixing bugs

     

    Conclusions 

    •  Ensured that the technical models are part of the maturity models 

    •  Technical model is sub-part of SSI buckets/scorecard 

    •  Made a common set of SSI “buckets” to enable measurement 

•  SOMC buckets 

    •  Balanced Scorecard 

    •  Measured cost efficiency 

    •  Measuring each step by 3s 

    •  Black & Scholes modified for costs 


    Slide 9

    Common SSI buckets

SSG Organizational growth           | OpenSAMM             | BSIMM

    Strategic Contacts                  | -                    | -
    Ability to Project SSO Vision       | -                    | -
    Business Marketing                  | -                    | -
    Security Anecdotes                  | Education & Guidance | Training
    Performance Incentives              | -                    | -

    SSG activity growth                 |                      |

    Governance, Risk & Compliance (GRC) | Policy & Compliance  | Policy & Compliance
    Auditing                            | Policy & Compliance  | Policy & Compliance
    SSG quality gates (touchpoints)     | VERIFICATION         | SSDL TOUCHPOINTS
    Operations Management               | DEPLOYMENT           | DEPLOYMENT
    Supplier Management                 | -                    | -
    SSO Core Competencies               | CONSTRUCTION         | INTELLIGENCE
    Strategic Planning                  | Strategy & Metrics   | Strategy & Metrics
    Financial Planning                  | Strategy & Metrics   | Strategy & Metrics

    SSG = Software Security Group

     

The two maturity models I worked with have the same origins, but both are lacking in the more business-oriented security processes. I overlaid a number of metrics for checking the security organization's growth.

    --- Progress in growing strategic contacts needed to be captured. 

    --- Internal marketing needed to be captured with extra processes (Projection and Business marketing). 

    --- A maturing security group has the ability to positively influence other groups to follow its guidance. This is captured with Performance Incentives metrics. 

    Under SSG activity growth, I also oriented several other security domains towards their more common non-security processes.

    --- Governance, Risk & Compliance are normally grouped 

    --- Auditing is normally separate 

    --- Operations management normally incorporates deployment activities 

    --- Strategic planning requires its own breakdown 

    --- Financial planning requires its own breakdown for any security group within an organization 

    I added Supplier Management as a necessary domain for security group activity. 


    Slide 10

    Balanced Scorecard

SSG Organizational growth           | Balanced Scorecard

    Strategic Contacts                  | FINANCIAL
    Ability to Project SSO Vision       | CUSTOMERS
    Business Marketing                  | CUSTOMERS
    Security Anecdotes                  | INNOVATION & LEARNING
    Performance Incentives              | CUSTOMERS

    SSG activity growth                 |

    Governance, Risk & Compliance (GRC) | FINANCIAL
    Auditing                            | INTERNAL BUSINESS
    SSO quality gates (touchpoints)     | INTERNAL BUSINESS
    Operations Management               | INTERNAL BUSINESS
    Supplier Management                 | CUSTOMERS
    SSO Core Competencies               | INTERNAL BUSINESS
    Strategic Planning                  | FINANCIAL
    Financial Planning                  | FINANCIAL

    [Strategy diagram: (F) Reduce security costs, SSG finance; (IB) Tech & MM metrics; (IL) Security awareness, SSG data collection; (C) Customer confidence, SSG organization]

     

The Balanced Scorecard can be used to help the security group provide key metrics to management in a form that they understand. The standard Balanced Scorecard is broken into four blocks: Financial, Internal Business, Innovation & Learning, and Customer (oriented metrics). 

    The SSI bucket metrics (previous slide) are fed into the Balanced Scorecard as shown in the mapping above. 
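    As a minimal sketch (my illustration, not from the deck), the mapping above can be kept as data so the scorecard roll-up is computed rather than maintained by hand; the bucket names are transcribed from the slide:

        from collections import defaultdict

        # Slide 10's bucket-to-quadrant mapping, kept as a lookup table.
        BALANCED_SCORECARD = {
            "Strategic Contacts": "FINANCIAL",
            "Ability to Project SSO Vision": "CUSTOMERS",
            "Business Marketing": "CUSTOMERS",
            "Security Anecdotes": "INNOVATION & LEARNING",
            "Performance Incentives": "CUSTOMERS",
            "Governance, Risk & Compliance (GRC)": "FINANCIAL",
            "Auditing": "INTERNAL BUSINESS",
            "SSO quality gates (touchpoints)": "INTERNAL BUSINESS",
            "Operations Management": "INTERNAL BUSINESS",
            "Supplier Management": "CUSTOMERS",
            "SSO Core Competencies": "INTERNAL BUSINESS",
            "Strategic Planning": "FINANCIAL",
            "Financial Planning": "FINANCIAL",
        }

        # Group buckets by quadrant for a management-facing report.
        quadrants = defaultdict(list)
        for bucket, quadrant in BALANCED_SCORECARD.items():
            quadrants[quadrant].append(bucket)
        for name, buckets in sorted(quadrants.items()):
            print(f"{name}: {', '.join(buckets)}")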


    Slide 11

    Security Frame for continuity

Data Validation

    Authentication

    Authorisation

    Configuration

    Sensitive Data

    Session

    Cryptography

    Exception

    Logging

     

The Security Frame lists the nine most common areas where developers make mistakes. It was created by the Patterns & Practices team at Microsoft. It provides a way to tag security work throughout the secure engineering lifecycle. 

    •  Data Validation – vetting data before it gets consumed 

    •  Authentication – Who are you? 

    •  Authorisation – Are you allowed access to this particular area? 

    •  Configuration – What are the system dependencies? 

    •  Sensitive Data – PII? PCI data? Secrets? 

    •  Session – How are related two-party communications managed? 

    •  Cryptography – key generation, key management 

    •  Exception – How are unexpected errors handled? 

    •  Logging – Who did what and when? 
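
    Because the same nine tags apply at every lifecycle phase, findings tagged this way can be counted apples to apples. A minimal Python sketch of the idea (my illustration; the findings list is hypothetical):

        from collections import Counter
        from enum import Enum

        class SecurityFrame(Enum):
            """The nine Security Frame categories listed above."""
            DATA_VALIDATION = "Data Validation"
            AUTHENTICATION = "Authentication"
            AUTHORISATION = "Authorisation"
            CONFIGURATION = "Configuration"
            SENSITIVE_DATA = "Sensitive Data"
            SESSION = "Session"
            CRYPTOGRAPHY = "Cryptography"
            EXCEPTION = "Exception"
            LOGGING = "Logging"

        # Hypothetical findings as (SDLC phase, Security Frame tag) pairs.
        findings = [
            ("requirements", SecurityFrame.AUTHENTICATION),
            ("threat model", SecurityFrame.SENSITIVE_DATA),
            ("code review", SecurityFrame.DATA_VALIDATION),
            ("code review", SecurityFrame.DATA_VALIDATION),
            ("security testing", SecurityFrame.LOGGING),
        ]

        # One tag vocabulary across every phase is what makes counts comparable.
        for tag, count in Counter(tag for _, tag in findings).most_common():
            print(f"{tag.value}: {count}")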


    Slide 12

    Mapping to CAPEC to allow for global use and flexibility

Security Frame  | CAPEC mapping

    Data Validation | Category - Injection (Injecting Control Plane content through the Data Plane) - (152); Category - Abuse of Functionality - (210); Category - Data Structure Attacks - (255); Category - Time and State Attacks - (172); Category - Probabilistic Techniques - (223)
    Authentication  | Category - Exploitation of Authentication - (225); Category - Spoofing - (156)
    Authorisation   | Category - Exploitation of Privilege/Trust - (232)
    Configuration   | Category - Physical Security Attacks - (436); Category - Resource Manipulation - (262); Attack Pattern - Supply Chain Attacks - (437)
    Sensitive Data  | -
    Session         | Category - Data Leakage Attacks - (118)
    Cryptography    | [Category - Exploitation of Authentication - (225)] 
    Exception       | Category - Resource Depletion - (119); [Category - Probabilistic Techniques - (223)] 
    Logging         | Attack Pattern - Network Reconnaissance - (286)

     

    1000 - Mechanism of Attack 

    Category - Data Leakage Attacks - (118) 

    Category - Resource Depletion - (119) 

    Category - Injection (Injecting Control Plane content through the Data Plane) - (152) 

    Category - Spoofing - (156) 

    Category - Time and State Attacks - (172) 

    Category - Abuse of Functionality - (210) 

    Category - Probabilistic Techniques - (223) 

    Category - Exploitation of Authentication - (225) 

    Category - Exploitation of Privilege/Trust - (232) 

    Category - Data Structure Attacks - (255) 

    Category - Resource Manipulation - (262) 

    Category - Physical Security Attacks - (436) 

    Attack Pattern - Network Reconnaissance - (286) 

    Attack Pattern - Social Engineering Attacks - (403) 

    Attack Pattern - Supply Chain Attacks - (437) 
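
    One way to put the mapping to work: keep it as a lookup table, so a single Security Frame tag on a finding cross-references the corresponding CAPEC IDs. A minimal sketch transcribing the table above (the Sensitive Data cell is left empty because it is ambiguous in the source):

        # Security Frame category -> CAPEC category/attack-pattern IDs, from
        # the mapping table above. Tag a finding once, reference it globally.
        SECURITY_FRAME_TO_CAPEC = {
            "Data Validation": [152, 210, 255, 172, 223],
            "Authentication": [225, 156],
            "Authorisation": [232],
            "Configuration": [436, 262, 437],
            "Sensitive Data": [],   # cell not recoverable from the source table
            "Session": [118],
            "Cryptography": [225],  # bracketed (secondary) in the source
            "Exception": [119, 223],  # 223 is bracketed (secondary) in the source
            "Logging": [286],
        }

        def capec_ids(frame_category: str) -> list:
            """CAPEC IDs associated with a Security Frame tag."""
            return SECURITY_FRAME_TO_CAPEC.get(frame_category, [])

        print(capec_ids("Data Validation"))  # [152, 210, 255, 172, 223]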


    Slide 13

    Measuring the potential cost of fixing issues

    • Modified Black & Scholes options model• Options are financial rights (not obligation)

    • Security issues are similar (no obligation to fix)• Cost of options goes up with time, volatility

    • Security issue fix cost is higher when• code is older• risk of immature secure engineering increases

    • … here’s a brief overview of the equation

    Dollar_Cost_of_Risk= Consultancy_hr * delta - Consultancy_hr * e^(-0.01 * Project_Days_Used/365) * Normal_Distribution(d1- Volatility_Over_Time)

    e = 2.71828... etc.

    Normal_Distribution = standard normal distribution

    Consultancy_hr = Consultancy costs per hour (potential costs for experts to fix)

    Project_Wage = costs per hour given the project budget

    Project_Security_Risk = 0 to 1 measurement of technical SSI competence

    Volatility_Over_Time = Project_Security_Risk*SQRT(Project_Days_Used/365)

    Project_Days_Used = Project_Current_Date - Project_Incept_Date

    d1 =LN(Consultancy_hr/Project_Wage) + ( (0.01 + Project_Security_Risk^2) /2) * (Project_Security_Risk/365) /Volatility_Over_Time

    delta = Normal_Distribution(d1)

     

    Modified Black & Scholes 

    The financial model is to measure value of options with incomplete information. It states

    that riskier/more volatile markets yield higher option values. 

    Security issues are similar 

    •  Option to fix them, not obligation 

    •  Cost of fixing them (ie. Buying the option) is more when the technical risk from the SDLC

    is higher and code phase is more advanced 

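    A minimal Python sketch of the formulas on the slide (my transcription, using the standard-library NormalDist for the normal CDF; the USD 200 rates and the dates in the demo call come from the tracking table on the next slide, and Project_Wage = 200 is my assumption):

        from datetime import date
        from math import exp, log, sqrt
        from statistics import NormalDist

        RISK_FREE_RATE = 0.01  # the constant 0.01 in the slide's formulas

        def dollar_cost_of_risk(consultancy_hr, project_wage,
                                project_security_risk,
                                project_incept_date, project_current_date):
            """Hourly dollar cost of carrying an unfixed security issue.

            consultancy_hr        -- expert cost per hour (plays the spot price)
            project_wage          -- budgeted cost per hour (plays the strike)
            project_security_risk -- 0..1 technical SSI competence (volatility)
            """
            normal = NormalDist()  # standard normal distribution
            t = (project_current_date - project_incept_date).days / 365  # years
            volatility_over_time = project_security_risk * sqrt(t)
            d1 = (log(consultancy_hr / project_wage)
                  + ((RISK_FREE_RATE + project_security_risk ** 2) / 2) * t) \
                 / volatility_over_time
            delta = normal.cdf(d1)
            # Note: the slide discounts Consultancy_hr in the second term, where
            # textbook Black-Scholes would discount the strike (Project_Wage).
            return (consultancy_hr * delta
                    - consultancy_hr * exp(-RISK_FREE_RATE * t)
                    * normal.cdf(d1 - volatility_over_time))

        # Demo: the 0.854-risk project received 26/10/2009, priced at the talk
        # date, approximately reproduces the ~$126.89 figure in the next
        # slide's table (small differences come from date rounding).
        print(round(dollar_cost_of_risk(200, 200, 0.854,
                                        date(2009, 10, 26),
                                        date(2014, 2, 28)), 2))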


    Slide 14

    Here are some tracking examples from my previous work

Project                                 | Date Received | Project Security Risk | Dollar Cost of Risk

    Ministerstvo zdravotnictva SR           | 26/10/2009 | 0.854 | $126.89
    GIE - SHIFT-10-01                       | 08/10/2009 | 0.842 | $126.11
    PetroC OMS Phase I Extend               | 10/05/2010 | 0.894 | $124.89
    PetroC OMS Maintenance Project          | 07/11/2010 | 0.894 | $118.23
    Internet Banking Client                 | 27/09/2010 | 0.842 | $114.26
    DWH Migration (Monitoring)              | 11/08/2010 | 0.781 | $109.25
    TIB-Corporate Internet Banking          | 22/02/2010 | 0.644 | $98.41
    RWE Smart Home                          | 27/10/2009 | 0.527 | $85.92
    LMI - Domain Awareness System           | 31/07/2009 | 0.469 | $79.75
    Vita Phase II                           | 18/10/2009 | 0.469 | $77.96
    Smart Home - 2.PQR                      | 06/12/2010 | 0.527 | $74.91
    CBS-BOC                                 | 08/02/2010 | 0.439 | $70.96
    SmartHome V2                            | 27/10/2010 | 0.436 | $64.25
    Meo at PC -- Passagem a Producao        | 08/12/2009 | 0.356 | $60.09
    SHaS BPM Platform Establishment Project | 28/08/2009 | 0.085 | $18.73

     

In this example you can see how a higher project security risk yields a higher cost to fix an issue, and a lower project security risk yields a much lower cost to fix. The assumption is that the hourly cost per team member is USD 200. 

    The Dollar_Cost_of_Risk represents the cost per hour of finding and fixing a security issue, given the state of the project’s secure engineering. To put it another way, if  

    •  the security requirements were missing or incomplete, it would be difficult to articulate the security controls needed 

    •  the threat model was missing or poor, it would miss threats and mitigations, which would make it more difficult to focus on dangerous code  

    •  the code review was incomplete or non-existent, it would yield potentially dangerous code 

    •  the security testing was missing, verification coverage wouldn’t exist 

    In short, it would be dangerous code that was difficult to introduce fundamental changes to. The cost of fixing such code would be high. 

    This model assumes all bugs are the same, but it can be modified for bug criticality. This can be done by changing the Consultancy_hr cost to reflect the cost of bringing in an expert to fix the issue. 
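
    That criticality modification can be as simple as a rate table fed into the pricing formula above; a minimal sketch (the tier rates are my hypothetical numbers, not from the deck):

        # Hypothetical criticality tiers (my numbers): pass the matching rate
        # as Consultancy_hr so more critical bugs price higher under the same
        # Dollar_Cost_of_Risk formula.
        CONSULTANCY_RATE_BY_CRITICALITY = {
            "low": 150.0,
            "medium": 200.0,  # the USD 200 figure used in the table above
            "high": 300.0,
            "critical": 450.0,
        }

        def consultancy_hr(criticality):
            """Expert hourly rate to plug into Dollar_Cost_of_Risk."""
            return CONSULTANCY_RATE_BY_CRITICALITY[criticality]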


    Slide 15

    Thank you!

Geoffrey Hill

    Artis Secure Ltd.

    [email protected]

    Feb 28, 2014

     

Seeing the Elephant can mean several things… the reason I picked it: 

    •  Seeing the elephant of poor data collection in the security room 

    •  In ancient times, if an unprepared army saw an elephant, it ran! The same goes for many ops and metric collection 

    •  Story of blind people describing an elephant… each has a different story (or metrics)