
Blueprint for a science of cybersecurity

Fred B. Schneider

The Next Wave | Vol. 19 No. 2 | 2012

    1. Introduction

A secure system must defend against all possible attacks, including those unknown to the defender. But defenders, having limited resources, typically develop defenses only for attacks they know about. New kinds of attacks are then likely to succeed. So our growing dependence on networked computing systems puts at risk individuals, commercial enterprises, the public sector, and our military.

The obvious alternative is to build systems whose security follows from first principles. Unfortunately, we know little about those principles. We need a science of cybersecurity (see box 1) that puts the construction of secure systems onto a firm foundation by giving developers a body of laws for predicting the consequences of design and implementation choices. The laws should

• transcend specific technologies and attacks, yet still be applicable in real settings,

• introduce new models and abstractions, thereby bringing pedagogical value besides predictive power, and

• facilitate discovery of new defenses as well as describe non-obvious connections between attacks, defenses, and policies, thus providing a better understanding of the landscape.

The research needed to develop this science of cybersecurity must go beyond the search for vulnerabilities in deployed systems and beyond the development of defenses for specific attacks. Yet, use of a science of cybersecurity when implementing a system should not be equated with implementing absolute security or even with concluding that security requires perfection in design and implementation. Rather, a science of cybersecurity would provide, independent of specific systems, a principled account for techniques that work, including assumptions they require and ways one set of assumptions can be transformed or discharged by another. It would articulate and organize a set of abstractions, principles, and trade-offs for building secure systems, given the realities of the threats and of our cybersecurity needs.

    BOX 1. What is a science?

The term science has evolved in meaning since Aristotle used it to describe a body of knowledge. To many, it connotes knowledge obtained by systematic experimentation, so they take that process as the defining characteristic of a science. The natural sciences satisfy this definition.

Experimentation helps in forming and then affirming theories or laws that are intended to offer verifiable predictions about man-made and natural phenomena. It is but a small step from science as experimentation to science as laws that accurately predict phenomena. The status of the natural sciences remains unaffected by changing the definition of a science in this way. But computer science now joins. It is the study of what processes can be automated efficiently; laws about specifications (problems) and implementations (algorithms) are a comfortable way to encapsulate such knowledge.



The field of cryptography comes close to exemplifying the kind of science base we seek. The focus in cryptography is on understanding the design and limitations of algorithms and protocols to compute certain kinds of results (for example, confidential or tamperproof or attributed) in the presence of certain kinds of adversaries who have access to some, but not all, information involved in the computation. Cryptography, however, is but one of many cybersecurity building blocks. A science of cybersecurity would have to encompass richer kinds of specifications, computing environments, and adversaries. Peter Neumann [1] summarized the situation well when he opined about implementing cybersecurity, "If you think cryptography is the answer to your problem, then you don't know what your problem is."

An analogy with medicine can be instructive for contemplating benefits we might expect from a science of cybersecurity. Some health problems are best handled in a reactive manner. We know what to do when somebody breaks a finger, and each year we create a new influenza vaccine in anticipation of the flu season to come. But only after making significant investments in basic medical sciences are we starting to understand the mechanisms by which cancers grow, and a cure seems to require that kind of deep understanding. Moreover, nobody believes disease will someday be a solved problem. We make enormous strides in medical research, yet new threats emerge and old defenses (for example, antibiotics) lose their effectiveness. Like good health, cybersecurity is never going to be a solved problem. Attacks coevolve with defenses and in ways to disrupt each new task that is entrusted to our networked systems. As with medical problems, some attacks are best addressed in a reactive way, while others are not. But our success in developing all defenses will benefit considerably from having laws that constitute a science of cybersecurity.

This article gives one perspective on the shape of that science and its laws. Subjects that might be characterized in laws are discussed in section 2. Then, section 3 illustrates by giving concrete examples of laws. The relationship that a science of cybersecurity would have with existing branches of computer science is explored in section 4.



    2. Laws about what?

In the natural sciences, quantities found in nature are related by laws: E = mc², PV = nRT, etc. Continuous mathematics is used to specify these laws. Continuous mathematics, however, is not intrinsic to the notion of a scientific law; predictive power is. Indeed, laws that govern digital computations are often most conveniently expressed using discrete mathematics and logical formulas. Laws for a science of cybersecurity are likely to follow suit because these, too, concern digital computation.

But what should be the subject matter of these laws? To be deemed secure, a system should, despite attacks, satisfy some prescribed policy that specifies what the system must do (for example, deliver service) and what it must not do (for example, leak secrets). And defenses are the means we employ to prevent a system from being compromised by attacks. This account suggests we strive to develop laws that relate attacks, defenses, and policies.

For generality, we should prefer laws that relate classes of attacks, classes of defenses, and classes of policies, where the classification exposes essential characteristics. Then we can look forward to having laws like "Defenses in class D enforce policies in class P despite attacks from class A" or "By composing defenses from class D′ and class D″, a defense is constructed that resists the same attacks as defenses from class D." Appropriate classes, then, are crucial for a science of cybersecurity to be relevant.

    2.1. Classes of attacks

A system's interfaces define the sole means by which an environment can change or sense the effects of system execution. Some interfaces have clear embodiment to hardware: the keyboard and mouse for inputs, a graphic display or printer for outputs, and a network channel for both inputs and outputs. Other hardware interfaces and methods of input/output will be less apparent, and some are quite obscure. For example, Halderman et al. [2] show how lowering the operating temperature of a memory board facilitates capture of secret cryptographic keys through what they term a cold boot attack. The temperature of the environment is, in effect, an input to a generally overlooked hardware interface. Most familiar are interfaces created by software. The operating system interface often provides ways for programs to communicate overtly through system calls and shared memory or covertly through various side channels (such as battery level or execution timings).

Since (by definition) interfaces provide the only means for influencing and sensing system execution, interfaces necessarily constitute the sole avenues for conducting attacks against a system. The set of interfaces and the specific operations involved is thus one obvious basis for defining classes of attacks. For example, we might distinguish attacks (such as SQL injections) that exploit overly powerful interfaces from attacks (such as buffer overflows) that exploit insufficiently conservative implementations. Another basis for defining classes of attacks is to characterize the information or effort required for conducting the attack. With some cryptosystems, for instance, efficient techniques exist for discovering a decryption key if samples of ciphertext with corresponding plaintext are available for that key, but these techniques do not work when only ciphertext is available.
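The SQL-injection case can be made concrete with a small sketch of my own (not from the article); the table, column, and attack string here are invented for illustration. The same database is exercised first through an overly powerful interface, where input text can rewrite the query, and then through a parameterized query that treats input strictly as data:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "x' OR '1'='1"

    # Overly powerful interface: string interpolation lets the input
    # become part of the query itself, so the WHERE clause is rewritten.
    leaky = db.execute(
        "SELECT secret FROM users WHERE name = '%s'" % attacker_input).fetchall()
    print(leaky)   # leaks every row

    # More conservative interface: the parameter is data only, so the
    # same attack string simply matches no name.
    safe = db.execute(
        "SELECT secret FROM users WHERE name = ?", (attacker_input,)).fetchall()
    print(safe)    # []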

A given input might cause some policies to be violated but not others. So whether an input constitutes an attack on a given system could depend on the policy that system is expected to enforce. This dependence suggests that classes of attacks could be defined in terms of what policies they compromise. The definition of denial-of-service attacks, for instance, equates a class of attacks with system availability policies.

For attacks on communications channels, cryptographers introduce classifications based on the computational power or information available to the attacker. For example, Dolev-Yao attackers are limited to reading, sending, deleting, or modifying fields in messages being sent as part of some protocol execution [3]. (The altered traffic confuses the protocol participants, and they unwittingly undertake some action the attacker desires.) But it is not obvious how to generalize these attack classes to systems that implement more complex semantics than message delivery and that provide operations beyond reading, sending, deleting, or modifying messages.


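A schematic rendering, entirely my own construction, may help fix the Dolev-Yao model of [3] in mind: every message crosses an attacker-controlled channel, and the attacker's power is exactly the four message-level operations (the message fields and amounts below are invented):

    # Every message passes through the attacker, who may read, send
    # (forward or inject), delete, or modify fields -- and nothing more.
    observed = []                       # read: the attacker sees all traffic

    def channel(msg, attacker):
        return attacker(msg)            # returning None models deletion

    def attacker(msg):
        observed.append(dict(msg))      # read
        if "amount" in msg:
            msg = dict(msg, amount=10 * msg["amount"])  # modify a field
        return msg                      # send the (possibly altered) message

    delivered = channel({"from": "alice", "to": "bob", "amount": 5}, attacker)
    print(delivered)                    # amount arrives as 50, not 5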

Finally, the role of people in a system can be a basis for defining classes of attacks. Security mechanisms that are inconvenient will be ignored or circumvented by users; security mechanisms that are difficult to understand will be misused (with vulnerabilities introduced as a result). Distinct classes of attacks can thus be classified according to how or when the human user is fooled into empowering an adversary. Phishing attacks, which enable theft of passwords and ultimately facilitate identity theft, are one such class of attacks.

    2.2. Classes of policies

Traditionally, the cybersecurity community has formulated policies in terms of three kinds of requirements:

• Confidentiality refers to which principals are allowed to learn what information.

• Integrity refers to what changes to the system (stored information and resource usage) and to its environment (outputs) are allowed.

• Availability refers to when inputs must be read or outputs produced.

This classification, as it now stands, is likely to be problematic as a basis for the laws that form a science of cybersecurity.

One problem is the lack of widespread agreement on mathematical definitions for confidentiality, integrity, and availability. A second problem is that the three kinds of requirements are not orthogonal. For example, secret data can be protected simply by corrupting it so that the resulting value no longer accurately conveys the true secret value, thus trading integrity for confidentiality.^a As a second example, any confidentiality property can be satisfied by enforcing a weak enough availability property, because a system that does nothing cannot be accessed by attackers to learn secret information.
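Randomized response, a classic database-privacy mechanism, is one instance of this integrity-for-confidentiality trade; the sketch below is my own toy illustration, with the probability parameter chosen arbitrarily:

    import random

    def randomized_response(truth: bool, p: float = 0.75) -> bool:
        """Record the true answer with probability p; otherwise its negation."""
        return truth if random.random() < p else not truth

    # Any single stored record has degraded integrity (it may be a lie),
    # which is exactly what protects confidentiality. Aggregates survive:
    # if f is the true fraction of True answers, the expected observed
    # fraction is p*f + (1 - p)*(1 - f).
    records = [randomized_response(True) for _ in range(10_000)]
    print(sum(records) / len(records))   # near 0.75, since every truth is True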

Contrast this state of affairs with trace properties, where safety (no bad thing happens) and liveness (some good thing happens) are orthogonal classes. (Formal definitions of trace properties, safety, and liveness are given in box 2 for those readers who are interested.) Moreover, there is added value when requirements are formulated in terms of safety and liveness, because safety and liveness are each connected to a proof method. Trace properties, though, are not expressive enough for specifying all confidentiality and integrity policies. The class of hyperproperties [5], a generalization of trace properties, is. And hyperproperties include safety and liveness classes that enjoy the same kind of orthogonal decomposition that exists for trace properties. So hyperproperties are a promising candidate for use in a science of cybersecurity.

    BOX 2. Trace properties, safety, and liveness

A specification for a sequential program would characterize for each input whether the program terminates and what outputs it produces. This characterization of execution as a relation is inadequate for concurrent programs. Lamport [6] introduced safety and liveness to describe the more expressive class of specifications that are needed for this setting. Safety asserts that no "bad thing" happens during execution and liveness asserts that some "good thing" happens.

A trace is a (possibly infinite) sequence of states; a trace property is a set of traces, where each trace in isolation satisfies some characteristic predicate associated with that trace property. Examples include partial correctness (the first state satisfies the input specification, and any terminal state satisfies the output specification) and mutual exclusion (in each state, the program for at most one process designates an instruction in a critical section). Not all sets of traces define trace properties. Information flow, which stipulates a correlation between the values of two variables across all traces, is an example. This set of traces does not have a characteristic predicate that depends only on each individual trace, so the set is not a trace property.

FIGURE 1. Phishing attacks, which enable theft of passwords and ultimately facilitate identity theft, can be classified according to how the human user is fooled into empowering the adversary.

a. Clarkson and Schneider [4] use information theory to derive a law that characterizes the trade-off between confidentiality and integrity for database-privacy mechanisms.


Every trace property is either safety, liveness, or the conjunction of two trace properties, one that is safety and one that is liveness [7]. In addition, an invariance argument suffices for proving that a program satisfies a trace property that is safety; a variant function is needed for proving a trace property that is liveness [8]. Thus, the safety-liveness classification for trace properties comes with proof methods beyond offering formal definitions.
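For readers who want the underlying formalism, the definitions from [7, 8] can be sketched in standard notation (this rendering is mine; Σ^ω denotes the infinite state sequences, Σ* the finite ones, and σ[0..i] the prefix of σ through position i):

    % P is a safety property: any violating trace is already doomed
    % at some finite prefix, beyond which no extension can satisfy P.
    \forall \sigma \in \Sigma^{\omega} :\;
      \sigma \notin P \;\Rightarrow\;
      \bigl( \exists i \geq 0 :\;
        \forall \beta \in \Sigma^{\omega} :\; \sigma[0..i]\,\beta \notin P \bigr)

    % P is a liveness property: every finite prefix can still be
    % extended to an infinite trace satisfying P.
    \forall \alpha \in \Sigma^{*} :\;
      \exists \beta \in \Sigma^{\omega} :\; \alpha\beta \in P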

Any classification of policies is likely to be associated with some kind of system model and, in particular, with the interfaces the model defines (hence the operations available to adversaries). For example, we might model a system in terms of the set of possible indivisible state transitions that it performs while operating, or we might model a system as a black box that reads information streams from some channels and outputs on others. Sets of indivisible state transitions are a useful model for expressing laws about classes of policies enforced by various operating system mechanisms (for example, reference monitors versus code rewriting) which themselves are concerned with allowed and disallowed changes to system state; stream models are often used for quantifying information leakage or corruption in output streams. We should expect that a science of cybersecurity will not be built around a single model or around a single classification of policies.

    2.3. Classes of defenses

A large and varied collection of different defenses can be found in the cybersecurity literature.

Program analysis and rewriting form one natural class characterized by expending the effort for deploying the defense (mostly) prior to execution. This class of defenses, called language-based security, can be further subdivided according to whether rewriting occurs (it might not occur with type-checking, for example) and according to the work required by the analysis and/or the rewriting. The undecidability of certain analysis questions and the high computation costs of answering others is sometimes a basis for further distinguishing conservative defenses: those analysis methods that can reject as being insecure programs that actually are secure, and those rewriting methods that add unnecessary checks.

Run-time defenses have, as their foundation, only a few basic mechanisms:

• Isolation. Execution of one program is somehow prevented from accessing interfaces that are associated with the execution of others. Examples include physically isolated hardware, virtual machines, and processes (which, by definition, have isolated memory segments).

• Monitoring. A reference monitor is guaranteed to receive control whenever any operation in some specified set is invoked; it further has the capacity to block subsequent execution, which it does to prevent an operation from proceeding when that execution would not comply with whatever policy is being enforced. Examples include memory mapping hardware, processors having modes that disable certain instructions, operating system kernels, and firewalls.

• Obfuscation. Code or data is transmitted or stored in a form that can be understood only with knowledge of a secret. That secret is kept from the attacker, who then is unable to abuse, understand, or alter in a meaningful way the content being protected. Examples include data encryption, digital signatures, and program transformations that increase the work factor needed to craft attacks.

Obviously, a classification of run-time defenses could be derived from this taxonomy of mechanisms.

Another way to view defenses is in terms of trust relocation. For example, by running an application under control of a reference monitor, we relocate trust in that application to trust in the reference monitor. This trust-relocation view of defenses invites discovery of general laws that govern how trust in one component can be replaced by trust in another.

FIGURE 2. A firewall is an example of a reference monitor.



We know that it is always possible for trust in an analyzer to be relocated to a proof checker: simply have an analyzer that concludes P also generate a proof of P. Moreover, this specific means of trust relocation is attractive because proof checkers can be simple, hence easy to trust; whereas, analyzers can be quite large and complicated. This suggests a related question: Is it ever possible to add defenses and transform one system into another, where the latter requires weaker assumptions about components being trusted? Perhaps trust is analogous to entropy in thermodynamics: something that can be reversed only at some cost (where cost corresponds to the strength of the assumptions that must be made)? Such questions are fundamental to the design of secure systems, and today's designers have no theory to help with answers. A science of cybersecurity could provide that foundation.
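The analyzer-to-checker relocation can be made tangible with a toy of my own devising: a brute-force satisfiability search plays the complex, untrusted analyzer, and the satisfying assignment it emits is the "proof," validated by a checker small enough to trust outright.

    from itertools import product

    # A CNF formula is a list of clauses; literal k means variable k is
    # true, literal -k means variable k is false.
    CNF = [[1, -2], [2, 3], [-1, 3]]

    def checker(cnf, assignment):
        """Small, trusted: confirm every clause has a satisfied literal."""
        return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in cnf)

    def analyzer(cnf, num_vars):
        """Large, untrusted search: find an assignment and return it as the
        witness accompanying the conclusion 'satisfiable'."""
        for bits in product([False, True], repeat=num_vars):
            if checker(cnf, bits):
                return bits
        return None

    witness = analyzer(CNF, num_vars=3)
    # Trust is relocated: we need not trust the analyzer's search, only
    # the few-line checker that validates the witness it produced.
    assert witness is not None and checker(CNF, witness)
    print(witness)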

    3. Laws already on the books

Attacks coevolve with defenses, so a system that yesterday was secure might no longer be secure tomorrow. You can then wonder whether yesterday's science of cybersecurity would be made irrelevant by new attacks and new defenses. This depends on the laws, but if the classes of attacks, defenses, and policies are wisely constructed and sufficiently general, then laws about them should be both interesting and long-lived. Examples of extant laws can provide some confirmation, and two (developed by the author) are discussed below.

    3.1. Law: Policies and reference monitors

A developer who contemplates building or modifying a system will have in mind some class of policies that must be enforced. Laws that characterize what policies are enforced by given classes of defenses would be helpful here. Such laws have been derived for various defenses. Next, we discuss a law [9] concerning reference monitors.

The policy enforced by a reference monitor is the set of traces that correspond to executions in which the reference monitor does not block any operation. This set is a trace property, because whether the reference monitor blocks an operation in a trace depends only on the contents of that trace (specifically, the preceding operations in that trace). Moreover, this trace property is safety; the set of finite sequences that end in an operation the reference monitor blocks constitutes the "bad thing." We conclude:

LAW. All reference monitors enforce trace properties that are safety.

This law, for example, implies that a reference monitor cannot enforce an information flow policy, since (as discussed in box 2) information flow is not a trace property. However, the law does not preclude using a reference monitor to enforce a policy that is stronger and, by being stronger, implies that the information flow policy also will hold. But a stronger policy will deem insecure some executions the information flow policy does not. So such a reference monitor would block some executions that would be allowed by a defense that exactly enforces information flow. The system designer is thus alerted to a trade-off: employing a reference monitor for information flow policies brings overly conservative enforcement.

The above law also suggests a new kind of run-time defense mechanism [10]. For every trace property P that is safety, there exists an automaton m_P that accepts the set of traces in P [8]. Automaton m_P is a reference monitor for P because, by definition, it rejects traces that violate P. So if code M_P that simulates m_P is invoked before every instruction in some given program S, then the result will be a new program that behaves just like S except it halts rather than executing an instruction that violates policy P. This is depicted in figure 3, where invocation M_P(x) simulates the transition that automaton m_P makes for input symbol x and repeatedly returns OK until automaton m_P would reject the sequence of inputs it has processed. Thus, the statement

    if M_P(S_i) ≠ OK then halt        (1)

placed in figure 3 immediately prior to a program statement S_i causes execution to terminate if next executing S_i would violate the policy defined by automaton m_P; that is, if executing S_i would cause policy P to be violated.




    original        inlined reference monitor
    --------        -------------------------
    S1              if M_P(S1) ≠ OK then halt
    S2              S1
    S3              if M_P(S2) ≠ OK then halt
    S4              S2
    ...             ...

FIGURE 3. Inlined reference monitor example
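A minimal sketch of the mechanism figure 3 depicts may be useful; this is my own toy, and the policy ("no send after reading a secret"), the operation names, and the Monitor class are all invented for the illustration:

    class Monitor:
        """Safety automaton playing the role of M_P: repeatedly returns
        'OK' until the next operation would violate the policy."""
        def __init__(self):
            self.read_secret = False

        def check(self, op):
            if op == "send" and self.read_secret:
                return "REJECT"
            if op == "read_secret":
                self.read_secret = True
            return "OK"

    def run_inlined(program):
        """Each step is (op, action): the check is 'inlined' before the
        action, so the program halts rather than violate the policy."""
        m = Monitor()
        for op, action in program:
            if m.check(op) != "OK":
                print("halt before", op)   # the 'then halt' of statement (1)
                return
            action()

    run_inlined([
        ("compute",     lambda: print("computing")),
        ("read_secret", lambda: print("secret read")),
        ("send",        lambda: print("sending")),   # never executes
    ])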

Such inlined reference monitors can be more efficient at run-time than traditional reference monitors, because a context switch is not required each time an inlined reference monitor is invoked. However, an inlined reference monitor must be installed separately in each program whose execution is being monitored; whereas, a traditional reference monitor can be written and installed once and for all. The per-program installation does mean that inlined reference monitors can enforce different policies on different programs, an awkward functionality to support with a single traditional reference monitor. And per-program installation also means that code (1) inserted to simulate m_P can be specialized and simplified, thereby allowing unnecessary checks to be eliminated for inlined reference monitors.

    3.2. Law: Attacks and obfuscators

We define a set of programs to be diverse if all implement the same functionality but differ in their implementation details. Diverse programs are less prone to having vulnerabilities in common, because attacks often depend on memory layout and/or instruction sequence specifics. But building multiple distinct versions of a program is expensive.^b So system implementors have turned to mechanical means for creating sets comprising diverse versions of a given program.

b. There is also experimental evidence [11] that distinct versions built by independent teams nevertheless share vulnerabilities.

For mechanically generated diversity to work as a defense, not only must implementations differ (so they have few vulnerabilities in common), but the differences must be kept secret from attackers. For example, buffer overflow attacks are generally written relative to some specific run-time stack layout. Alter this layout by rearranging the relative locations of variables as well as the return address on the stack, and an input designed to perpetrate an attack for the original stack layout is unlikely to succeed. But if the new stack layout were known by the adversary, then crafting an attack again becomes straightforward.

Programs to accomplish such transformations have been called obfuscators. An obfuscator takes two inputs, a program S and a secret key K, and produces a morph, which is a program τ(S, K) whose semantics is equivalent to S but whose implementation differs from S and from morphs generated with other keys. K specifies which exact transformations are applied in producing morph τ(S, K). Note that since S and τ are assumed to be publicly known, knowledge of K would enable an attacker to learn implementation details for successfully attacking morph τ(S, K).
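The layout-diversity piece of such a τ(S, K) can be caricatured in a few lines; this sketch is my own, the variable names are made up, and only the key-driven permutation of stack slots is modeled, not a full obfuscator:

    import random

    VARS = ["buf", "counter", "saved_ret"]   # invented frame slots

    def morph_layout(var_names, key):
        """The secret key K deterministically selects a permutation of
        stack slots: semantics is unchanged, implementation detail is not."""
        slots = list(var_names)
        random.Random(key).shuffle(slots)
        return {name: slot for slot, name in enumerate(slots)}

    print(morph_layout(VARS, key=17))   # one morph's layout
    print(morph_layout(VARS, key=42))   # another key, (likely) another layout
    # An overflow offset aimed at 'saved_ret' in the first morph is unlikely
    # to land on 'saved_ret' in the second -- unless the attacker learns K.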

Different classes of transformations are more or less effective in defending against the various different classes of attacks. This correspondence is important when designing a set of defenses for a given threat model, but knowing the specific correspondences is not the same as knowing the overall power of mechanically generated diversity as a defense. That defensive power for programs written in a C-like language has been partially characterized in a set of laws [12]. Each Obfuscator Law establishes, for a specific (common) type system T_i and obfuscator τ_i pair, what is the relationship between two sets of attacks: those blocked when type system T_i is enforced versus those that cause execution of a morph τ_i(S, K) to abort for some secret key K.

The Obfuscator Laws do not completely quantify the difference between the effectiveness of type-checking and obfuscation. But the laws are noteworthy for a science of cybersecurity because they circumvent the difficult problem of reasoning about attacks not yet invented. Laws about classes of known attacks risk irrelevance as new attacks are discovered. By formulating the Obfuscator Laws in terms of a relation between sets of attacks, the need to identify or enumerate individual attacks is avoided. To wit, the class of attacks that type-checking defends against is not known and not given, yet the power of obfuscation to defend against an attack can now be meaningfully conveyed relative to the power of type-checking.



    4. The science in context

A science of cybersecurity would build on knowledge from several existing areas of computer science. The connections to formal methods, fault-tolerance, and experimental computer science are nuanced; they are discussed below. However, cryptography, information theory, and game theory are also likely to be valuable sources of abstractions and laws. Finally, the physical sciences surely have a role to play, not only in matters of physical security but also for understanding unconventional interfaces to real devices that attackers might exploit (as exemplified by the cold boot attacks mentioned in section 2.1).

Formal methods. Attacks are possible only because a system we deploy has flaws in its implementation, design, specification, or requirements. Eliminate the flaws and we eliminate the need to deploy defenses. But even when the systems on which we rely aren't being attacked, we should want confidence that they will function correctly. The presence of flaws undermines that confidence. So cybersecurity is not the only compelling reason to eliminate flaws.

The focus of formal methods research is on methods for gaining confidence in a system by using rigorous reasoning, including programming logics and model checkers.^c This work has been remarkably successful with small systems or small specifications. It is used by companies like Microsoft to validate device drivers and Intel to validate chip designs. It is also the engine behind strong type-checking in modern programming languages (for example, Java and C#) and various code-analysis tools used in security audits. Further developments in formal methods could serve a science of cybersecurity well. However, to date, work in formal methods has been based on trace properties or something with equivalent expressive power. This foundation allows mathematically elegant characterizations for whether a program satisfies a specification and for justifying stepwise refinement of programs. But trace properties are not adequately expressive for specifying all confidentiality, integrity, and availability policies, and stepwise refinement is not sound for these richer policies. (A mathematical justification of this limitation is provided in box 3 for the interested reader.) So the foundations of today's formal methods would have to be changed to something with the expressiveness of hyperproperties; no small feat.

c. Other areas of software engineering are concerned with gaining confidence in a system through the use of experimentation (for example, testing) or management (for example, strictures on development processes).

BOX 3. Satisfies and refinement

A program S can be modeled as a trace property Σ_S containing all sequences of states that could arise from executing S, and a specific execution of S satisfies a trace property P if the trace modeling that execution is in P. Thus, S satisfies P if and only if Σ_S ⊆ P holds.

We say that a program S′ refines S, denoted S′ ⊑ S, when S′ resolves choices left unspecified by S. For example, a program that increments x by 1 refines a program that merely specifies that x be increased. A refinement S′ of S thus exhibits a subset of the executions for S: S′ ⊑ S holds if and only if Σ_S′ ⊆ Σ_S holds.

Notice that satisfies is closed under refinement. If S′ refines S and S satisfies P, then S′ satisfies P. Also, if we construct S′ by performing a series of refinements S′ ⊑ S_1, S_1 ⊑ S_2, ..., S_n ⊑ S and S satisfies P, then we are guaranteed that S′ will satisfy P too. So programs can be constructed by stepwise refinement.

With richer classes of policies, satisfies is unfortunately not closed under refinement. As an example, consider two programs. Program S_{x=y} is modeled by trace property Σ_{x=y} containing all traces in which x = y holds in all states; program S* is modeled by Σ_{S*} containing all sequences of states. We have that Σ_{x=y} ⊆ Σ_{S*} holds, so by definition S_{x=y} ⊑ S*. However, program S* enforces the confidentiality policy that no information flows between x and y, whereas its refinement S_{x=y} does not. Satisfies for the confidentiality policy is not closed under refinement, and stepwise refinement is not sound for deriving programs that satisfy this policy.
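Box 3's counterexample can be run as a small finite model of my own construction; traces are shortened to a single (x, y) state, and the crude leak test below is an invented stand-in for a real noninterference definition:

    # Programs are modeled as sets of traces.
    VALUES = (0, 1)
    S_star = {((x, y),) for x in VALUES for y in VALUES}   # all states allowed
    S_xy = {((v, v),) for v in VALUES}                     # x == y everywhere

    assert S_xy <= S_star        # so S_xy refines S_star (subset of traces)

    def leaks_x_to_y(program):
        """Does the set of possible y values vary with x? This predicate
        inspects the whole trace set at once: a hyperproperty, not a
        trace property."""
        ys = {}
        for trace in program:
            x, y = trace[0]
            ys.setdefault(x, set()).add(y)
        return len({frozenset(s) for s in ys.values()}) > 1

    print(leaks_x_to_y(S_star))  # False: y is unconstrained regardless of x
    print(leaks_x_to_y(S_xy))    # True: the refinement leaks x through y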

Byzantine fault-tolerance. A system is considered fault-tolerant if it will continue operating correctly even though some of its components exhibit faulty behavior. Fault-tolerance is usually defined relative to a fault model that defines assumptions about what components can become faulty and what kinds of behaviors faulty components might exhibit. In the Byzantine fault model [13], faulty components are permitted to collude and to perform arbitrary state transitions. A real system is unlikely to experience such hostile behavior from its faulty components, but any faulty behavior that might actually be experienced is, by definition, allowed with the Byzantine fault model. So by building a system that works for the Byzantine fault model, we ensure that the system can tolerate all behaviors that in practice could be exhibited by its faulty components.




The basic recipe for implementing such Byzantine fault-tolerance is well understood. We assume that the output of every component is a function of the preceding sequence of inputs. Each component that might fail is replaced by 2t + 1 replicas, where these replicas all receive the same sequence of inputs. Provided that t or fewer replicas are faulty, then the majority of the 2t + 1 will be correct. These correct replicas will generate identical correct outputs, so the majority output from all replicas is unaffected by the behaviors of faulty components.
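The voting step of this recipe is simple enough to sketch directly; the code below is my own minimal illustration of the 2t + 1 arithmetic, with the replica outputs invented:

    from collections import Counter

    def majority_output(outputs):
        """Majority vote over the outputs of 2t + 1 replicas; correct
        whenever at most t replicas are faulty."""
        value, count = Counter(outputs).most_common(1)[0]
        if count <= len(outputs) // 2:
            raise RuntimeError("no majority: fault assumption violated")
        return value

    t = 1
    replica_outputs = ["ok", "ok", "garbage"]   # 2t + 1 = 3, one Byzantine
    print(majority_output(replica_outputs))     # "ok"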

A faulty component in the Byzantine fault model is indistinguishable from a component that has been compromised and is under control of an attacker. We might thus conclude that if a Byzantine fault-tolerant system can tolerate t component failures, then it also could resist as many as t attacks; we could get security by implementing Byzantine fault-tolerance. Unfortunately, the argument oversimplifies, and the conclusion is unsound:

• Replication, if anything, creates more opportunities for attackers to learn confidential information. So enforcement of confidentiality is not improved by the replication required for implementing Byzantine fault-tolerance. And storing encrypted data, even when a different key is used for each replica, does not solve the problem if replicas actually must themselves be able to decrypt and process the data they store.

• Physically separated components connected only by narrow bandwidth channels are generally observed to exhibit uncorrelated failures. But physically separated replicas still will share many of the same vulnerabilities (because they will use the same code) and, therefore, will not exhibit independence to attacks. If a single attack might cause any number of components to exhibit Byzantine behavior, then little is gained by tolerating t Byzantine components.

What should be clear, though, is that mechanically generated diversity creates a kind of independence that can be a bridge from Byzantine fault-tolerance to attack tolerance. The Obfuscation Laws discussed in section 3.2 are a first step in this direction.

Experimental computer science. The code for a typical operating system can fit on a disk, and all of the protocols and interconnections that comprise the Internet are known. Yet the most efficient way to understand the emergent behavior of the Internet is not to study the documentation and program code; it is to apply stimuli and make measurements in a controlled way. Computer systems are frequently too complex to admit predictions about their behaviors. So just as experimentation is useful in the natural sciences, we should expect to find experimentation an integral part of computer science.

Even though we might prefer to derive our cybersecurity laws by logical deduction from axioms, the validity of those axioms will not always be self-evident. We often will work with axioms that embody approximations or describe models, as is done in the natural sciences. (Newton's laws of motion, for example, ignore friction and relativistic effects.) Experimentation is the way to gain confidence in the accuracy of our approximations and models. And just as experimentation in the natural sciences is supported by laboratories, experimentation for a science of cybersecurity will require test beds where controlled experiments can be run.

Experimentation in computer science is somewhat distinct from what is called experimental computer science, though. Computer scientists validate their ideas about new (hardware or software) system designs by building prototypes. This activity establishes that hidden assumptions about reality are not being overlooked. Performance measurements then demonstrate feasibility and scalability, which are otherwise difficult to predict. And for artifacts that will be used by people (for example, programming languages and systems), a prototype may be the only way to learn whether key functionality is missing and what novel functionality is useful.

Since a science of cybersecurity should lead to new ideas about how to build systems and defenses, the validation of those proposals could require building prototypes. This activity is not the same as engineering a secure system. Prototypes are built in support of a science of cybersecurity expressly to allow validation of assumptions and observation of emergent behaviors. So, a science of cybersecurity will involve some amount of experimental computer science as well as some amount of experimentation.



    5. Concluding remarks

The development of a science of cybersecurity could take decades. The sooner we get started, the sooner we will have the basis for a principled set of solutions to the cybersecurity challenge before us. Recent new federal funding initiatives in this direction are a key step. It's now time for the research community to engage.

    Acknowledgments

An opportunity to deliver the keynote at a workshop organized by the National Science Foundation (NSF), NSA, and the Intelligence Advanced Research Projects Activity on Science of Security in Fall 2008 was the impetus for me to start thinking about what shape a science of cybersecurity might take. The feedback from the participants at that workshop as well as discussions with the other speakers at a summer 2010 Jasons meeting on this subject was quite helpful. My colleagues in the NSF Team for Research in Ubiquitous Secure Technology (TRUST) Science and Technology Center have been a valuable source of feedback, as have Michael Clarkson and Riccardo Pucella. I am grateful to Carl Landwehr, Brad Martin, Bob Meushaw, Greg Morrisett, and Pat Muoio for comments on an earlier draft of this paper.

    Funding

This research is supported in part by NSF grants 0430161, 0964409, and CCF-0424422 (TRUST), Office of Naval Research grants N00014-01-1-0968 and N00014-09-1-0652, and a grant from Microsoft. The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of these organizations or the US Government.

    About the author

Fred B. Schneider joined the Cornell University faculty in 1978, where he is now the Samuel B. Eckert Professor of Computer Science. He also is the chief scientist of the NSF TRUST Science and Technology Center, and he has been professor at large at the University of Tromsø since 1996. He received a BS from Cornell University (1975) and a PhD from Stony Brook University (1978).

Schneider's research concerns trustworthy systems, most recently focusing on computer security. His early work was in formal methods and fault-tolerant distributed systems. He is author of the graduate textbook On Concurrent Programming, coauthor (with David Gries) of the undergraduate text A Logical Approach to Discrete Math, and the editor of Trust in Cyberspace, which reports findings from the US National Research Council's study that Schneider chaired on information systems trustworthiness.

A fellow of the American Association for the Advancement of Science, the Association for Computing Machinery, and the Institute of Electrical and Electronics Engineers, Schneider was granted a DSc honoris causa by the University of Newcastle-upon-Tyne in 2003. He was awarded membership in Norges Tekniske Vitenskapsakademi (the Norwegian Academy of Technological Sciences) in 2010 and the US National Academy of Engineering in 2011. His survey paper on state machine replication received a Special Interest Group on Operating Systems (SIGOPS) Hall of Fame Award.

Schneider serves on the Computing Research Association's board of directors and is a council member of the Computing Community Consortium, which catalyzes research initiatives in the computer sciences. He is also a member of the Defense Science Board and the National Institute of Standards and Technology Information Security and Privacy Advisory Board. A frequent consultant to industry, Schneider co-chairs Microsoft's Trustworthy Computing Academic Advisory Board.

Dr. Schneider can be reached at the Department of Computer Science at Cornell University in Ithaca, New York 14853.


    References

[1] Kolata G. The key vanishes: Scientist outlines unbreakable code. New York Times. 2001 Feb 20. Available at: http://www.nytimes.com/2001/02/20/science/the-key-vanishes-scientist-outlines-unbreakable-code.html

[2] Halderman JA, Schoen SD, Heninger N, Clarkson W, Paul W, Calandrino JA, Feldman AJ, Appelbaum J, Felten EW. Lest we remember: Cold boot attacks on encryption keys. In: Proceedings of the 17th USENIX Security Symposium; July 2008; p. 45-60. Available at: http://www.usenix.org/events/sec08/tech/full_papers/halderman/halderman.pdf

[3] Dolev D, Yao AC. On the security of public key protocols. IEEE Transactions on Information Theory. 1983;29(2):198-208. DOI: 10.1109/TIT.1983.1056650

[4] Clarkson M, Schneider FB. Quantification of integrity. In: Proceedings of the 23rd IEEE Computer Security Foundations Symposium; Jul 2010; Edinburgh, UK; p. 28-43. DOI: 10.1109/CSF.2010.10

[5] Clarkson M, Schneider FB. Hyperproperties. Journal of Computer Security. 2010;18(6):1157-1210.

[6] Lamport L. Proving the correctness of multiprocess programs. IEEE Transactions on Software Engineering. 1977;3(2):125-143. DOI: 10.1109/TSE.1977.229904

[7] Alpern B, Schneider FB. Defining liveness. Information Processing Letters. 1985;21(4):181-185. DOI: 10.1016/0020-0190(85)90056-0

[8] Alpern B, Schneider FB. Recognizing safety and liveness. Distributed Computing. 1987;2(3):117-126. DOI: 10.1007/BF01782772

[9] Schneider FB. Enforceable security policies. ACM Transactions on Information and System Security. 2000;3(1):30-50. DOI: 10.1145/353323.353382

[10] Erlingsson U, Schneider FB. IRM enforcement of Java stack inspection. In: Proceedings of the 2000 IEEE Symposium on Security and Privacy; May 2000; Oakland, CA; p. 246-255. DOI: 10.1109/SECPRI.2000.848461

[11] Knight JC, Leveson NG. An experimental evaluation of the assumption of independence in multiversion programming. IEEE Transactions on Software Engineering. 1986;12(1):96-109.

[12] Pucella R, Schneider FB. Independence from obfuscation: A semantic framework for diversity. Journal of Computer Security. 2010;18(5):701-749. DOI: 10.3233/JCS-2009-0379

[13] Lamport L, Shostak R, Pease M. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems. 1982;4(3):382-401. DOI: 10.1145/357172.357176