Digital Identity

L. Jean Camp

IEEE Technology and Society Magazine, Fall 2004

The word “identity” refers to an increasing range of possible nouns – from a set of personality-defining traits to a hashed computer password. The lack of conceptual clarity reflected by the overload of the word is confounding the ubiquitous practice of risk management via identity management. Thus this article clarifies the nature of identification in a digital networked world, as opposed to a paper-based world. The examples and the context are identification as used in law enforcement in the United States. Together the definitions and the examples illustrate the importance of developing policies with recognition of the social implications of technical change.

Authentication, Identity, and Access

The problems of authentication, identity, and access are difficult individually. Identification requires authentication of identity; access to data must often be preceded by authentication. The difficulty of these problems is further exacerbated by the subtleties of authentication, identity, and access in law enforcement because of privacy concerns unique to matters of justice.

Identification is a necessary element of transactions-based digital government [9]. While citizen interaction with and access to government is increasingly remote and computer mediated, the identification systems used by citizens remain firmly grounded in the age of print. Understanding how digital networked identification differs from paper-based identification systems is a critical and often overlooked element in the appropriate design of identification systems.

In this discussion, we begin by offering rigorous definitions of identity and related terms. These illuminate the core problem in modern identity systems – the assumptions about the functioning of identification are built on centuries of experience with paper. The failures of identity in the law enforcement cases discussed here can be understood as stemming from mental models of paper-based identity management mapped onto digital networked systems. The policies and organizational models of paper-based identification systems create dangerous failure modes when applied to digital networked identities. The most widely known failure mode is identity theft. These assumptions are discussed in some detail in the section on paper identities.

Applying rigorous definitions to the case of an officer stopping a citizen who may or may not be a suspect in a crime illustrates that unverified assertions of identity can, in fact, undermine rational risk management. Digitizing these falsifiable claims of identity results in less security, not more.

An identity system is useful for threat management in a high-risk situation only if there is no effective opportunity for falsification during enrollment and verification. Just as completely unverified assertions of identity have less than no value, faulty identity systems carry similar risks. Using identity management systems as risk management systems therefore creates a situation in which security is reduced, as the genuinely dangerous will present themselves, and be authenticated, as having apparently benign identities.

Definitions

Consider a passport. A passport includes an identifier, the passport number. It lists some attributes, including nationality and sometimes height and weight. It includes personal identifiers, including date of birth and name. It includes a biometric method of identification – a photograph [5]. (A biometric is a physical or biological feature or attribute that can be measured. Examples of unique biometrics are fingerprints, iris patterns, and DNA sequences. Other biometrics are not unique and therefore not useful for authentication or identification, e.g., hair color, nose length, and height.) Passports are used, therefore, for both identification (“I am me”) and identity authentication (“My government authenticates that I am me”). A passport links attribute authentication (citizenship) with identity authentication (name) to enable off-line identification (photograph and data). All of these elements are combined in a single document, and that document must be shown in whole to provide authentication. This binding of attribute to identity for the purpose of authentication is made necessary by the power and limits of paper.

A few decades ago, the immigration official reviewing a passport was not expected to calculate the exponentials needed to make a public key system work – this required computers. That official also could not check whether an attribute identified only by a number was valid in a highly dynamic system – this required a network and highly available data.

The aggregation of elements of identity, identifiers, and authentication that inherently occurs with a paper-based system does not necessarily exist in digital systems. Clarity requires breaking a traditional paper system into its functional components and understanding the interaction of those components in a digital networked environment. Without that clarity, digital systems can be designed with vulnerabilities built into their concepts of identity and authentication.

In order to further develop the core argument, the consideration of definitions is the next order of business. (The following definitions were developed cooperatively in preparation for the KSG Identity Civic Scenario by participants of that event; for more information see http://www.ksg.harvard.edu/digitalcenter/conference.) Each of these describes a function or element of an identity system.

Identifier. An identifier distinguishes a distinct person, place, or thing within the context of a specific namespace. For example, an automobile, an account, and a person each have identifiers. The automobile has a license plate and the account has a number. The person may be associated with either auto or account through additional information, e.g., a serial number or a certificate. One person, place, or thing can have multiple identifiers: a car has a permanent VIN and a temporary license plate. Each identifier is meaningful only in its namespace, and only when associated with the thing being identified. Therefore, each identifier can reasonably be thought of as a <thing identified, identifier, namespace> set, e.g., <car, license plate, state motor vehicle database>.
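
To make the triple concrete, here is a minimal sketch in Python (the class, field names, and the example registry are illustrative assumptions, not from the article) showing that an identifier only resolves to a thing when paired with its namespace.

```python
from typing import NamedTuple

class Identifier(NamedTuple):
    """An identifier is only meaningful as a complete
    <thing identified, identifier, namespace> triple."""
    thing_identified: str   # what is being named (a car, an account, a person)
    identifier: str         # the label itself (license plate, account number)
    namespace: str          # the context that gives the label meaning

# The same string can name different things in different namespaces,
# so a registry must be keyed on (identifier, namespace), not on the
# identifier alone.
registry = {
    ("ABC-123", "state motor vehicle database"): "car owned by J. Doe",
    ("ABC-123", "university parking permits"):   "permit holder #4411",
}

plate = Identifier("car", "ABC-123", "state motor vehicle database")
print(registry[(plate.identifier, plate.namespace)])
```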

Attribute. An attribute is a characteristic associated with an entity, such as an individual. Examples of persistent attributes include eye color and date of birth. Examples of temporary attributes include address, employer, and organizational role. A Social Security Number is an example of a long-lived attribute in the American governmental system. Passport numbers are long-lived attributes. Some biometric data are persistent; some change over time or can be changed (e.g., fingerprints versus hair color).

Personal identifier. Persistent identifiers consist of that set of identifiers associated with an individual human that are based on attributes that are difficult or impossible to alter. For example, human date of birth and genetic pattern are both personal identifiers. Notice that anyone can lie about his or her date of birth, but no one can change it. Personal identifiers are not inherently subject to authentication.

Identification. Identification is the association of a personal identifier with an individual presenting attributes, e.g., “You are John Doe.” Examples include accepting the association between a physical person and a claimed name; determining an association between a company and a financial record; or connecting a patient with a record of physical attributes. Identification occurs in the network based on both individual humans and devices. Identification requires an identifier (e.g., VIN, passport number).

Authentication. Authentication is proof of an attribute. Identity as it is constructed in modern systems is an attribute, e.g., “Here is my proof of payment, so please load the television onto my truck.” A name is an attribute and an identifier, but it usually is not used to provide authentication.

Identity Authentication. Authentication of identity is proving an association between an entity and an identifier: the association of a person with a credit or educational record; the association of an automobile with a license plate; or a person with a bank account. Essentially this is verification of the <thing identified, identifier> claim in a namespace. Notice that “You are John Doe” is identification, while “Your documents illustrate that you are John Doe” is identity authentication.

Attribute Authentication. Authentication of an attribute is proving an association between an entity and an attribute, e.g., the association of a painting with a certificate of authenticity. In an identity system this is usually a two-step process: identity authentication followed by authentication of the association of the attribute and the identifier. The automobile is identified by the license plate, but it is authenticated as legitimate by the database of cars that are not being sought for enforcement purposes. Of course, the license plate check may find an unflagged record in the database yet fail identity authentication if the license plate is visibly on the wrong car, e.g., a license plate issued to a VW bug on a Hummer. The person is identified by the driver’s license, and the license simultaneously authenticates the right-to-drive attribute. Notice the difference between “Your documents illustrate that you are John Doe” and “Your documents illustrate that you are a student registered at the University and have access rights in this building.”

Authorization. Authorization is a decision to allow a particular action based on an identifier or attribute. Examples include the ability of a person to make claims on lines of credit; the right of an emergency vehicle to pass through a red light; or a certification of a radiation-hardened device to be attached to a satellite under construction.
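
One way to keep these definitions distinct is to treat them as separate checks that can fail independently. The following is an illustrative Python sketch, assuming toy motor-vehicle records and hypothetical function names; it is not the article's system, only a restatement of the definitions in code.

```python
# Toy records standing in for a motor-vehicle namespace.
LICENSE_DB = {
    "D1234567": {"name": "John Doe", "right_to_drive": True},
}
STOLEN_PLATES = {"XYZ-999"}

def identity_authentication(license_no: str, claimed_name: str) -> bool:
    """Prove the <person, identifier> association: the presented
    document really belongs to the claimed identity."""
    record = LICENSE_DB.get(license_no)
    return record is not None and record["name"] == claimed_name

def attribute_authentication(license_no: str) -> bool:
    """Prove an attribute (the right to drive) associated with the
    identifier, independently of who is asking."""
    record = LICENSE_DB.get(license_no)
    return record is not None and record["right_to_drive"]

def authorization(license_no: str, claimed_name: str, plate: str) -> bool:
    """Decision to allow the action (drive away from the stop):
    requires both authentications plus a check of the vehicle."""
    return (identity_authentication(license_no, claimed_name)
            and attribute_authentication(license_no)
            and plate not in STOLEN_PLATES)

print(authorization("D1234567", "John Doe", "ABC-123"))  # True
print(authorization("D1234567", "Jane Roe", "ABC-123"))  # False: identity check fails
```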

Identity. In an identity management system, identity is that set of permanent or long-lived temporal attributes associated with an entity.

Anonym (as in anonymous). An anonym is an authenticated attribute that is not linked to an identifier. An identifier associated with no personal identifier, but only with a single-use attestation of an attribute, is an anonym. An anonymous identifier identifies an attribute, once. An anonymous identifier used more than once becomes a pseudonym.

Pseudonym. A pseudonym is an identifier associated with attributes or set(s) of transactions, but with no permanent identifier.

The most commonly used anonyms are the dollar and the euro. The most familiar anonyms self-authenticate: paper money is authenticated by its users, and the right to spend the paper money is authenticated simply by the fact that it is presented. To hold a euro is to have an anonym that authenticates the right to spend that euro. These dollars, euros, and shillings are anonymous tokens. In contrast, credit cards and debit cards authenticate an identity and then use that identity to authenticate an attribute – in this case, the right to make charges. Credit cards are identity-based payment systems. Cash is token-based anonymous payment.
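
The token/identity distinction can be restated as two small verification routines. This is an illustrative Python sketch under assumed data structures: a bearer token is honored simply because it is presented and unspent, whereas a card payment first authenticates an identity and then checks the right-to-charge attribute linked to that identity.

```python
# Token-based, anonymous: the token itself carries the right to spend.
valid_tokens = {"note-001", "note-002"}          # unspent bearer tokens

def spend_token(token: str) -> bool:
    """Presenting a valid, unspent token is the whole authentication."""
    if token in valid_tokens:
        valid_tokens.remove(token)               # a note can be spent once
        return True
    return False

# Identity-based: the attribute (right to charge) hangs off an identity.
accounts = {"card-4417": {"holder": "J. Doe", "may_charge": True}}

def charge_card(card_no: str, claimed_holder: str) -> bool:
    """First authenticate the identity, then check its attribute."""
    acct = accounts.get(card_no)
    return (acct is not None
            and acct["holder"] == claimed_holder  # identity authentication
            and acct["may_charge"])               # attribute authentication

print(spend_token("note-001"))                   # True, and reveals nothing else
print(charge_card("card-4417", "J. Doe"))        # True, but identity is disclosed
```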

Identity options exist in more than money. Disney World offers season tickets, where a season ticket holder can authenticate by hand geometry. Alternatively, Disney World offers tickets where the ticket itself authenticates the right to enter the park – anonyms that verify the right to enter. Disney stores offer insured tickets, where the ticket is linked to identity via the payment mechanism. These tickets look like anonymous tickets. However, if the ticket is lost, the purchaser can go to the Disney store or office, cancel the previously purchased ticket, and obtain a new valid ticket by presenting proof of identity. The insured ticket has an identifier that is linked, by Disney, to the payment process.

In contrast, passports explicitly reject the possibility of providing anonymity. A pseudonymous passport that simply identifies you as a citizen is not available. In the digital world, in theory, a passport could present a citizenship claim, then add a biometric to authenticate the citizenship/body claim and even enable a search of known criminals. In such a transaction the person remains anonymous, assuming there is no match between the unique biometric and the list of those sought by law enforcement. The binding between the citizenship authentication and the person physically present is much stronger than in the case of the paper passport, yet the role of identity is minimized. There is no need to store the exact biometric for this system to function. Privacy-enhancing biometric systems, such as those that depend on one-way hashes of biometrics, are more secure in the long term than systems that depend on predictable and thus more easily forged data.
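
As a rough illustration of the one-way-hash idea mentioned above, the Python sketch below stores only a salted hash of a biometric template. It assumes a stable, canonicalized template; real biometric captures are noisy, so deployed schemes rely on fuzzy extractors or similar template protection rather than a bare hash, and the function names here are hypothetical.

```python
import hashlib
import hmac
import os

def protect_template(template: bytes, salt: bytes) -> bytes:
    """Store only a salted one-way hash of the canonical biometric
    template, never the template itself."""
    return hashlib.sha256(salt + template).digest()

def verify(candidate_template: bytes, salt: bytes, stored_hash: bytes) -> bool:
    """Check a fresh capture against the stored hash without ever
    reconstructing the original biometric from what is stored."""
    return hmac.compare_digest(protect_template(candidate_template, salt),
                               stored_hash)

# Enrollment: the raw template can be discarded after hashing.
salt = os.urandom(16)
stored = protect_template(b"canonical-template-bytes", salt)

# Verification: a match confirms the citizenship/body claim without
# revealing or storing the exact biometric.
print(verify(b"canonical-template-bytes", salt, stored))   # True
print(verify(b"some-other-person", salt, stored))          # False
```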

Identity systems must be trustworthy to be useful. The more critical the context or namespace in which identities are authenticated, the more robust the attribute authentication must be. Making certain that no one is holding explosives requires stronger attribution than making certain that everyone on the plane purchased a ticket. Adding identity into attribute authentication can weaken an identity system by creating a possible attack, because it adds an extra step. When identity is added to attribute authentication, the authentication becomes only as strong as the <identity, thing identified> link. If the authentication has two steps (identity authentication, then <identity, attribute> authentication), this creates another opportunity to subvert the authentication by subverting the first stage.

As an example, consider “criminal identity theft” as it occurs in the United States. First, the arrested individual presents false identification information. Those false identifiers are entered into the national fingerprint database under the claimed name. The fingerprints are not compared with those already in the database, because the system is keyed on claims of identity (name and date of birth) rather than on the persistent attribute (fingerprint). The criminal identity theft victim discovers the theft after being arrested for another person’s crime. At that time the criminal identity theft victim’s fingerprints are compared to the criminal’s. Then the criminal’s fingerprints are compared with all other fingerprints in the national fingerprint database, often but not always yielding a match. When the difference is discovered, the criminal identity theft victim obtains a document verifying that the person’s identity is an alias of a criminal. The bearer of this certificate can then present it to law enforcement officials to attest to his or her innocence. The initial dependence on the constructed identity rather than the biometric attribute allows an attack on the law enforcement system, allowing wanted criminals to escape from police custody by whitewashing their criminal records with others’ good names.

Compare the entering of names (identities) with the direct entering of fingerprints (biometric attributes). The entering of a name asks the question, “Is this claimed identity in the system?” The attribute authentication question, “Are the fingerprints of this body in the database of criminals?” offers a more reliable response.
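
The difference between the two questions is visible in how the records are keyed. In the illustrative Python sketch below (the data structures and sample values are assumptions), a booking record keyed on the claimed name accepts an alias as a first arrest, while one keyed on the fingerprint links the same body to its prior record.

```python
# Two ways of keying the same booking records.
records_by_name = {}         # keyed on the claimed identity
records_by_fingerprint = {}  # keyed on the persistent attribute

def book_by_name(claimed_name: str, fingerprint: str, charge: str) -> list:
    """Keyed on the claim: a fresh alias always looks like a first arrest."""
    history = records_by_name.setdefault(claimed_name, [])
    history.append((fingerprint, charge))
    return history

def book_by_fingerprint(claimed_name: str, fingerprint: str, charge: str) -> list:
    """Keyed on the attribute: the same body is linked to its prior
    record regardless of the name it presents."""
    history = records_by_fingerprint.setdefault(fingerprint, [])
    history.append((claimed_name, charge))
    return history

# A repeat offender presents a stolen name on the second arrest.
book_by_name("R. Smith", "FP-77", "robbery")
print(len(book_by_name("J. Doe", "FP-77", "assault")))          # 1: alias accepted
book_by_fingerprint("R. Smith", "FP-77", "robbery")
print(len(book_by_fingerprint("J. Doe", "FP-77", "assault")))   # 2: prior record found
```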

Criminal identity theft illustrates that biometrics can deliver the wrong answer more quickly and with a greater degree of precision than paper-based identity systems. Biometrics are only as reliable as the process of enrollment, that is, the creation of the initial record that associates a biometric with attributes (e.g., criminal records).

Direct attribute authentication (“Do you have a euro?”) is more reliable than identity-based attribute authentication (“Do you have a euro credit?”) in any case where it is easier to create a fraudulent identity than it is to create a false attribute. If dollars were easier to counterfeit than identities are to steal, identity theft would arguably not be the fastest growing crime in the United States. If the process of arrest and enrollment in the American criminal justice system relied on the personal attribute (i.e., fingerprint) first, before identity authentication (i.e., name and ID), then criminal identity theft would not be useful.

Fingerprints are not validated against an arrested person's assertion of identity during the arrest procedure. This failure to verify identity allows high-risk multiple-offense felons to be classified under the low-risk single-arrest identities they present. The failure to verify every fingerprint against possible identity theft not only puts at risk the law enforcement personnel inappropriately processing dangerous multiple offenders; it also increases the value of the stolen identities of the innocent.

Identity in the Law Enforcement Context

Currently in the United States there is considerable debate about appropriate responses to an ill-defined terrorist threat and an increasing crime rate. In seeking to provide security, multiple identification mechanisms are being implemented.

At the Federal level in the U.S., there is a Transportation Security Administration (TSA) “no fly list” and also a Computer Assisted Passenger Screening System (CAPSS) list. Without addressing the widely cited possibility that the no-fly list is used to limit the efficacy of political opponents of the Bush Administration [1], [11], CAPSS can be addressed in terms of its efficacy toward its stated goal. A similar list, the Computer-Assisted Passenger Pre-Screening System (CAPPS II), is being developed by the TSA and the Department of Homeland Security. Note that no system is perfect, and in every system there will be failures, even in theory [4], [10].

The CAPPS II and CAPSS systems embody the perfect failures of static lists of identities [7]. Static lists provide identifiers associated with people who have proven to be untrustworthy in the past, or who are expected to be untrustworthy in the future.

With a static list of identities, an individual knows that some rating is associated with an identity. A serious attack requires first avoiding security scrutiny. To implement the attack, the first round is to determine whether the identity used for the attack is one that will result in scrutiny. Therefore, the obvious first effort is to determine a set of identities that will certainly not result in being subject to scrutiny. The less random the scrutiny, the easier it is to avoid.

Currently the no-fly and security check lists are static [7]. In addition, there are well-known factors that determine security scrutiny. Buying one-way tickets and having no frequent flyer number result in certain scrutiny. Having a frequent flyer number, buying a round-trip ticket, and flying coach results in no scrutiny. These well-known flags create a brittle system of security that is deterministic and therefore easy to subvert.
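
A deterministic rule set of this kind can be probed in advance until a profile that escapes scrutiny is found; randomization removes that guarantee. The sketch below is illustrative Python; the flags mirror the factors named above, and the ten percent random-selection rate is an assumed value, not a figure from the article.

```python
import random

def deterministic_scrutiny(one_way: bool, has_ff_number: bool, coach: bool) -> bool:
    """Brittle rule: scrutiny is a pure function of well-known flags,
    so an attacker can pre-test profiles until one passes."""
    return one_way or not has_ff_number

def randomized_scrutiny(one_way: bool, has_ff_number: bool, coach: bool,
                        random_rate: float = 0.10) -> bool:
    """Same flags, plus random selection: no profile is guaranteed
    to escape scrutiny, so pre-testing gains much less."""
    return deterministic_scrutiny(one_way, has_ff_number, coach) \
        or random.random() < random_rate

# An adversary's dry run: round trip, frequent flyer number, coach.
print(deterministic_scrutiny(one_way=False, has_ff_number=True, coach=True))  # always False
print(randomized_scrutiny(one_way=False, has_ff_number=True, coach=True))     # sometimes True
```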

The use of static lists of identities for security without randomization and comprehensive security requires a flawless identity system and perfect knowledge of who will commit the next crime.

The identity system must be flawless, or at least less subject to flaws than the security implemented against untrusted adversaries. Otherwise, an attacker can obtain a false identity and escape the security net.

Consider the case of the no-fly list. If that were the sole protection, then by definition everyone who flies could be trusted. Of course, this is not the case. Every passenger is examined for metal and some for explosive residue. If the existence of an untrusted list (and thus, by default, a more trusted complement) decreases the overall scrutiny of each passenger or person not on the list, overall security is decreased. The least trusted person could obtain false identifying information offering verification as one of the most trusted.

An identity-based security management system must be able to predict the source of the next attack. Otherwise, a person who has not yet implemented an attack but intends to do so can pass through the security system unchecked. A failure to predict the identity of the next attacker causes a failure in an identity-based system.

Compare one example of an identity-based list of trustworthy people to a list of specific threats that could be posed by any person. One approach would check the identities of all drivers in an area; another would refuse to allow any vehicle large enough to be a significant car bomb into an area. The Kennedy School of Government allows any vehicle to enter the parking area, but has installed large concrete “planters” that prevent any car from driving into the pedestrian or building areas. In contrast, during 2004 graduation, cars were allowed into Harvard Yard lots based on the appearance of the driver. A female clad in academic robes (the author) was able to drive into parking areas with drivable access to the pavilion. The KSG approach is to protect against car bombs; the Harvard approach is to identify untrustworthy drivers. Thus under the Harvard College approach the driver must get past identity scrutiny only.

The university example, CAPSS, and CAPPS II illustrate the risks of using identity as a security management tool. In the U.S., there is currently a court case about the more generic use of identity in security management.

Unauthenticated Assertions of Identity in Risk Management

If only individuals who are known current or future threats are subject to scrutiny, then any unknown future criminal will escape security examination. This is a general observation, and it applies to traffic stops as well as to airline travel.

In the United States, there is currently no legislation or case law requiring identification in order to travel by air, and the courts are not hearing cases that assert the right to travel without identification.

However, there was a recent court case on the right of a police officer to demand identification of any individual he or she approaches, Hiibel v. Nevada [118 Nev. 868, 59 P.3d 1201; affirmed, U.S. Supreme Court No. 03-5554, argued March 22, 2004, decided June 21, 2004]. In this case the state claimed that the ability to demand an unauthenticated claim of identity increased officer safety. In the summer session of 2004, the U.S. Supreme Court upheld the state's position and effectively reasoned that only an innocent person has to provide information. A guilty party has a Fifth Amendment right not to incriminate himself or herself; the innocent have no such right. The Court did not address how a person would know whether they were innocent of the crime being committed, or whether guilt of any crime conveys the Fifth Amendment protection. This case became an excellent illustration of the misuse of identity in risk management.

For identification of a subject at the beginning of an interrogation to be useful, three conditions must hold. First, the identification must be accurate. Second, the identification must immediately correspond to useful information. Third, the identification and information must address an identified threat, thereby providing guidance for the officer that will immediately enhance his or her safety.

Without an authenticated identity leading to the corresponding appropriate response to the specific potential threat, identification does not decrease risk. False identities, if believed, lull officers into believing that there is a low level of risk in a high-risk situation. If assertions of low-risk identities cannot be believed, then the officer must always assume he or she is in a high-risk situation. The information cannot be trusted, and cannot be used in risk management. False identities used by dangerous suspects, and the resulting false sense of security among officers, may lead to an increase in risk.

Without identity management as a risk tool, the implication is that every officer should treat every traffic stop and every encounter as potentially dangerous.

Any identity system that is not one hundred percent accurate will result in an officer lowering his or her threat assessment and acting accordingly. In order to explain why the implementation of such an identity system will create rather than reduce threats, it is necessary to clarify the distinctions between identification, authentication, and verification.

Demanding a claim of identity is demanding an identifier. The claim of identity is simply a claim of a label. There is little or no direct way in which to confirm a simple claim of a name. In order to confirm the claim, it will be necessary for the investigating officer to demand a list of associated claims that might authenticate the identity. For example, the officer would need to obtain date of birth and current address to confirm a claim of a name against driver's license records. In fact, the officer may need the driver's license number itself, depending upon the database access provided by the motor vehicle authority, to confirm the claim.

Asking for a name is either asking for a completely unverified and therefore useless label from a potentially hostile party, or asking for a data set that verifies the claim of identity. A completely unverified claim of identity is of no use in a criminal situation. An innocent party will provide correct information, while any suspicious party would misidentify. Criminal aliases were used before identity management systems entered law enforcement, and their value to criminals in committing fraud and evading detection is well documented.

First, consider false identification. In this case, an officer evaluates his or her investigation, and indeed his or her personal safety, based on false information. A name is accepted from a potential suspect as a valid identifier. A criminal identifies him- or herself as a person with no criminal record. The officer may significantly decrease his or her wariness in a potentially dangerous situation.

Second, consider that the person is indeed innocent. In this case, the officer has required that an innocent person provide information. If every inquiry is reported, then an innocent person may have a record created on the basis of nothing but a traffic stop. Innocent individuals may find themselves tagged as suspicious simply as a result of being innocently stopped multiple times. Given the current racial disparities in traffic stops, so common as to be referred to as “DWB” or “driving while black,” this possibility is particularly troublesome.

Imagine that an unchangeable biometric, such as fingerprints, is embedded on the driver’s license. Once fingerprints are available in a driver license database, those data are then available for theft and misuse. If the stolen information is from a person with no previous criminal record, then any possible record could be added – there will be no extant data, and the goal of the criminal will simply be to have a match. Utilizing real-time biometrics requires highly available networked technology. The officer would have to be able to observe the collection of the fingerprint. Then a database search would reveal identity if there were a match. With current field technology, innocent individuals will be arrested and criminals will escape, as data capture technologies have not evolved to the point where those who are not technologists can obtain reliable, consistent reads. If the technology were ever to become so mature, then there would be a possible trade of privacy for security. Under current social, organizational, and technical conditions, privacy is lost and security is decreased.

Even given ideal technologies, the treatment of all traffic stops as potentially dangerous is the best possible practice for the police officer, for the same reason that the treatment of all patients as bearing infectious disease is the best possible medical practice. In both cases, those identified as previously benign may suddenly become hazardous. In both cases, professional practice of self-protection and wariness is the best defense for the professional on the front lines.

In contrast to a traffic stop that elicits the suspect’s name (claimed identity), a fingerprint-based authentication of identity is the verification of a claimed identity using specific attributes. It enables the appropriate application of long-practiced and well-understood risk management in handling prisoners. Note that traffic stop identification is not verified; it is not used to evaluate specific attributes; and should the attributes be identified, it would not bring into play specified practices. Traffic stop identification increases risks to law enforcement personnel by substituting flawed information for best practices, and violates citizen privacy, in order to obtain this potentially dangerous outcome. In short, the two examples of fingerprints interacting with criminal identity theft and of unverified claims of identity in traffic stops illustrate the strengths and weaknesses of identity management.

Paper Identity

“Where are your papers?”

Why is it the case that law enforcement professionals, experts in risk by definition, are using identifiers in a consistently counterproductive manner, enabling criminal identity theft? Why do system designers who understand the malleability of information come to think that entering a password creates all the implications of an identity, instead of the access required by a particular role? Both the risk expert and the technical elite translate narrow digital techniques of authentication into ancient human concepts of identity. This is because centuries-old, paper-based concepts of identity are being stapled onto the digital age.


Paper is itself a technology, one that has been embedded in Western culture for so long that it is often no longer identifiable as technology [8], [12]. Paper has distinct characteristics, and paper-based identification systems build on those characteristics, creating distinct modes of failure. Paper identifiers create bindings between identification and attributes that are optional or even hazardous in digital systems. Yet the mental model of designers and law enforcement is that of paper.

Paper credentials must be human-readable. Paper credentials must be long-lived, because their creation and distribution are so expensive. Paper credentials need to be self-proving, because there is by definition no option to go on-line and confirm credentials.

Papers are physical, self-confirming documents. Without computer databases and networks, the paper has to stand on its own. The papers are physical, and they are usually carried by the person asserting an association with the documents. In the paper-based environment, transactional histories were sparse and accessible to few. A single sharing of data did not result in multiple copies distributed across multiple databases. Personal histories were verified by community networks, community centers, or common recollection, with little documentation. The classic example of this is the personal letter of reference. Physical skills and craftsmanship could be demonstrated at the moment a person applied for work; there was no need for a degree and the related certifications.

In the paper world the physical person is inherently linked to the action. Theft or fraud requires showing up to take the money. A transaction requires the presence of a body in the physical environment. Thus the body and the identity are linked. In particular, this enables an enforcement system that depends in the extreme on bodily enforcement (such as imprisonment).

The individual data that together form a ‘proof’ of identity are hard to locate or extract in a paper-based system. Consider the difficulty of locating a mother’s maiden name and a Social Security Number (SSN) for a stranger before networked digital information. Access to that information would require access to paper records, records that could be managed and secured. In a computerized and networked environment such individual data are difficult to conceal. Once each individual datum is located, the access data can be created by a simple combination. When such data are identifiable, their availability threatens privacy [2], [3], and when the data are authenticating data, their availability threatens security.

Anonymity in a paper system implies the simple absence of papers: no papers, no credential, and thus no accountability. Yet anonymity in a digital system means rather that the credential stands alone, can be confirmed during use, and thus does not depend on a network of personal information. Anonymous credentials can be long-lived (e.g., confirmation of date of birth) or short-lived (confirmation of a ticket to an event). Note that the ability to subvert that network of personal information is exactly what makes the paper model of credentials weak in the digital networked world. Confirming date of birth without additional information enables secure attribute verification and decreases overall risk. The clerk can answer the critical question (“Are you old enough to buy beer?”) without information leakage (“This attractive person lives at 37 Noan Rd and here is her phone number,” or, “This well dressed person has this date of birth and this social security number.”).
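
The clerk's question can be answered as a bare yes/no predicate over a credential, with nothing else disclosed. The Python sketch below is illustrative only; the credential fields and function name are assumptions, and a real anonymous-credential scheme would prove the attribute cryptographically rather than letting any verifier read the record.

```python
from datetime import date

# The full record stays with the issuer or on the holder's card;
# it is never handed to the clerk.
credential = {
    "date_of_birth": date(1980, 5, 17),
    "address": "37 Noan Rd",
    "ssn": "000-00-0000",
}

def over_age(cred: dict, years: int, today: date) -> bool:
    """Answer only the critical question; the verifier never sees the
    date of birth, address, or SSN."""
    dob = cred["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= years

# The clerk learns a single bit, not the underlying attributes.
print(over_age(credential, 21, date(2004, 9, 1)))   # True
```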

In contrast, consider cryptographic keys and codes used as session keys. Session keys were once the domain of the military, with codebooks distributed at great cost to treasury and life. Now every user of Internet commerce generates a session key for each purchase, in real time, and uses it only for the single transaction. Digital networked systems and modern cryptography solved the cost of generating keys and distributing them.
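
Generating a fresh key per transaction is now essentially free. The sketch below uses Python with the third-party cryptography package to encrypt a single purchase under its own short-lived key; the AES-GCM choice and the message contents are assumptions for illustration, and a real purchase would derive its session key through a handshake such as TLS rather than handling it directly.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def new_session_key() -> bytes:
    """Generate a fresh symmetric key for a single transaction."""
    return AESGCM.generate_key(bit_length=128)

def protect_purchase(order: bytes) -> tuple:
    """Encrypt one purchase under its own short-lived session key;
    the key is never reused for another transaction."""
    key = new_session_key()
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, order, None)
    return key, nonce, ciphertext

# Each purchase gets its own key, generated in real time.
key, nonce, ct = protect_purchase(b"1 x widget, ship to ...")
print(AESGCM(key).decrypt(nonce, ct, None))
```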

When concepts of paper authentication are used in digital networked systems, the nature of paper documents (persistent, self-authenticating information) creates serious risks. The same information – called identity – that links the achievement of a college degree, credit worth, health insurance risk, or a promotion is connected to emails and to purchase records at on-line merchants. The record of each web view can be traced to the login at a specific machine. Information entered into forms may remain on the machine in the user's profile or password manager. These identifiers were built for paper systems: names, social security numbers, and other human-readable persistent identifiers.

Anonymous/Pseudonymous Attribute Systems: Greater Security and Privacy

With automated face recognition, ubiquitous video, and widespread digital authentication systems based on identity, many actions that were traditionally hidden in the faces of the crowd are becoming easily associated with identity. Many times security is given as the reason for this association. These claims of security and the value of identity must be subject to clear and critical scrutiny. Sometimes privacy is lost and security is subverted by the use of identity as a substitute for attribute-based risk management.

The identity systems that function best in law enforcement are those that identify an individual through an unforgeable biometric (the capture of which is observed by law enforcement and processed appropriately) that is linked to a specific attribute. The use of other identity management tools has abetted criminals in committing fraud, escaping justice, and evading surveillance. Identity systems are best used only when the threat is one of misidentification, rather than one of attribute forgery.

Paper-based identity systems link attribute, identity, and authentication into a single stand-alone document. The availability of networked data opens entire new vistas of possible systems. Simultaneously, networked data are undermining the assumptions of “secret” information on which paper systems depend.

Identity in government is sufficiently critical that an identity system is to some degree inevitable. However, an identity system that builds upon biometric-confirmed pseudonyms can provide privacy and enhance security. Current identity management systems increasingly harm privacy and security, rather than enhancing either. Anonymous and pseudonymous attribute systems should be used whenever feasible, as imperfect attribute authentication is often weakened, not strengthened, by the addition of imperfect identity authentication.

Author Information

L. Jean Camp is Associate Professor of Informatics at Indiana University, 901 East 10th St., Bloomington, IN 47408-3912; email: [email protected]. An earlier version of this paper was presented at ISTAS'03, Amsterdam, The Netherlands.

References
[1] LA Times Editors, “‘No fly’ list traps innocent,” LA Times, Apr. 14, 2004.
[2] E. Alderman and C. Kennedy, The Right to Privacy. New York, NY: Knopf, 1995.
[3] R.E. Anderson, D.G. Johnson, D. Gotterbarn, and J. Perrolle, “Using the ACM Code of Ethics in decision making,” Commun. ACM, vol. 36, pp. 98-107, 1993.
[4] T. Aslam, I. Krsul, and E.H. Spafford, “A taxonomy of security vulnerabilities,” in Proc. 19th Nat. Information Systems Security Conf., Baltimore, MD, Oct. 6, 1996, pp. 551-560.
[5] K. Bowyer, “Face recognition technology: Security versus privacy,” IEEE Technology & Society Mag., vol. 23, no. 1, pp. 9-19, 2004.
[6] L.J. Camp, Trust and Risk in Internet Commerce. Cambridge, MA: M.I.T. Press, 1999.
[7] S. Chakrabarti and A. Strauss, “Carnival booth: An algorithm for defeating the computer-assisted passenger screening system,” First Monday, vol. 7, no. 10, Oct. 2002; http://www.firstmonday.org/issues/issue7_10/chakrabarti/index.html.
[8] E.L. Eisenstein, The Printing Press as an Agent of Change. Cambridge, U.K.: Cambridge Univ. Press, 1979.
[9] J. Fountain, Building the Virtual State: Information Technology and Institutional Change. Washington, DC: Brookings Institution Press, 2001.
[10] C. Landwehr, A. Bull, J. McDermott, and W. Choi, “A taxonomy of computer program security flaws, with examples,” ACM Computing Surveys, vol. 26, pp. 3-39, Sept. 1994.
[11] D. Lindorff, “Is a federal agency systematically harassing travelers for their political beliefs?,” In These Times, vol. 27, no. 2, Nov. 22, 2002.
[12] H.M. McLuhan, The Gutenberg Galaxy: The Making of Typographic Man. Toronto, Canada: Univ. of Toronto Press, 1962.
[13] E.M. Newton and J.D. Woodward, “Biometrics: A technical primer,” adapted from J.D. Woodward, K.W. Webb, E.M. Newton et al., Army Biometric Applications: Identifying and Addressing Sociocultural Concerns, App. A, RAND/MR-1237-A. Santa Monica, CA: RAND, 2001.
[14] B. Yee and R. Blackley, “Trusting code and trusting hardware,” Digital Identity Civic Scenario, Harvard Univ., Apr. 2003; available at http://www.ksg.harvard.edu/digitalcenter/conference/references.htm.

