IEEE SYSTEMS JOURNAL, VOL. 7, NO. 2, JUNE 2013

Evaluation of Privacy and Security Risks Analysis Construct for Identity Management Systems

Ebenezer Paintsil

Abstract—This paper evaluates a taxonomy of privacy and security risks contributing factors with the Delphi method. The taxonomy was introduced in a previous work, and it is based on characteristics of tokens used in identity management systems (IDMSs). The taxonomy represents a construct for risk analysis in IDMSs. Constructs are concepts, terms, or vocabularies and symbols adopted or developed to describe, conceptualize, or define the problems and solutions within a domain. We can determine the performance and utility of a construct through evaluation. Evaluation can determine a construct's completeness, simplicity, elegance, ease of use, and understandability. Evaluation of a construct can be done with the Delphi method. The Delphi method solicits expert opinions on a subject matter in a structured group communication process. The Delphi evaluation of the taxonomy led to additional privacy and security risks contributing factors that were not covered in the initial taxonomy. Furthermore, the evaluation identified three key risk indicators and showed that the experts mostly agreed with our initial risk analysis construct for IDMSs.

Index Terms—Analysis, construct, evaluation, privacy, risk, security.

I. Introduction

IDENTITY management systems (IDMSs) create and manage identities of end users [1]. The traditional role of IDMSs is to issue end users with credentials and unique identifiers during the initial registration phase. The end users can then use these identifiers and the credentials to authenticate to a system.

End users' identifiers or credentials may consist of both pseudonyms and personally identifiable information with a variety of security features. They may be organized in the form of tokens in order to provide assurance about an identity. We define a token as a technical artifact providing assurance about an identity [2]. A token can be an identifier such as a username, a claim such as a password, an assertion such as a SAML token, a credential such as an X.509 certificate, or any possible combination of these.

Our previous work [2] surveys how the characteristics of identifiers contribute to privacy and security risks in IDMSs. The taxonomy is a construct [3] for privacy and security risks assessment. A construct is a shared knowledge or a language for communication within a known domain.

Manuscript received September 28, 2011; revised March 14, 2012; accepted July 26, 2012. Date of publication November 29, 2012; date of current version April 17, 2013. The PETweb II Project was funded by the Research Council of Norway under Grant 193030/S10.

The author is with the Department of Applied Research, ICT Norwegian Computing Center, Oslo 314, Norway (e-mail: [email protected]).

Digital Object Identifier 10.1109/JSYST.2012.2221852

Constructs are concepts, terms, or vocabularies and symbols adopted or developed to describe, conceptualize, or define the problems and solutions within a domain [4]. Constructs may be formal or informal, and they may be established through consensus building.

According to the design science research method, constructs need to be evaluated for completeness, simplicity, elegance, understandability, and ease of use [4]. This paper applies the Delphi method to build and evaluate a security and privacy risks assessment construct. We derive the initial construct from a literature survey of characteristics of tokens used in IDMSs [2]. We then evaluate the construct through consensus building with a Delphi study [5]. A Delphi study solicits expert opinions on a subject matter in a structured group communication process. It is a collective human intelligence process that may be used to develop and evaluate a construct [5].

This paper is organized as follows. Section II introduces related work. Section III gives the background and objectives of the paper. Section IV explains the Delphi method and how it applies to this paper. We present the result of the paper in Section V and discuss it in Section VI. Section VII discusses shortcomings of the paper. Section VIII aligns the result to a set of security and privacy goals in order to show its completeness. Section IX concludes and discusses possible future work.

II. Related Work

Finding assets and their value at risk is an important part of risk assessment [6]. Tokens used in IDMSs are assets, and their value at risk can contribute to privacy and security risks. Peterson introduces asset value computation factors and the importance of assets in risk calculation [7]. Asset value computation factors can be used to quantify the contributions of tokens to privacy and security risks. Peterson derived the asset value of tokens from their loss, misuse, disclosure, disruption, replacement value, or theft. However, these factors are not comprehensive for privacy and security risks assessment.

The taxonomy by Solove [8] classifies privacy risk contributing factors. The factors include data processing, data collection, data transfer, and privacy invasion. However, Solove's taxonomy is quite abstract for thorough privacy and security risks assessments in IDMSs. This paper presents a specific construct for supporting risk assessment in IDMSs.

Yamada et al. modeled the projected compensation for damages caused by privacy violations [9]. The objective of the model is to "provide organizations with a quantitative understanding of the latent risks involved in handling personal information." The metric estimates the value of leaked personal information based on risk contributing factors such as the value of basic information, the degree of information sensitivity, and the degree of ease of identification. This paper introduces a comprehensive construct for privacy and security risks assessment focusing on how tokens contribute to risk in IDMSs.



TABLE I
Model for Privacy and Security Risks Assessment in IDMS

Category of Factors | Risk Factors
Token mobility | Copyable, remotely usable, concurrently usable, immobile
Token value at risk | Loss, misuse, disclosure, disruption, theft, replacement value
Token provisioning | Creation, editing, deletion
Token frequency and duration of use | Uses per year, life-time, multiple times, one time
Token use and purpose | Original, unintended
Token assignment & relationship | Forced, chosen, jointly-established, role, pseudonymity
Token obligation and policy | Absent, present, functionality
Token claim type | Single, multiple
Token secrecy | Public, inferable, secret
Token security | Origination, identification, validation, authentication, authorization


Similarly, Fritsch and Abie [10] proposed a model called return-on-privacy-investment (ROPI) that states the effect that a particular investment has on the privacy-relevant value of an information system. The ROPI model is analogous to the return-on-security-investment (ROSI) concept by Berinato [11]. However, the model focuses on only one aspect of privacy and security risks estimation. We introduce a comprehensive construct for privacy and security risks analysis in IDMSs.

III. Study Background and Objectives

IDMSs rely on tokens to function [12]. Tokens facilitate identification, authentication, and authorization of services and access to resources in IDMSs. Therefore, the privacy and security of IDMSs may depend on the characteristics of tokens. For example, a weak password can lead to a security breach in IDMSs. Consequently, an analysis of the privacy and security of IDMSs based on characteristics of tokens is needed.

This paper evaluates the characteristics of tokens that can contribute to privacy and security risks in IDMSs. The initial taxonomy of these characteristics is shown in Table I. The first column of Table I represents the categories of risk factors, while the second column is a mixture of positive and negative risk factors. We refer to factors that can enhance privacy or security in IDMSs as positive factors, while those that can contribute to privacy or security risk are the negative factors. Individual factors and categories in the table are explained as follows.

A. Token Mobility

This category indicates the degree of mobility of a token. The degree of mobility refers to how easy it is to copy a token or its content, the physical constraints regarding the movement of the token, among others. For example, the content of a low-cost RFID tag with no additional security could easily be read by anyone, but a high-cost RFID tag that comes with additional security may ensure that only authorized readers have access to its content. Various forms of mobility create risk in IDMSs. We assess the contributions of token mobility to privacy and security risks according to the following.

1) Copyable: the token can be copied with limited effort.
2) Remotely usable: the token can be used for remote identity management.
3) Concurrently usable: the token can be used concurrently in many parallel sessions, transactions, or applications.
4) Immobile: a token is not "mobile" if it must be physically present.

B. Token Value at Risk

Finding assets and the value of the assets at risk is an important part of risk assessment [6]. Tokens are assets, and their value at risk can contribute to privacy and security risks. Thus, we can quantify the risk of using tokens by assessing the significance of the token or the value of the token to the operation and security of the IDMS. We classify the value at risk [7] as follows.

1) Loss: value at risk when the token is lost.
2) Misuse: value at risk when the token is used in wrongful ways.
3) Disclosure: value at risk when the token or token-related information gets known by someone else.
4) Disruption: value at risk when the token does not function.
5) Theft: value at risk when the token falls into someone else's possession without authorization.
6) Replacement value: the cost (effort, resources, time) to replace a token.

C. Token Provisioning

The amount of personal information collected during the creation phase could contribute to privacy risk [13], [14]. Data minimality principles prohibit excessive data collection of personal information. In addition, tokens that contain excessive personal information may be used for other unintended purposes. There should be a means by which tokens can be updated in order to ensure data integrity. Further, there should be a means by which a token can be deleted or destroyed when the purpose for its creation is no longer in the interest of the end user. This would ensure the privacy of the end user. Therefore, we distinguish the following phases of the token provisioning life cycle: creation, editing, and deletion.

D. Token Frequency and Duration of Use

The underlying IDMS information flow protocol could determine if multiple use of the same token is susceptible to privacy and security risks [15]. Different uses of even specifically constructed tokens remain linkable if the underlying IDMS's information flow protocol is not designed to prevent such a privacy risk. IDMSs that allow a service provider and an identity provider to share information or use the same token repeatedly may be susceptible to profiling. A token used in association with life-time identification or multiple times causes a high risk of secondary use and profiling [16]. We divide tokens into three categories: a token chosen or assigned for life (life-time token), a token that can be used multiple times but not for life (multiple-times token), and a token that is only valid for a session (one-time token).



E. Token Use and Purpose

Purpose specification is an important privacy principle. It requires that personal information be collected for a specific purpose and used in an open and transparent manner [17]. Personal information should not be used for purposes other than its original purpose unless informed consent is given. Since tokens may carry personal information or contain identifiers that link to personal information, any form of function creep or misuse could contribute to a privacy or security risk. Function creep occurs when a token is used for an unintended purpose. The abuse of the purpose of a token, or of how the purpose of a token is achieved, may contribute to privacy and security risks.

F. Token Assignment and Relationship

The need for user-centric IDMSs clearly emphasizes the importance of how tokens are assigned or chosen. User-centric IDMSs offer the end users the ability to control their personal information and enforce their consent by determining the choice of their identity tokens [18].

The origin of and control over tokens contribute to privacy or security risk. A token can be chosen by a person, jointly established, or forced upon the person by an authority. It can also relate to a role or a pseudonym rather than a person [15].

The token assignment and relationship risk contributing factor assesses the privacy risk if a token is forced on an end user, chosen by an end user (as in the user-centric IDMS), jointly established with the end user, or chosen according to the end user's role. It also assesses the end user's privacy risk if a token is chosen under a pseudonym [15].

G. Token Obligation and Policy

Some IDMSs give the end users an opportunity to specify policies regarding how their tokens should be used [19]. In the absence of a policy, functionality such as the use of anonymous credentials and selective disclosure of attributes can mitigate privacy and security risks. Therefore, this category checks if a policy or functionality exists for privacy protection.

H. Token Claim Type

We can enforce the security of identifiers stored in a token, or protect against the misuse of identifiers, by attaching a secret or a claim to the tokens. The types of claims are generally classified as something we know (a secret, e.g., a password), something we are (something we cannot share with others, e.g., a fingerprint or iris), and something we have (e.g., possession of a smart card, key card, or token) [13]. Each claim type is a factor. Token claim type considers the effect of the number of factors used to secure a token in IDMSs. Some tokens may require no additional secrets in order to protect their content. We refer to such tokens as single-claim tokens, since the possession of the token itself is a factor. A token that requires a factor such as a personal identification number (PIN) in order to access its content is referred to as a two-factor or multiple-claim token. We refer to a token requiring multiple factors, such as a PIN and a fingerprint, as a multiple-claim token or a three-factor token.

The number of authentication factors may determine the complexity and security of the token. For example, current credit cards may require no additional secret PIN when used for online transactions. We regard such a token as a single-factor authentication token. A single-factor authentication token would be less secure than a two- or three-factor authentication token.
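As an illustration of this factor-counting view, the short sketch below encodes the three claim types and derives a token's factor count. The names ClaimType and factor_count are ours and only illustrate the idea; they are not part of the construct.

```python
from enum import Enum

class ClaimType(Enum):
    """Broad classes of authentication claims that can protect a token."""
    KNOW = "something we know"   # e.g., a password or PIN
    ARE = "something we are"     # e.g., a fingerprint or iris
    HAVE = "something we have"   # e.g., a smart card, key card, or the token itself

def factor_count(claims: set[ClaimType]) -> int:
    """Number of distinct claim types protecting the token.

    Possession of the token itself already counts as one factor
    ("something we have"), so the result is never below one.
    """
    return len(set(claims) | {ClaimType.HAVE})

# A bare token is single-factor; adding a PIN makes it two-factor;
# adding a fingerprint on top makes it three-factor.
assert factor_count({ClaimType.HAVE}) == 1
assert factor_count({ClaimType.HAVE, ClaimType.KNOW}) == 2
assert factor_count({ClaimType.HAVE, ClaimType.KNOW, ClaimType.ARE}) == 3
```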

I. Token Secrecy

Tokens have claim types, as discussed above. The secrecy or the constraints of authentication factors, such as physical presence, can contribute to security or privacy risk in IDMSs. The nature of the secret and the constraints of the additional authentication factors may determine the "claim type."

A token secret that can easily be guessed is classified as an inferable secret. We consider the secrecy of a token as public if the additional authentication factors or claims are known by a number of people, groups, and organizations, or exist in many databases at the disposal of an unknown number of persons and organizations. We classify the secrecy of a token as secret if it is private or not shared with others because it is linked to a valuable resource such as a bank account [15]. Possible examples of private secrets are a private cryptographic key and a PIN.

J. Token Security

The security of a token can contribute to the security of the IDMSs. If a token used in an IDMS is a credential, then the system should have a mechanism to check the authority of the token issuer. If the token is a mere assertion, then the IDMS should provide a different mechanism to ensure the authenticity of the assertion. In order to ensure token security, there should be a means of ensuring the validity, identity, and legitimacy of the token. Similar to Jan and Van [13], we describe tokens' security as follows.

1) Origin: the token is issued by the indicated legitimate authority.
2) Identification: the token identifies the subject, i.e., the entity possessing the token.
3) Validation: the token has not expired; its lifespan is within the validity period or it has passed the validity test.
4) Authentication: the token belongs to the entity presenting it to the IDMS.
5) Authorization: the token grants relevant permissions to the entity possessing it.
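For readers who prefer a machine-readable view, the taxonomy in Table I can be written down as a simple category-to-factors mapping. The sketch below is only an illustration of the construct's structure; the names TOKEN_RISK_TAXONOMY and factors_for are ours and carry no meaning beyond Table I.

```python
# The Table I construct written down as a plain category-to-factors mapping.
TOKEN_RISK_TAXONOMY = {
    "Token mobility": ["copyable", "remotely usable", "concurrently usable", "immobile"],
    "Token value at risk": ["loss", "misuse", "disclosure", "disruption", "theft", "replacement value"],
    "Token provisioning": ["creation", "editing", "deletion"],
    "Token frequency and duration of use": ["uses per year", "life-time", "multiple times", "one time"],
    "Token use and purpose": ["original", "unintended"],
    "Token assignment and relationship": ["forced", "chosen", "jointly established", "role", "pseudonymity"],
    "Token obligation and policy": ["absent", "present", "functionality"],
    "Token claim type": ["single", "multiple"],
    "Token secrecy": ["public", "inferable", "secret"],
    "Token security": ["origination", "identification", "validation", "authentication", "authorization"],
}

def factors_for(category: str) -> list:
    """Return the risk factors recorded for one taxonomy category."""
    return TOKEN_RISK_TAXONOMY.get(category, [])

print(factors_for("Token secrecy"))  # ['public', 'inferable', 'secret']
```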

The taxonomy in Table I represents the initial survey result or construct. The objectives of this paper are as follows.

1) To evaluate the privacy and security risks analysis construct introduced in Table I.
2) To identify the key risk indicators (KRIs) of the construct.


IV. Method

"Delphi may be characterized as a method for structuring a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal with a complex problem" [5]. It collects and establishes an expert consensus on a subject matter. The Delphi method is suitable in situations where concepts are still vague and the subject of study is not easy to quantify.

The method usually involves two or more rounds of questions and answers eliciting expert opinions [5]. The first round introduces the purpose of the study and the initial round of opening questions. The result of the first round is analyzed for consensus and disagreements. The second round of the study requires that the result of the first round is submitted back to the experts for a second opinion. The purpose of this is to ask the experts to reconsider their initial positions or disagreements. It is an opportunity for an expert to revise their opinions regarding a subject matter. The final round is the consolidation of opinions of the previous rounds. Traditionally, the Delphi method is regarded as a forecasting method. However, it has other application areas, including evaluation and modeling [5].

This work employs the Delphi method to build and evaluate constructs. The study started in December 2010 and was completed in May 2011. We began the study by identifying experts with different backgrounds who are actively working in the area of IT security and privacy. We sent an email to each of them explaining the purpose of the study and requested their participation. Initially, 17 experts agreed to participate in the study. However, only nine experts took part in the first and second rounds of the study. Three experts dropped out in the final round (round three) of the study.

The expert panel consisted of the following: two IT security professors, one from Gjøvik University College, Norway, and the other from Radboud University, The Netherlands; a senior IT security expert from the Fraunhofer Institute, Germany; a Ph.D. fellow from Linköping University, Sweden; three researchers, one from the Unabhängiges Landeszentrum für Datenschutz (ULD), Germany, and the others from the Technical University of Dresden (TUD), Germany, and IBM Research, Switzerland; and two IT security practitioners, from Todos AB, Sweden, and A-SIT, Austria. The nine experts came from nine different institutions and six countries. They represented academia, industry, and the research community. The two professors from Gjøvik University College, Norway, and Radboud University, The Netherlands, completed the study. The others who completed the study were the Ph.D. fellow from Linköping University, Sweden; the senior IT security expert from the Fraunhofer Institute, Germany; and the researchers from IBM Research, Switzerland, and TUD, Germany. Further details of the experts who agreed to be acknowledged can be found in the Acknowledgment.

The experts were consulted through email. The first round of the study involved two tasks. The first task required the experts to list and explain in their own words the properties or characteristics of tokens that may contribute to privacy and security risks in IDMSs. The list should be independent of any specific application. The second task required the experts to assess if the usages of tokens for identification, authentication, or authorization are interchangeable without any consideration of threat and risk assessments. We focus on the result of the first task in this paper.

We categorized the results of the first round into risk contributing factors, with a list of factors for each category. We then asked the respondents to provide their opinion on the categorization and the list of factors. We asked them to state whether they agree with the categorization and, if not, how they would adjust the result. In addition, we asked the respondents if they agreed with the explanations given to each category.

We integrated the initial survey result [2] with the results obtained from the expert panel in the third and final round of the study and asked the experts to give their opinions on the consolidated results. In addition, we asked them to rate the factors according to the following scale (a minimal sketch of this rating scheme is given after the list).

1) Impact: How does the risk factor affect privacy and security in IDMSs?
2) Measurement: How easy or difficult is it to measure or quantify the risk factor?
3) Sensitivity: Is the risk factor an accurate measure of privacy and security risks of identifiers?
4) Stakeholder: Can nontechnical stakeholders understand the risk factors?
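The sketch below illustrates, under our own naming, how the four rating dimensions and the rating levels of Table VI could be encoded and how the per-dimension tallies of Table V can be computed. The Level, Rating, and tally names are ours, and the example ratings are hypothetical rather than actual expert responses.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """Rating levels from Table VI."""
    L = "low"
    M = "medium"
    H = "high"
    N = "not significant"

@dataclass
class Rating:
    """One expert's rating of a single risk category on the four dimensions."""
    impact: Level
    measurement: Level
    sensitivity: Level
    stakeholder: Level

def tally(ratings: list, dimension: str) -> Counter:
    """Count how many experts chose each level for one dimension (cf. Table V)."""
    return Counter(getattr(r, dimension) for r in ratings)

# Three hypothetical expert ratings of one category (not the actual responses).
responses = [
    Rating(Level.H, Level.H, Level.M, Level.M),
    Rating(Level.H, Level.M, Level.H, Level.L),
    Rating(Level.H, Level.H, Level.M, Level.M),
]
print(tally(responses, "impact"))       # Counter({<Level.H: 'high'>: 3})
print(tally(responses, "stakeholder"))  # Counter({<Level.M: 'medium'>: 2, <Level.L: 'low'>: 1})
```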

In order to ensure clarity, each question came with a short and easy-to-read introduction explaining its rationale. Moreover, the understanding of the questions was tested internally to ensure that possible ambiguities were removed. This was done by asking local experts to review the questions for clarity and understanding. In addition, the experts from the Delphi study who did not understand some of the questions wrote on their answer sheets how they interpreted them. For example, some of the experts interpreted "how to rate the possibilities to measure the risk factors" to mean how to quantify the risk factors. In all the cases, their interpretations were correct. In addition to the short introductions, the meaning of each risk category was explained in a separate column during the second and third rounds of the study (see Table II). This was done to assist the experts in understanding the meaning of the categories. Some of the experts also gave out their phone numbers so that we could call them for further discussions.

Sample questions for the various rounds can be found in the Appendix.

V. Results

This section explains the result of the three-round Delphi study.

A. Round I

We analyzed and categorized the responses to Round I over themes or categories. We then listed the factors under each category as suggested by the experts. The result is shown in Table II. In organizing the responses, we focused on the factors that may lead to consequences such as profiling, identity theft, linkability, availability, and function creep. For each response, we listed and categorized all the consequences and the factors that may lead to those consequences.


TABLE II
Result for Round I

Rule 1. Frequency and duration of use
  Explanation: Number of times an identifier or its protected mechanism or authenticated data is used and how long it is used.
  Factors: One time, multiple times, life time, single sign-on, multiple services.

Rule 2. Purpose of use
  Explanation: When the identifier is designed for a particular purpose, application, or context and is switched for another purpose, it can lead to privacy and security consequences.
  Factors: Application specific, privacy preserving, context specific, silo.

Rule 3. Provisioning
  Explanation: An identifier constructed, updated, or archived, possibly in conjunction with a large set of personal attributes or sensitive personal data such as age or date of birth, creates privacy risk.
  Factors: Limited personal attributes, overloaded personal attributes, sensitive personal attribute.

Rule 4. Secrecy
  Explanation: The identifier can be inferred using the arithmetic or other properties of the identifiers.
  Factors: Inferable, public, obfuscated, recoverable, revocable.

Rule 5. Claim type
  Explanation: The nature of the authentication mechanisms used to secure the identifier.
  Factors: Password, crypto key, biometric, challenge-response, single-claim, multiple-claims.

The most frequently discussed theme or risk contributing category is "frequency and duration of use." Seven out of nine respondents directly or indirectly identified factors that can be categorized as "frequency and duration of use." The "purpose of use" was the second dominant theme or category, as four respondents identified it as one of the risk contributing categories. Some of the respondents indirectly identified "provisioning" as one of the categories.

Other categories identified included "secrecy" and "claim type" of identifiers or tokens. Although the "secrecy" and "claim type" categories and their factors were less frequent, we found them easy to understand and less controversial. Hence, we did not include them in the second round of the study but reintroduced them in the final round. The second round of the study focused on the controversial and most frequent categories, such as "frequency and duration of use" and "provisioning" (see Table II).

B. Round II

During the second round of the study, we submitted the result in the first three rows of Table II to the experts for further assessment.

We obtained the result in Table III at the end of Round II. Table III shows that the respondents did not agree with some of the factors in Rows 1 and 2 of Table II. The "single sign-on" and "multiple services" factors are now categorized under "purpose of use" instead of "frequency and duration of use." In general, we achieved a consensus of opinions on the first two categories. There were divided opinions concerning the "provisioning" category, as some of the respondents did not clearly understand the factors in this category. We clarified this in the subsequent round of the study.

C. Round III

Table IV is the result of Round III. It combines the Delphi survey results and our initial model [2]. Two respondents considered the term "mobility" as wrong terminology for the category, because mobility usually refers to the positive property of users being able to take identifiers or tokens with them and use the tokens on the go and on multiple devices. Additionally, "mobility" is closely related to the (physical) protection of identifiers or tokens, which may be categorized under another risk contributing category. However, we maintained "mobility" because the majority of the respondents did not disagree with the terminology.

The factors for the "provisioning" category were not clear according to some of the respondents. Therefore, we changed the factors based on the feedback received in this round. Furthermore, the factor "multiple services" under "purpose of use" is replaced with "used with multiple services." This means the token or identifier is used with multiple services, but for more than just sign-on.

"Revocable" does not seem to fit well under "secrecy" because it is orthogonal to secrecy. "Revocability" is rather a protection mechanism to invalidate secrecy, but it can equally be applied to public, inferable, and other kinds of identifiers. However, the majority of the respondents agreed with the placement of "revocable" in the category "secrecy." "Revocable" determines whether a token secret can be revoked, or how easy or difficult it is to make a secret public.

In addition, we clarified the factors for "provisioning" based on the experts' feedback. The "provisioning" factors consider the effect of creating, updating, deleting, or archiving tokens with limited personal attributes, overloaded personal attributes, and sensitive personal attributes.

Task b (Round III) of the survey assesses how we can rate the risk factors in Table IV regarding their effect on privacy and security risks in IDMSs. In addition, the task required the experts to rate the possibilities of quantifying the risk factors and to judge their complexity for nontechnical stakeholders. Table V is the tally of the responses to Task b (Round III). The result in Table V indicates consensus in many of the ratings. The units M, L, H, and N are explained in Table VI. "Undecided" in the table means the respondents were unable to give their opinions on how to evaluate a particular factor.

Table V depicts the result of the evaluation of the consolidated results in Table IV. There is no consensus regarding the effect of "frequency and duration of use" on the privacy and security of IDMSs: 50% of the respondents rated the effect as medium, while the others rated it as high.


TABLE III
Result for Round II

Rule 1. Frequency and duration of use
  Explanation: Number of times an identifier or its protected mechanism or authenticated data is used and how long it is used.
  Factors: One time, multiple times, life time.

Rule 2. Purpose of use
  Explanation: When the identifier is designed for a particular purpose, application, or context and is switched for another purpose, it can lead to privacy and security consequences.
  Factors: Application specific, single sign-on, multiple services, context specific, silo.

Rule 3. Provisioning
  Explanation: An identifier constructed, updated, or archived, possibly in conjunction with a large set of personal attributes or sensitive personal data such as age or date of birth, creates privacy risk.
  Factors: Limited personal attributes, overloaded personal attributes, sensitive personal attribute.

Rule 4. Secrecy
  Explanation: The identifier can be inferred using the arithmetic or other properties of the identifiers.
  Factors: Inferable, public, obfuscated, recoverable, revocable.

Rule 5. Claim type
  Explanation: The nature of the authentication mechanisms used to secure the identifier.
  Factors: Password, crypto key, biometric, challenge-response, single-claim, multiple-claims.

TABLE IV
Result for Round III

Rule 1. Frequency and duration of use
  Explanation: Number of times an identifier or its protected mechanism or authenticated data is used and how long it is used.
  Factors: One time, multiple times, life time.

Rule 2. Purpose of use
  Explanation: When the identifier is designed for a particular purpose, application, or context and is switched for another purpose, it can lead to privacy and security consequences.
  Factors: Application specific, single sign-on, used with multiple services, context specific, silo.

Rule 3. Provisioning
  Explanation: Identifiers constructed, updated, or archived, possibly in conjunction with a large set of personal attributes or sensitive personal data such as age or date of birth, create privacy risk.
  Factors: Created, updated, deleted, or archived with: limited personal attributes, overloaded personal attributes, sensitive personal attribute.

Rule 4. Secrecy
  Explanation: The identifier can be inferred using the arithmetic or other properties of the identifiers.
  Factors: Inferable, public, obfuscated, revocable, recoverable.

Rule 5. Claim type
  Explanation: The nature of the authentication mechanisms used to secure the identifier.
  Factors: Password, crypto key, biometric, challenge-response, single-claim, multiple-claims.

Rule 6. Mobility
  Explanation: How easy it is to copy the identifier or its content, and the physical constraints regarding the movement of the identifier.
  Factors: Copyable, remotely usable, concurrently usable, immobile.

Rule 7. Value at risk
  Explanation: How much is lost when one or more of the following happens to an identifier; the loss could be monetary or reputational.
  Factors: Loss, misuse, disclosure, disruption, theft, replacement value.

Rule 8. Assignment and relationship
  Explanation: How the identifier is chosen or assigned: by the individual, forced on the individual, assigned to a role or a pseudonym, or jointly established.
  Factors: Forced, self, jointly-established, role, pseudonym.

Rule 9. Obligation and policy
  Explanation: Are there policy or obligation management mechanisms available in the IDM system?
  Factors: Policy absent, policy present.

On the average, the respondents agreed that the "frequency and duration of use" of a token has a significant effect on the privacy and security of IDMSs. Furthermore, the respondents agreed that the risk factors of this category are easy to measure or quantify. 50% of the respondents were of the opinion that the sensitivity of the factors is medium, or that they provide an accurate measure of privacy and security risks in IDMSs. This represents weak consensus, as half of the respondents did not support this rating. Moreover, the majority of the respondents were of the opinion that nontechnical system stakeholders can easily understand the factors in this category.

All the respondents agreed that the effect of "claim type" on privacy and security in IDMSs is high. This represents a very high degree of consensus. Furthermore, the majority of the respondents were of the opinion that the factors of "claim type" are very easy to measure or quantify and that the factors provide an accurate measure of privacy and security risks in IDMSs. Additionally, the factors can easily be understood by nontechnical stakeholders.

The factors for "purpose of use" have a medium effect on the privacy and security of IDMSs, and they are difficult to measure. On the average, the "purpose of use" factors may provide an accurate measure of privacy and security risks in IDMSs. However, the degree of consensus is low. The results also show that nontechnical stakeholders can easily understand the factors.

The majority of the respondents were of the opinion that the factors of "provisioning" have a medium effect on the privacy and security of IDMSs. This means "provisioning" may result in a costly loss of tangible or intangible assets or resources. Secondly, there is a low degree of consensus concerning the "measurement" rating. Half of the respondents agreed that the factors of "provisioning" are easy to quantify.


TABLE V
Result for Task b of Round III

Rule No. | Category of Factors | Impact | Measurement | Sensitivity | Stakeholder
1 | Frequency and duration of use | 3M, 3H | 4M, 2H | 3M, 1L, 1H, undecided | 4M, 2H
2 | Claim type | 6H | 3H, 2M, 1L | 3M, 2H, undecided | 3M, 2L, 1H
3 | Purpose of use | 5M, 1H | 4L, 1H, 1N | 2M, 2H, 1L, 1N | 3M, 1L, 1H, 1N
4 | Provisioning | 4M, 2H | 3M, 1L, 1H, 1N | 4M, 1L, 1N | 3M, 2H, 1N
5 | Secrecy | 3H, 2L, 1M | 5M, 1H | 3M, 1L, 1H, 1N | 3M, 1L, 1H, 1N
6 | Mobility | 3M, 2H, undecided | 2L, 2M, 1H, undecided | 2M, 2L, 1H, undecided | 3L, 2M, undecided
7 | Value at risk | 5H, 1L | 3L, 1M, 1H, 1N | 3H, 1L, 1M, 1N | 3M, 2H, 1L
8 | Assignment and relationship | 4L, 2H | 2M, 2H, 1L, 1N | 4L, 1M, 1H | 3M, 1L, 1H, 1N
9 | Obligation and policy | 3M, 1L, 1H, undecided | 3M, 1L, 1H, undecided | 3M, 1L, 1H, undecided | 4L, 1H, undecided

TABLE VI
Rating Unit for the Quantification

Scale | Impact | Measurement | Sensitivity | Stakeholder
Low (L) | Low impact | Difficult to measure | Factor(s) provide a less accurate measure | Not easy to understand
Medium (M) | Medium impact | Easy to measure | Factor(s) provide an accurate measure | Easy to understand
High (H) | High impact | Very easy to measure | Factor(s) provide a very accurate measure | Very easy to understand
Not significant (N) | Not significant impact | Impossible to measure | Factor(s) may provide an inaccurate measure | Rarely understandable

Also, the majority of the respondents agreed that the factors provide an accurate measure of "provisioning." Finally, there is a low degree of consensus on how nontechnical stakeholders may understand the factors. Half of the respondents agreed that the factors can easily be understood by nontechnical stakeholders.

Fifty percent of the experts rated the effect of "secrecy" on privacy and security in IDMSs as high. This represents a low degree of consensus for the rating. The majority of the respondents were of the opinion that the "secrecy" factors are easy to measure and that they provide an accurate measure of privacy and security risks of tokens. Further, nontechnical stakeholders can understand the factors.

The effect of the "value at risk" factors was rated as high by the majority of the respondents. However, there was a low degree of consensus on how difficult it is to quantify the factors. Half of the respondents agreed that it is difficult to measure or quantify the factors. Also, half of the respondents were of the opinion that the factors provide a very accurate measure of security and privacy. This represents a low degree of consensus. Similarly, half of the respondents agreed that the factors are easy for nontechnical stakeholders to understand.

The majority of the respondents agreed that the effect of the token "assignment" factors on privacy and security is low. The respondents were split on how easy it is to quantify or measure the factors. Two of the respondents were of the opinion that the factors are easy to measure, while others thought they are very easy to measure. The majority of the respondents were of the opinion that the factors provide a less accurate measure of privacy and security risks. Finally, half of the respondents agreed that nontechnical stakeholders can easily understand the risk factors.

The effect of the "obligation and policy" factors is medium. This means the factors have a medium effect on privacy and security risks of IDMSs. However, the consensus was weak, since only half of the respondents supported this evaluation.

Similarly, half of the respondents were of the opinion that the factors are easy to measure or quantify and that they provide an accurate measure of privacy and security risks in IDMSs. The majority of the respondents were of the opinion that nontechnical stakeholders cannot easily understand the factors.

Table VI explains the meaning of the units used in rating the risk factors in Table IV.

VI. Discussion

Table I represents the result of the initial survey of risk contributing factors for IDMSs [2]. The risk contributing factors of the initial survey are similar to those of the consolidated table, Table IV. This indicates a fair degree of consensus concerning the evaluation of the initial construct [2].

However, some of the factors have been refined in Table IV. The experts' opinion regarding "provisioning" is now more precise than the result obtained in the initial survey. "Purpose of use" is more refined and detailed in Table IV than the initial version in Table I. Furthermore, "secrecy" and "claim type" have additional factors that are not identified in Table I.

The second part of the evaluation requires the respondents to rate the risk contributing factors in Table IV. The objective of the rating is to determine whether a set of factors in a category is a key risk indicator (KRI). KRIs are highly relevant indicators possessing a high probability of predicting or indicating important risk [6]. The Risk IT framework [6] characterizes a KRI as being sensitive, having high impact, being easy to measure, and being relevant to both technical and nontechnical stakeholders.
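Read as a screening rule, this characterization can be sketched as follows. The thresholds and the function name is_kri reflect our own reading of the discussion below, not a formula given in the paper, and the two example calls use the dominant ratings from Table V.

```python
# Screening rule for a key risk indicator (KRI) based on the Risk IT
# characterization above: high impact, easy to measure, sensitive (accurate),
# and understandable by nontechnical stakeholders.
def is_kri(impact: str, measurement: str, sensitivity: str, stakeholder: str) -> bool:
    """Each argument is the dominant rating level of one dimension: 'L', 'M', 'H', or 'N'."""
    acceptable = {"M", "H"}  # Table VI: M = easy/accurate, H = very easy/very accurate
    return (
        impact == "H"
        and measurement in acceptable
        and sensitivity in acceptable
        and stakeholder in acceptable
    )

# "Claim type" (Table V): impact 6H, measurement skewed high, the rest medium.
print(is_kri("H", "H", "M", "M"))  # True, so "claim type" screens as a KRI
# "Purpose of use": medium impact and difficult to measure.
print(is_kri("M", "L", "M", "M"))  # False
```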

The rating table, Table V, shows that the respondents rated "claim type" factors as having a high effect on privacy and security risks of IDMSs. This means the nature of the authentication mechanisms used to secure an identifier may have a significant effect in assessing the privacy and security risks of IDMSs.


TABLE VII
Key Risk Indicators Table

Category of Factors | Factors
Frequency and duration of use | One time, multiple times, life time
Secrecy | Inferable, public, obfuscated, revocable, recoverable
Claim type | Password, crypto key, biometric, challenge-response, single-claim, multiple-claims

Thus, their nature may result in the highly costly loss of major tangible or intangible assets or resources. It may significantly violate or harm the individual, or even cause death or serious injury to a stakeholder.

The rating of how to quantify or measure "claim type" is skewed toward "high"; it may be very easy to quantify the "claim type" factors. The factors also provide an accurate measure, as the sensitivity is skewed toward medium. Nontechnical stakeholders may understand the factors, as the stakeholder understandability is skewed toward medium. We can conclude that "claim type" is a KRI for privacy and security risks in IDMSs.

Similarly, "frequency and duration of use" and "secrecy" may be considered as KRIs for privacy and security risks in IDMSs.

"Purpose of use" is not a KRI, as it has medium impact and is difficult to measure. Similarly, "provisioning" is not a KRI because it may lead to medium privacy and security risks. "Mobility" is not a KRI because its privacy and security impact is skewed toward medium. Moreover, nontechnical stakeholders may find it difficult to understand. Furthermore, "value at risk" may not be a KRI because it may be difficult to measure or quantify.

"Assignment and relationship" may not be a KRI because it may lead to low privacy and security risks in IDMSs and may provide a less accurate measure. In addition, "obligation and policy" may not be a KRI because it may have a less significant effect on privacy and security of IDMSs and may be difficult for nontechnical stakeholders to understand. The effect may result in an insignificant loss of some tangible assets, resources, or reputation. The three KRIs identified in the study are summarized in Table VII.

VII. Shortcomings

One of the shortfalls observed in this paper is the limited number of rounds. The number of rounds for the evaluation of the KRIs should have been more than one. Four rounds could have given some of the respondents an opportunity to rethink their answers. For example, a fourth round could have assisted in reassessing the privacy and security risks of the "frequency and duration of use" factors, since the opinions were divided on certain aspects of the assessment. The number of experts is equally important. Though we had the appropriate number of respondents, we believe that an additional four to twenty experts could have enriched the results. We were able to complete the study with six experts instead of nine. However, the effect of this shortfall is felt only in the final round of the study and not in the entire study.

VIII. Mapping the Construct with the New Protection Goals

Evaluation of a construct includes completeness among other criteria [4]. We show the completeness of our result by aligning the risk factors with the new protection goals (NPGs). The NPGs turn data protection into a modern, proactive, and operational tool. They "fall in line with the approved methods of risk analysis and protective measures such as baseline protection" [20]. They provide technically convertible principles that cover both asset and rights protection, which is lacking in the privacy by design goals.

The NPGs extend the classical security goals of integrity, availability, and confidentiality by adding transparency, unlinkability, and the ability to intervene (intervenability). Transparency relates to the purpose of data processing, necessity, data thriftiness, and the information needs of the data subjects. Unlinkability operationalizes purpose separation or bindingness; that is, it checks if personal data collected for a particular purpose is being used for another purpose. The ability to intervene (intervenability) gives the data subjects control over the processing of their personal data [20].

The second column of Table VIII represents the categories of risk factors, while the third represents the risk factors. The first column of Table VIII aligns the NPGs [20] with the risk factors. The alignment does not mean an exact match, but rather a correspondence, similarity, or overlap of concepts [21]. The alignment shows the completeness of the model because each NPG has corresponding risk categories and factors.
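The alignment of Table VIII can also be written down as a simple mapping, which makes this completeness check mechanical. The names NPG_ALIGNMENT and is_complete below are ours, and the mapping is read off Table VIII.

```python
# Table VIII as a mapping from each new protection goal (NPG) to the risk
# categories aligned with it.
NPG_ALIGNMENT = {
    "Confidentiality": ["Frequency and duration of use", "Provisioning", "Secrecy", "Claim type"],
    "Integrity": ["Provisioning"],
    "Intervenability": ["Provisioning", "Assignment and relationship"],
    "Unlinkability": ["Purpose of use"],
    "Availability": ["Mobility", "Value at risk"],
    "Transparency": ["Obligation and policy"],
}

def is_complete(alignment: dict) -> bool:
    """Completeness in the sense of Section VIII: every NPG maps to at least one category."""
    return all(categories for categories in alignment.values())

print(is_complete(NPG_ALIGNMENT))  # True
```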

IX. Conclusion

In this paper, we used the Delphi method to evaluate a construct for privacy and security risks assessment. The study asked an expert panel to list factors that contribute to privacy and security risks of uses of identifiers in IDMSs. The responses obtained from the study were consolidated with a previous model [2] of risk contributing factors for expert evaluation. The study showed that the experts mostly agreed with the initial risk analysis construct for IDMSs. In addition, the study evaluated the consolidated result for KRIs. The respondents identified "claim type," "frequency and duration of use," and "secrecy" as possible KRIs. Furthermore, the completeness of the result was evaluated by aligning the factors to the NPGs. The alignment evaluated the completeness of the result obtained in the Delphi study using an established set of privacy and security requirements. The evaluation showed that the result is fairly complete, since each category aligns with a security or privacy requirement. The construct, especially the KRIs, can be used for privacy and security risks assessment in IDMSs. Future work will show how the factors can be used to estimate risk in IDMSs.

Appendix

The questions for the three-round Delphi study are as follows.

1) Round I:


TABLE VIII
Mapping of the Risk Factors and the New Protection Goals

New Protection Goals | Category of Factors | Factors
Confidentiality | Frequency and duration of use | One time, multiple times, life time
Integrity, Confidentiality, Intervenability | Provisioning | Created, updated, deleted, or archived with: limited personal data, overloaded personal data, sensitive personal data
Unlinkability | Purpose of use | Application specific, single sign-on, used with multiple services, context specific, silo
Intervenability | Assignment and relationship | Forced, self, jointly established, role, pseudonym
Confidentiality | Secrecy | Inferable, public, obfuscated, revocable, recoverable
Confidentiality | Claim type | Password, crypto key, biometric, challenge-response, single-claim, multiple-claims
Availability | Mobility | Copyable, remotely usable, concurrently usable, immobile
Availability | Value at risk | Loss, misuse, disclosure, disruption, theft, replacement value
Transparency | Obligation and policy | Policy absent, policy present

a) We would like you to help us find the various ways by which the properties or the characteristics of electronic identifiers contribute to privacy and security risks in IDMSs. Please list and explain the risk properties of electronic identifiers in your own words. Focus on the electronic identifiers and IDMSs only; the inclusion of application-specific risks is out of the scope of this survey.

2) Round II:

a) We consolidated the responses received in Round I of this Delphi study into categories of risk contributing properties of electronic identifiers. We found out that some factors require further deliberation. We would like you to share your opinions regarding the factors in Table II by answering the questions below. You are free to expand or adjust the table if needed.

i) Would you consider the single sign-on and multiple services risk factors as related to "frequency and duration of use" (e.g., as an additional dimension in "application space"), or would you rather see them classified under "purpose of use"? (Please see Rows 1 and 2 in Table II.)

ii) Do the factors "application specific," "privacy preserving," "silo," and "context specific" in "purpose of use," in your opinion, point to the same overall concept, or should they be listed separately? (Please see Row 2 in Table II.)

iii) Would you agree with how we explain the provisioning of electronic identifiers in Table II, Row 3? Do you agree with the listed factors that contribute to risk when provisioning electronic identifiers? (Please see Row 3 in Table II.)

3) Round III

a) Table IV consists of the consolidated risk factors of identifiers. We would like you to comment on these risk factors, especially the new additions. The new additions are the rows in bold. They are not direct opinions but inferences we drew from the study and other research works. We encourage any kind of remark (objection, clarification, or acceptance, etc.).

b) This task asks you to rate the risk factors in Table IV concerning their potential impact on the privacy and security of IDMSs. In addition, we ask you to rate the possibilities to measure the risk factors and to judge their complexity for nontechnical users. You should fill in Table V based on the options in Table VI.

Acknowledgment

The author would like to thank the following experts for their contributions to this study: A. Vapen, Linköping University, Sweden; H. Leithold, A-SIT, Austria; P. Gullberg, Todos AB, Sweden; H. Zwingelberg, ULD, Germany; and H. Roßnagel, Fraunhofer Institute, Germany. The author also wishes to acknowledge L. Fritsch for his guidance and support.

References

[1] A. Jøsang and S. Pope, "User centric identity management," in Proc. AusCERT Conf., May 2005, pp. 77–90.

[2] E. Paintsil and L. Fritsch, "A taxonomy of privacy and security risks contributing factors," in Privacy and Identity Management for Life (IFIP Advances in Information and Communication Technology, vol. 352), S. Fischer-Hübner, P. Duquenoy, M. Hansen, R. Leenes, and G. Zhang, Eds. Boston, MA: Springer, 2011, pp. 52–63.


[3] A. R. Hevner, S. T. March, J. Park, and S. Ram, "Design science in information systems research," Manag. Inform. Syst. Quart., vol. 28, no. 1, pp. 75–105, Mar. 2004.

[4] S. T. March and G. F. Smith, "Design and natural science research on information technology," Decis. Support Syst., vol. 15, pp. 251–266, Dec. 1995.

[5] M. Turoff and H. A. Linstone. (2002). The Delphi Method—Techniques and Applications [Online]. Available: http://is.njit.edu/pubs/delphibook/

[6] The Risk IT Practitioner Guide, ISACA, Rolling Meadows, IL, 2009.

[7] G. Peterson, "Introduction to identity management risk metrics," IEEE Security Privacy, vol. 4, no. 4, pp. 88–91, Jul. 2006.

[8] D. J. Solove, "Privacy and power: Computer databases and metaphors for information privacy," Stanford Law Rev., vol. 53, p. 1393ff, Jul. 2001.

[9] E. Yamada, "Information security incident survey report," NPO Japan Netw. Security Assoc., Tokyo, Japan, Tech. Rep. 1.0, 2006.

[10] L. Fritsch and H. Abie, "Toward a research road map for the management of privacy risks in information systems," in Proc. Sicherheit, LNI 128, 2008, pp. 1–15.

[11] S. Berinato. (2002). Finally a Real Return on Security Spending [Online]. Available: http://www.cio.com.au/article/52650/finally−real−return−security−spendi

[12] WP3, "D3.17: Identity management systems—recent development," Future of Identity in the Information Society, Johann Wolfgang Goethe-Universität Frankfurt, Frankfurt, Germany, Rep. D3.17, Aug. 2009.

[13] W. Mac Gregor, W. Dutcher, and J. Khan, "An ontology of identity credentials—part 1: Background and formulation," Nat. Inst. Standards Technol., Gaithersburg, MD, Tech. Rep. 800-103, 2006 [Online]. Available: http://www.jon.grimsgaard.no/rudrevyen/index.html

[14] L. A. Bygrave, Data Protection Law: Approaching Its Rationale, Logic and Limits. Dordrecht, The Netherlands: Kluwer Law International, 2002.

[15] J. Camenisch and E. Van Herreweghen, "Design and implementation of the idemix anonymous credential system," in Proc. 9th ACM Conf. Comput. Commun. Security (CCS), 2002, pp. 21–30.

[16] L. Fritsch, "Profiling and location-based services," in Profiling the European Citizen—Cross-Disciplinary Perspectives, M. Hildebrandt and S. Gutwirth, Eds. Dordrecht, The Netherlands: Springer, 2008, pp. 147–160.

[17] M. Hansen, A. Schwartz, and A. Cooper, "Privacy and identity management," IEEE Security Privacy, vol. 6, no. 2, pp. 38–45, Mar.–Apr. 2008.

[18] P. Bramhall, M. Hansen, K. Rannenberg, and T. Roessler, "User-centric identity management: New trends in standardization and regulation," IEEE Security Privacy, vol. 5, no. 4, pp. 84–87, Jul.–Aug. 2007.

[19] C. A. Ardagna, L. Bussard, S. De Capitani di Vimercati, G. Neven, S. Paraboschi, E. Pedrini, F. S. Preiss, D. Raggett, P. Samarati, S. Trabelsi, and M. Verdicchio, "PrimeLife policy language," in Proc. W3C Workshop Access Control Appl. Scenarios, 2009 [Online]. Available: http://www.w3.org/2009/policy-ws/papers/Trabelisi.pdf

[20] M. Rost and K. Bock. (2011, Jan.). Privacy by design and the new protection goals. DuD [Online]. Available: http://www.maroki.de/pub/privacy/BockRost−PbD−DPG−en−v1f.pdf

[21] R. Matulevicius, N. Mayer, and P. Heymans, "Alignment of misuse cases with security risk management," in Proc. 3rd Int. Conf. Availability Reliability Security, 2008, pp. 1397–1404.

Ebenezer Paintsil received the M.Sc. degree in network and system administration and the LL.M. degree in information communication technology law from the University of Oslo, Oslo, Norway.

He is currently a Ph.D. Fellow with the Norwegian Computing Center, Oslo, researching privacy and security risks in identity management systems. His current research interests include computer security and privacy.