
FINITE STATE MACHINE FOR THE SOCIAL ENGINEERING ATTACK DETECTION MODEL: SEADM

Francois Mouton∗, Alastair Nottingham†, Louise Leenen‡ and H.S. Venter§

∗ Defence Peace Safety & Security, Council for Scientific and Industrial Research, Pretoria, South Africa. E-mail: [email protected]
† E-mail: [email protected]
‡ E-mail: [email protected]
§ Department of Computer Science, University of Pretoria, Pretoria, South Africa. E-mail: [email protected]

Abstract: Information security is a fast-growing discipline, and relies on continued improvement of security measures to protect sensitive information. Human operators are one of the weakest links in the security chain as they are highly susceptible to manipulation. A social engineering attack targets this weakness by using various manipulation techniques to persuade individuals to perform sensitive requests. The field of social engineering is still in its infancy with respect to formal definitions, attack frameworks, and examples of attacks and detection models. In order to address social engineering formally and in a broad context, this paper proposes the underlying abstract finite state machine of the Social Engineering Attack Detection Model (SEADM). The model has been shown to successfully thwart social engineering attacks that utilise either bidirectional communication, unidirectional communication or indirect communication. Proposing and exploring the underlying finite state machine of the model provides a clearer overview of the mental processing performed within the model. While the current model provides a general procedural template for implementing detection mechanisms for social engineering attacks, the finite state machine provides a more abstract and extensible model that highlights the inter-connections between task categories associated with different scenarios. The finite state machine is intended to help facilitate the incorporation of organisation-specific extensions by grouping similar activities into distinct categories, subdivided into one or more states. The finite state machine is then verified by applying it to representative social engineering attack scenarios from all three streams of possible communication. This verifies that all the capabilities of the SEADM are kept intact, whilst being improved, by the proposed finite state machine.

Key words: Bidirectional Communication, Finite State Machine, Indirect Communication, Social Engineering, Social Engineering Attack Examples, Social Engineering Attack Detection Model, Social Engineering Attack Framework, Unidirectional Communication.

1. INTRODUCTION

Protection of sensitive information is of vital importance to organisations and governments, and the development of measures to counter illegal access to such information is an area that continues to receive increasing attention. Organisations and governments have a vested interest in securing sensitive information and the trust of clients or citizens. Technology on its own is not a sufficient safeguard against information theft; staff members — often the weak link in an information security system — can be influenced or manipulated to divulge sensitive information that allows unauthorised individuals to gain access to protected systems.

The ‘art’ of influencing people to divulge sensitive information is known as social engineering, and the process of doing so is known as a social engineering attack (SEA). There are various definitions of social engineering, and a number of different models of social engineering attacks exist [1–5]. The authors of this paper considered different definitions of social engineering and social engineering attack taxonomies in a previous paper, Towards an Ontological Model Defining the Social Engineering Domain [6], and formulated a definition for both social engineering and a social engineering attack. In addition, the authors proposed an ontological model for a social engineering attack and defined social engineering as “the science of using social interaction as a means to persuade an individual or an organisation to comply with a specific request from an attacker where either the social interaction, the persuasion, or the request involves a computer-related entity” [6].

As clearly stated by various authors [7–10], the human component is one of the most vulnerable elements within security systems. Unfortunately, it is the tendency towards cooperation and helpfulness in human nature that makes people vulnerable to the techniques used by social engineers, as social engineering attacks exploit various psychological vulnerabilities to manipulate the individual into disclosing the requested information [7, 10]. In addition, electronic computing devices have become significantly more affordable in recent years, and as a result more and more individuals have access to, and are exposed to, such devices. This provides the social engineer with more victims to target using skillfully crafted social engineering attacks.

Based on: “Underlying Finite State Machine for the Social Engineering Attack Detection Model”, by F. Mouton, A. Nottingham, L. Leenen and H.S. Venter, which appeared in the Proceedings of Information Security South Africa (ISSA) 2017, Johannesburg, 16 & 17 August 2017. © 2017 IEEE

The authors have previously performed research within the field of social engineering that focused on formalising and expanding upon concepts in the field. This research included proposing a social engineering attack framework, providing social engineering attack examples, considering ethical questions relating to social engineering, and both developing and revising the Social Engineering Attack Detection Model (SEADM) [11–14]. The previous iteration of the SEADM focused on covering all three types of communication mediums for social engineering attacks [14]. While evaluating whether the SEADM is effective at detecting social engineering attacks, it was noted that each set of questions focuses on a specific context. In addition, the current iteration of the SEADM indicates that additional questions can be added to the model to address different implementation environments, but it does not explicitly state how this should be done.

This paper addresses this problem by formalising the latest iteration of the SEADM as an abstracted deterministic finite state automaton. The authors are not aware of similar approaches by other researchers. In its original form, the SEADM was constructed as a non-deterministic flow chart that relied on general, qualitative sub-procedures to provide a model for detecting social engineering attacks. While effective as a procedure for reducing risk, the model made no provision and provided no guidance on how additional actions relevant in specific contexts and domains could be included, or at what points in the model these inclusions should be placed. Due to the inclusion of cycles, the process was also non-deterministic, which added unnecessary complexity to implementing it.

This research aims to improve the extensibility of the SEADM, and to reduce its implementation complexity by restructuring the process to be cycle-free and deterministic. The extensibility of the model is addressed by replacing the qualitative sub-procedures with generalised states that better define the role of each sub-process, while treating the questions posed in each state as general examples that may be expanded or removed, rather than as a definitive collection of necessary queries. Organising the model as a finite set of generalised states provides a more concise representation of the process that encapsulates the broad set of questions into distinct units of related, deterministic, context-specific states. This adjustment is intended to improve extensibility, while simultaneously reducing the complexity of implementing the model as an organisational process or in software.

The remainder of the paper is structured as follows. Section 2 provides background information on the previous social engineering model and discusses the authors’ previous work. Section 3 proposes the underlying deterministic finite state machine of the SEADM. Section 4 provides a discussion on how each of the states was derived from the SEADM. Section 5 introduces the reader to social engineering attack templates, and evaluates the improved version of the SEADM against these templates. Section 6 concludes the paper.

2. PREVIOUS ITERATIONS OF THE SOCIAL ENGINEERING ATTACK DETECTION MODEL

Many models and taxonomies have been proposed for social engineering attacks [1–6, 11]. The authors’ ontological model depicts that a social engineering attack “employs either direct communication or indirect communication, and has a social engineer, a target, a medium, a goal, one or more compliance principles and one or more techniques” [6]. Since direct communication is further divided into bidirectional and unidirectional communication, the ontological model clearly splits social engineering into three categories, namely bidirectional communication, unidirectional communication and indirect communication.

The initial SEADM was designed to cater specifically for social engineering attacks utilising bidirectional communication, such as a call centre environment [13, 15]. This research was the first attempt to develop a detection model for social engineering attacks, as at the time of publishing the article there was still only limited research available in this field. Most of the research in this domain still centres around the training of users [7, 16, 17]. During the revision of the SEADM, the steps have been generalised to cater for all three communication categories, namely bidirectional communication, unidirectional communication and indirect communication. It has also been shown that the model is effective in detecting social engineering attacks by testing the model against known social engineering attack examples [14]. The previous iteration of the SEADM is depicted in Figure 1.

The following section discusses the representation of the SEADM as an abstracted finite state machine and provides a short discussion on each of the states.

3. UNDERLYING FINITE STATE MACHINE OF THE SEADM

A finite state machine (also known as a finite state automaton) is an imaginary machine that embodies the idea of a sequential circuit. It has a finite set of states with a start state and accepting states, and a set of state transitions [18]. Finite state machines are commonly employed in the design and implementation of modern software and electronics, and range from simple and highly abstract models of computation or processing to complex and concrete executable mechanisms and physical circuitry.

Finite state machines can be deterministic or non-deterministic. A deterministic machine has exactly one path for every input-state pair. In a non-deterministic machine there may be multiple valid transitions for a given input-state pair, and the chosen transition is not defined; any valid transition can be followed. A deterministic finite state machine is a state machine that is guaranteed to complete for all inputs in a finite amount of time, while a non-deterministic finite state machine may execute indefinitely or fail to progress toward completion for certain input sets. A finite state machine is provably deterministic if and only if it is both free of cycles (that is, no state is ever revisited after being processed once) and defines a transition to a new state for each potential input in every state (that is, any valid input into a state results in a transition to a new state). These two properties together preclude the possibility of the state machine entering an infinite loop, either within a single state or between a collection of states, thereby ensuring that processing will always complete within a finite number of steps.

Figure 1: Social Engineering Attack Detection Model

The finite state automaton described in this paper is an abstract or general model, and is intended to provide a structured but flexible, deterministic, high-level overview of the steps taken to mitigate a social engineering attack, improving upon the original SEADM flow-chart approach. While the SEADM flow-chart provides a static procedural template for implementing detection mechanisms for social engineering attacks, the finite state diagram provides a more accessible, abstract and extensible model that highlights the inter-connections between task categories associated with different scenarios. The abstract state-based model is intended to help facilitate the incorporation of domain, system or organisation specific extensions by grouping related activities into distinct categories, subdivided into one or more generalised nodes. Should a specific task, necessary in a particular domain, system or organisational context, not be included in the flow-chart, the state diagram may be used to infer the correct location within the model to incorporate the task. It further facilitates additional analysis of state transitions that are difficult to extract from the more verbose flow-chart.

The current iteration of the SEADM, as depicted in Figure 1, utilised four different state categories: the request, receiver, requester and third party. The request states, indicated in yellow, assess information about the request itself. The receiver states, indicated in blue, consider the person handling the request and whether this person (the receiver) understands and is allowed to perform the request. The requester states, indicated in green, consider the requester and whether any information about the requester can be verified. The third party state, indicated in red, considers the involvement of a third party to support external verification of information supplied by the requester.

The same four categories and colour schemas are maintained in the state machine as they depict the primary topic that each specific state deals with, allowing one to better understand the attack detection model. The state machine is depicted in Figure 2. Each transition is labelled with a letter which indicates the condition that needs to be met before the transition can be performed. As an example, a state can have an alphabet of Y and ¬Y. The symbol ¬ indicates negation, so ¬Y is Not Y, or more accurately, the opposite of Y.

The states in Figure 2 are explained as follows:

• S1 deals with the receiver’s understanding of the request. The request is either ‘understood’ (U) or ‘not understood’ (¬U) by the receiver.

• S2 deals with the receiver requesting more information from the requester in an effort to properly understand the request. There is either ‘sufficient information’ (I) or ‘insufficient information’ (¬I) for the receiver to understand and thus perform the request.

• S3 deals with the capability of the receiver to perform the request. The receiver is either ‘capable’ (C) or ‘incapable’ (¬C) of performing the request.

• S4 deals with further verification requirements that may need to be met. The request either has further ‘verification requirements’ (R) or has ‘no verification requirements’ (¬R). Additional verification requirements may be necessary in sensitive or secure contexts.

• S5 deals with whether the receiver can verify and trust the identity of the requester, and how many of the verification steps hold. The verification steps hold to either a ‘high amount’ (VH), where all or nearly all of the supplied information can be verified, a ‘medium amount’ (VM), where the majority of information can be verified, or a ‘low amount’ (VL), where the majority of supplied information cannot be verified. These levels can be calibrated to be more or less restrictive, depending on the operating environment, and govern how the receiver should proceed.

• S6 deals with trusted third party verification of information supplied by the requester. The requester is either ‘verified’ (T) by a third party or is ‘not verified’ (¬T).

• S7 deals with the authority of the requester. The requester either has ‘sufficient authority’ (A) or ‘insufficient authority’ (¬A) for the particular request.

• SF is an end state, indicating failure. The request should not be performed by the receiver, and should either be denied outright, or referred to a receiver with more knowledge or authority.

• SS is an end state, indicating success. The request should be performed by the receiver.

Figure 2: Underlying Finite State Machine of the SEADM

The states are elaborated further in Section 4. A description of the full state machine in mathematical notation follows. The finite state machine is a 5-tuple consisting of the finite set of input alphabet characters Σ, the finite set of states Q, the start state S0, a set of accepting states F, and a set of state transitions δ that contains 3-tuples representing state transitions. A 3-tuple in δ consists of a current state, a current input and the next state.

Σ = {U, ¬U, I, ¬I, C, ¬C, R, ¬R, VL, VM, VH, T, ¬T, A, ¬A}
Q = {S1, S2, S3, S4, S5, S6, S7, SS, SF}
S0 = S1
δ = { (S1,U,S3), (S1,¬U,S2),
      (S2,I,S3), (S2,¬I,SF),
      (S3,C,S4), (S3,¬C,SF),
      (S4,R,S5), (S4,¬R,SS),
      (S5,VL,SF), (S5,VM,S6), (S5,VH,S7),
      (S6,T,S7), (S6,¬T,SF),
      (S7,A,SS), (S7,¬A,SF) }
F = {SS, SF}
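To make the formal definition concrete, the following is a minimal Python sketch of the machine defined above: δ is encoded as a dictionary and an input sequence is resolved to one of the terminal states. The plain-text symbol names (e.g. not_U for ¬U) are stand-ins chosen for this sketch and are not prescribed by the SEADM.

```python
# Minimal sketch of the SEADM finite state machine defined above.
# "not_X" is the plain-text stand-in for the negated symbol ¬X.

TERMINAL = {"SS", "SF"}  # F = {SS, SF}

# delta: (current state, input symbol) -> next state
DELTA = {
    ("S1", "U"): "S3", ("S1", "not_U"): "S2",
    ("S2", "I"): "S3", ("S2", "not_I"): "SF",
    ("S3", "C"): "S4", ("S3", "not_C"): "SF",
    ("S4", "R"): "S5", ("S4", "not_R"): "SS",
    ("S5", "VL"): "SF", ("S5", "VM"): "S6", ("S5", "VH"): "S7",
    ("S6", "T"): "S7", ("S6", "not_T"): "SF",
    ("S7", "A"): "SS", ("S7", "not_A"): "SF",
}

def run(symbols, start="S1"):
    """Consume input symbols until a terminal state (SS or SF) is reached."""
    state = start
    for symbol in symbols:
        if state in TERMINAL:
            break  # a decision has already been reached
        state = DELTA[(state, symbol)]  # deterministic: exactly one successor
    return state

# Example: U, C, R, VH, A ends in SS (perform the request),
# while not_U, not_I ends in SF (halt the request).
assert run(["U", "C", "R", "VH", "A"]) == "SS"
assert run(["not_U", "not_I"]) == "SF"
```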

Using both Figure 2 and the provided mathematical model it is straightforward to infer a state transition table. Table 1 depicts all the possible state transitions given a specific input for each state. For all input states, the output is either a terminal node or a node with a higher state index. This illustrates that the state machine is deterministic, eliminating cycles present in the original SEADM flowchart.

Table 1: State Transition Table for the SEADM

Input |  S1   S2   S3   S4   S5   S6   S7
------+----------------------------------
U     |  S3   —    —    —    —    —    —
¬U    |  S2   —    —    —    —    —    —
I     |  —    S3   —    —    —    —    —
¬I    |  —    SF   —    —    —    —    —
C     |  —    —    S4   —    —    —    —
¬C    |  —    —    SF   —    —    —    —
R     |  —    —    —    S5   —    —    —
¬R    |  —    —    —    SS   —    —    —
VL    |  —    —    —    —    SF   —    —
VM    |  —    —    —    —    S6   —    —
VH    |  —    —    —    —    S7   —    —
T     |  —    —    —    —    —    S7   —
¬T    |  —    —    —    —    —    SF   —
A     |  —    —    —    —    —    —    SS
¬A    |  —    —    —    —    —    —    SF

To further show that the state machine model is deterministic, resulting in a valid outcome of either success or failure for all given alphabet sequences, a transition table indicating all possible input alphabet sequences (paths) and their corresponding results is shown in Table 2. Each row in the table represents a path. Σi indicates that the input character is the i-th character in the path. The symbol ∀ indicates that no transition occurred in the i-th position of the path, in the case of shorter paths. This table shows that for all possible paths, the state machine returns either success or failure in a finite number of steps, and is thus deterministic.

Table 2: State Transition Table for all Input Alphabets

No  Σ1  Σ2  Σ3  Σ4  Σ5  Σ6  Σ7  Output
1   U   ∀   C   R   VH  ∀   A   SS
2   U   ∀   C   R   VM  T   A   SS
3   U   ∀   C   R   VM  T   ¬A  SF
4   U   ∀   C   ¬R  ∀   ∀   ∀   SS
5   U   ∀   C   R   VH  ∀   ¬A  SF
6   U   ∀   C   R   VM  ¬T  ∀   SF
7   U   ∀   C   R   VL  ∀   ∀   SF
8   U   ∀   ¬C  ∀   ∀   ∀   ∀   SF
9   ¬U  ¬I  ∀   ∀   ∀   ∀   ∀   SF
10  ¬U  I   C   R   VH  ∀   A   SS
11  ¬U  I   C   R   VM  T   A   SS
12  ¬U  I   C   R   VM  T   ¬A  SF
13  ¬U  I   C   ¬R  ∀   ∀   ∀   SS
14  ¬U  I   C   R   VH  ∀   ¬A  SF
15  ¬U  I   C   R   VM  ¬T  ∀   SF
16  ¬U  I   C   R   VL  ∀   ∀   SF
17  ¬U  I   ¬C  ∀   ∀   ∀   ∀   SF
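This determinism argument can also be checked mechanically. The short sketch below, reusing the DELTA dictionary and TERMINAL set from the earlier sketch, enumerates every path from S1 and confirms that each of the seventeen paths of Table 2 reaches SS or SF within at most seven inputs.

```python
# Enumerate every path through DELTA from the start state S1 and confirm
# that each one terminates in SS or SF, i.e. the machine is cycle-free.
# DELTA and TERMINAL are taken from the earlier sketch of the SEADM machine.

def all_paths(state="S1", prefix=()):
    if state in TERMINAL:
        yield prefix, state
        return
    for (src, symbol), dst in DELTA.items():
        if src == state:
            yield from all_paths(dst, prefix + (symbol,))

paths = list(all_paths())
print(len(paths))                                # 17 paths, as in Table 2
assert all(end in TERMINAL for _, end in paths)  # every path terminates
assert max(len(p) for p, _ in paths) == 7        # longest path uses 7 inputs
```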

Having considered the high-level state-based model for the SEADM, the following section elaborates on the purpose of each state and exactly how it was derived from the SEADM.

4. DISCUSSION OF EACH STATE

This section explains how each of the states has been designed and integrated from the SEADM model. Throughout this discussion the alphabet of each state is provided, and it is also shown how each state relates to the SEADM. Each state has been generalised to such an extent that it can contain any number of questions required to achieve a specific transition result. This provides a rough guide for flexibility and extensibility, depending on the particular context the model is applied to.
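As a purely illustrative sketch of this extensibility (it is not part of the SEADM itself), a state can be represented as an arbitrary list of yes/no questions together with a rule that maps the answers to one of that state's transition symbols. The example below uses the three S3 questions listed in Section 4.3; an organisation could append its own context-specific questions without altering the structure of the machine.

```python
# Hypothetical representation of a generalised state: any number of yes/no
# questions may be attached, and a decision rule maps the answers to one of
# the state's transition symbols (state S3 is used as the example).

S3_QUESTIONS = [
    "Do you understand how to perform the request?",
    "Are you capable of performing or providing the request?",
    "Do you have the authority to perform the request?",
    # An organisation may append its own questions here, for example:
    # "Does your current role description cover this type of request?",
]

def s3_symbol(answers):
    """Map the answers for state S3 to the symbol C or not_C.

    For S3 every requirement must hold, so all answers must be 'yes' (True).
    """
    return "C" if all(answers) else "not_C"

# Example: the receiver understands and is capable, but lacks authority,
# so the transition (S3, not_C, SF) is taken.
print(s3_symbol([True, True, False]))  # -> "not_C"
```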

4.1 State S1: Understanding the Request

This state considers whether the receiver of the request understands the request in its entirety. This means that the requester should have provided all the information required to enable the receiver to perform the request in full. The question that was provided in the SEADM was “Do you understand what is requested?”

In the SEADM there was only one question asked here. This has been created as the start state as it is still important to fully understand what is requested before the request can be processed any further. In this state the alphabet is as follows:


• U represents that the request is understood in its entirety and that the receiver has all the necessary information in order to be able to perform the request.

• ¬U represents that the request is not fully understood and that the receiver requires more information about the request.

This state can only transition in one of two ways and is depicted as follows:

• (S1,U,S3) if the request is fully understood.

• (S1,¬U,S2) if more information is required.

4.2 State S2: Requesting information to fully understand the request

In the SEADM this state was previously grouped together with the previous question. This resulted in a loop, as the receiver could always ask the requester to elaborate further. At some point the requester would no longer be able to elaborate on the request as there would be no more additional information available, and the loop would terminate. At what point the loop would terminate was unclear, and could not be formalised into a deterministic state machine without additional complexity.

Representing this task as a distinct state defines this more clearly, as it deals directly with the information that is required to complete the request and not with whether one can request more information. This state considers whether the requester can provide information to such an extent that the receiver is able to fully understand the request. A state transition occurs when either sufficient information is provided, or it is determined that the requester cannot provide sufficient information. Previously the question that was asked was as follows: “Can you ask the requester to elaborate further on the request?” In this state all questions should be aligned with whether the requested information, in the case that additional information is provided, allows the receiver to understand the request in full or not. In this state the alphabet is as follows:

• I represents that the requester can and has provided enough information for the request to be understood in its entirety by the receiver.

• ¬I represents that the receiver is unable to understand the request in full. This may be because the requester could not provide more information, the requester could not be reached (in the case of remote communication), or that the information provided by the requester was insufficient or incomplete.

This state can only transition in one of two ways and is depicted as follows:

• (S2,I,S3) if the requester can provide sufficient information to understand the request.

• (S2,¬I,SF) if the requester is unable to provide sufficient information to understand the request, or cannot be reached.

4.3 State S3: Does the receiver meet the requirements to perform the request?

State S3 is used to determine whether the receiver meets all the requirements to perform the request. This state is associated with three questions in the SEADM. The questions are as follows:

• “Do you understand how to perform the request?”

• “Are you capable of performing or providing the request?”

• “Do you have the authority to perform the request?”

The goal of this state is to ensure that the individual who deals with the specific request has the necessary skill level and the required authority to perform the request. Each question in this state deals directly with the role of the receiver and determines whether the request has been issued to the correct receiver. In this state the alphabet is as follows:

• C represents that the receiver has met all the requirements to be capable of performing the requested action or providing the requested information.

• ¬C represents that the receiver does not meet the requirements to be capable of dealing with the request.

This state can only transition in one of two ways and is depicted as follows:

• (S3,C,S4) if the receiver is able to service the request.

• (S3,¬C,SF) if the receiver is unable or incapable of servicing the request.

4.4 State S4: Does the request have any further requirements that need to be met before the request can be serviced?

State S4 deals with the request itself and whether there are any special conditions or requirements associated with it, such as policies and procedures that need to be followed. Examples of special conditions include whether the request relates to information already in the public domain and accessible to all, or whether the request is a life threatening emergency. If the request relates to information already in the public domain, for instance, there are no further requirements that need to be met. In the event of a life threatening emergency, the outcome depends on whether there are set policies in place to deal with such contingencies. For instance, if there is a policy in place that allows medical personnel to rather err on the side of caution in order to save an individual’s life, there will be no further requirements that need to be met and the request may take place. The previous model dealt with these examples using the following questions:

• “Is the requested action or information available to the public?”

• “Is this a pre-approved request that can be performed to avoid a life-threatening emergency?”

In addition, this state also deals with any further requirements that need to be met. Examples of such requirements are any policies or procedures that are in place that require further verification of the identity or authority of the requester. This state can also cater for unusual requests. An unusual request is a request that is new to the receiver and/or a request that the receiver does not usually deal with on a regular basis. By following the rest of the model, the receiver ensures that adequate information about the requester is obtained before the request is performed. It also allows the receiver time to think about the request and whether the request should be performed for the requester. The previous model dealt with these examples using the following questions:

• “Are there any administrative reasons for refusal?”

• “Are there any procedural reasons for refusal?”

• “Is this an unusual or new type of request?”

• “Are there any other reasons for refusal?”

The goal of this state is to ensure that any request for information that is already public should be performed immediately and that any request that requires verification should result in further interrogation of the requester. In this state the alphabet is as follows:

• R represents that the request has further verification requirements that need to be met in order to be able to perform the requested action or to provide the requested information.

• ¬R represents that the request has no further verification requirements and that the requested action can be performed or the requested information can be provided.

This state can only transition in one of two ways and is depicted as follows:

• (S4,R,S5) if further verification is required.

• (S4,¬R,SS) if no further verification is required.

4.5 State S5: To what extent is the requester’s identity verifiable?

State S5 aims to address the question of the extent to which the requester’s identity is verifiable. The identity of the requester is determined to ensure that the requester has sufficient privileges to request the specific action or information. It is not always possible to verify the identity of the requester in full. The type of communication medium that is utilised by the requester will usually dictate to what extent the identity of the requester can be verified. Typically, if the requester makes the request in person, one is able to verify significantly more information about the requester than would be possible over e-mail or postal mail. It may also be the case that the request is performed over unidirectional communication and that the receiver is unable to receive any further communication from the requester.

The previous model provided a fourth question as to whether the requester’s identity is verifiable at all. The state machine combines not being verifiable at all and having a low level of verification into the same transition, as both lead to SF. The questions that were previously used to perform the verification requirements are as follows:

• “Can the authority level of the requester be verified?”

• “Can the credibility of the requester be verified?”

• “Did you have a previous interaction with the requester?”

• “Are you aware of the existence of the requester?”

All of the questions catered for a single point of verification. It was also noted in the previous model that the level of verification required to transition to different states should be based on the type of environment the model is applied to. The state machine makes this more generalised by having three possible transitions where there is a low, medium or high level of verification. The thresholds for low, medium and high must still be determined based on the environment or context, but the state diagram is more flexible when more questions are added; a brief sketch of one possible calibration is given after the transition list below. In this state the alphabet is as follows:

• VL represents that there is a low level of verification, as only a few of the verification elements could be verified.

• VM represents that there is a medium level of verification, as some of the verification elements could be verified, but not all of them.

• VH represents that there is a high level of verification, as all or most of the verification elements could be verified. Note that edge VH should not be followed if any information provided in a request is inconsistent with, or differs substantially from, established or generally available information.


This state can only transition in one of three ways and is depicted as follows:

• (S5,VL,SF) if only a few verification elements hold.

• (S5,VM,S6) if a moderate amount of verification elements hold.

• (S5,VH,S7) if all or most verification elements hold.
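The SEADM does not prescribe specific threshold values for these transitions. As a purely illustrative calibration, the sketch below counts how many verification elements hold and maps the resulting fraction to VL, VM or VH using cut-off values that would have to be chosen per operating environment.

```python
# Hypothetical calibration of state S5: count how many verification elements
# hold and map the fraction to VL, VM or VH. The cut-off values below are
# placeholders and must be tuned to the operating environment.

def s5_symbol(verified_flags, medium_cutoff=0.5, high_cutoff=0.9):
    """Map verification results (a list of booleans) to an S5 symbol."""
    if not verified_flags:          # nothing could be checked at all
        return "VL"
    fraction = sum(verified_flags) / len(verified_flags)
    if fraction >= high_cutoff:     # all or nearly all elements verified
        return "VH"
    if fraction >= medium_cutoff:   # the majority of elements verified
        return "VM"
    return "VL"                     # most elements could not be verified

# Example: one of four verification elements holds, so verification is low
# and the transition (S5, VL, SF) is taken.
print(s5_symbol([True, False, False, False]))  # -> "VL"
```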

4.6 State S6: Can you verify the requester’s identity from a third party source?

State S6 is only entered when a medium level of verification has been achieved. If the requester could not be fully verified directly, the third party source is utilised to determine whether the information provided by the requester was indeed truthful. The previous model did not elaborate much on third party verification and only had two questions associated with it, as follows:

• “Can you verify the requester through a third party source?”

• “Does the verification process reflect the same information as the verification requirements?”

The supplied questions did not mention specifically which verification requirements needed to be verified. Utilising the extensibility and generality of the state machine, one could build intelligence into the model to only ask such questions when the verification requirements were obtained directly from the requester. In this state the alphabet is as follows:

• T represents that the verification requirements, as obtained from the requester, fully correspond to the information that was obtained from the third party source.

• ¬T represents that the verification requirements, as obtained from the requester, do not fully correspond to the information that was obtained from the third party source.

This state can only transition in one of two ways and is depicted as follows:

• (S6,T,S7) if information supplied by the requester is fully verified by the third party source.

• (S6,¬T,SF) if information supplied by the requester is not fully verified by the third party source.

4.7 State S7: Does the authority level of the requester provide them with sufficient rights to request the action or information?

State S7 utilises all the information obtained throughout the model and asks the receiver questions based on the information obtained. This state aims to determine whether the requester has the necessary authority and rights to request the action or the information. The previous model only asked “Does the requester have the necessary authority to request the action or the information?” This state elaborates on this by allowing the receiver to verify whether the requester has sufficient authority to have the requested action performed or to gain access to the requested information. In this state the alphabet is as follows:

• A represents that the requester has sufficient authority to be allowed to request the receiver to perform the action or to provide the information.

• ¬A represents that the requester does not have sufficient authority and is thus not allowed to request the receiver to perform the action or to provide the information.

This state can only transition in one of two ways and is depicted as follows:

• (S7,A,SS) if the requester has sufficient authority for the request to be processed.

• (S7,¬A,SF) if the requester does not have sufficient authority for the request to be processed.

4.8 State SF: Halt the request

This is the negative result state. In this state the request will be halted. In some environments one could opt to rather defer the request to a more authoritative receiver. Deferring the request may lead to the request never being performed, and can be considered as the request having been halted. In the case where the receiver is part of an organisation, there is the option to refer the request to a more authoritative person in the same organisation. This will allow someone else, who may be better suited, to determine whether to perform or halt the request.

4.9 State SS: Perform the request

This is the positive result state of the model. In this state the receiver is allowed to perform the single request from the requester.

The next section briefly discusses social engineering attack templates, which are then tested against the social engineering attack detection model.


5. APPLICATION OF THE SOCIAL ENGINEERING ATTACK DETECTION MODEL

Previously, social engineering attack templates have been proposed to provide researchers with a set of social engineering attack examples that can be used to verify and make comparisons between models, processes and frameworks within social engineering [12]. All of the templates were derived from real-world social engineering attacks that have been documented in news articles, technical reports, research reports, films or blogs. These sources did not always contain all of the information regarding the social engineering attack. This lack of information was addressed by proposing the templates as a more generalised form of the social engineering attacks provided in the literature [12].

Each template contains the full description of every phase and associated step of the social engineering attack framework, in such a way that each template will provide repeatable results when used for verification and comparison purposes. The templates are also kept as simple as possible so that they can be expanded upon to create more elaborate scenarios with similar principal structures. The templates can also be used to verify or compare other models, processes and frameworks without having to physically perform the attack and potentially cause harm to innocent targets [19]. The rest of this section is dedicated to testing the SEADM against the social engineering attack templates.

In the first scenario, a bidirectional communication template, the social engineer pretends to be someone who works on the management floor and has to convince a cleaner that he is indeed an employee [12]. He requests the cleaner to give him access to the management floor. In the second scenario, a unidirectional communication template, the social engineer attempts to obtain financial gain by sending out paper mail in which the letter requests a group of individuals to make a small deposit into a bank account owned by the attacker [12]. In the third scenario, an indirect communication template, the social engineer attempts to gain unauthorised access to a workstation in an organisation by using a storage medium device [12].

In each scenario the reader is provided with a generic description of the attack as taken from the social engineering attack templates. This generic description is then populated with elements, both subjects and objects, from real-world examples of social engineering attacks, as provided in the discussion of the specific social engineering attack template. Using the generic description, the elements from the real-world examples and the fully detailed flow of the attack as provided in each phase and step of the social engineering attack framework, one is able to devise a social engineering attack scenario. This scenario is then reflective of a real-world example of which every phase and step is fully documented as per the social engineering attack framework. Using the proposed social engineering attack templates, one is able to formulate a social engineering attack scenario that always follows the same process, with regards to phases and steps, whilst the social engineering attack is still representative of a real-world scenario.

The remainder of this section is dedicated to mapping the social engineering attack templates to the states of the SEADM and verifying whether the social engineering attack detection model can assist in detecting these attacks. The following discussions will highlight at which points the SEADM could prevent the success of the social engineering attack if it were properly followed, but will make allowances for failure in following the SEADM to fully explore each scenario. Note that the discussion of each state makes implicit reference to the subsidiary questions defined and described for the state in Section 4.

5.1 Bidirectional Communication Scenario

The generic description for this scenario reads as follows: “This template illustrates an SEA where the attacker attempts to gain physical access to a computerised terminal at the premises of an organisation. The assumption is that when the attacker has once gained access to the computerised terminal, he/she is deemed to have been successful. The attacker is now able to install a backdoor onto the computerised terminal for future and further access from the outside.” This scenario is populated with elements from a real-world example where the social engineer pretends to be someone who works on the management floor and convinces a cleaner of his supposed role. The goal of the attacker is to manipulate the cleaner into granting the social engineer access to the management floor. This allows the social engineer to gain physical access to the computerised terminals on the management floor [20, 21].

In this scenario a social engineer has to convince the cleaner, the receiver, to believe that he is indeed a staff member. In this scenario, the cleaners have full access to the building, yet their security awareness is typically quite low. They are not trained to respond to unusual requests such as giving other employees access to the management floor. If the request is successful, access has been gained to the management floor, and a key logger is deployed onto a workstation. This attack is performed using bidirectional communication because the social engineer communicates with the cleaner and convinces him that the social engineer is allowed to have access to the management floor and the workstations.

S1 – Understanding the Request: The request from the social engineer should clearly state that access needs to be gained to the management floor. The social engineer can also justify to the receiver why access is required to further allow the receiver to understand the request. If the receiver understands the request, edge U is followed to S3. If the receiver does not initially understand the request, the SEADM would move to state S2 via edge ¬U.

S2 – Requesting information to fully understand the request: In the unlikely event that the request is not initially understood, the social engineer would explain his/her request in more detail. As the request itself is straightforward, if unusual, it is assumed that it will ultimately be understood, resulting in a transition (S2,I,S3).

S3 – Does the receiver meet the requirements to perform the request?: In this scenario, the receiver does not have the authority to grant access to the management floor. If the SEADM were followed, appropriate security policy were in place, and the cleaner had sufficient knowledge of said policy, the cleaner would decline the request and refer the social engineer to a more qualified individual, thwarting the attack. This would be reflected in the SEADM by a transition via edge ¬C to the failure state, SF. For the purposes of discussion, and assuming the cleaner is not properly trained and fears reprisals from a more senior employee, the transition (S3,C,S4) is followed.

S4 – Does the request have any further requirements that need to be met before the request can be serviced?: In the scenario, only management and cleaners should have access to the management floor. The requested action is thus not available to the public, or in service of a medical emergency, and is subject to further verification of the requester’s identity. If the SEADM were followed at this point, this would result in the transition (S4,R,S5).

S5 – To what extent is the requester’s identity verifiable?: As bidirectional communication is utilised, the receiver is able to communicate back via face-to-face communication and ask more questions to verify the requester. In this case, the authority principle is utilised and the social engineer mimics an authoritative figure who should have access to the management floor. The pretext utilised during this attack is that the social engineer is part of management and that he or she should have access to the management floor. The receiver is only able to verify the falsified authority level of the social engineer in this scenario. Since only a single verification requirement is met, the transition (S5,VL,SF) would be the most appropriate, resulting in the request being deferred to an individual with more authority, thus thwarting the attack. If the social engineer can provide additional false information, and the cleaner decides to err on the side of caution, the transition (S5,VM,S6) may be followed instead. As the social engineer is not known to the cleaner and is not an actual employee, edge VH should not be followed.

S6 – Can you verify the requester’s identity from a third party source?: The receiver will now have the ability to verify the information from another employee on the management floor. In the case that there are no other employees on the management floor, the transition (S6,¬T,SF) will be taken and the social engineering attack will be thwarted. If it is assumed that there are other employees on the management floor who can be contacted to verify the information, the receiver will be able to ask whether the authority level of the social engineer is indeed true. The other employee will deny this and thus the verification process will show that the information provided is not the same as the verification requirements. Consequently, the transition (S6,¬T,SF) will still be taken, and the social engineering attack will be thwarted.

S7 – Does the authority level of the requester provide them with sufficient rights to request the action or information?: Assuming the receiver is untrained, the SEADM is not followed, and the requester’s identity is incorrectly assumed to be verified, their presumed authority would grant them access to the management floor, resulting in transition (S7,A,SS). If the SEADM were followed, however, the attack would move to the failure state SF twice before this state is first reached.

In this scenario, the SEADM would have detected and thwarted the social engineering attack at two separate points before reaching the final state, S7, in the model.
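The decision points identified above can also be expressed as input sequences for the run function given in the sketch in Section 3. The specific sequences below are illustrative encodings of this scenario rather than part of the templates themselves.

```python
# The bidirectional scenario encoded as input sequences for the earlier
# run() sketch (symbol names as defined there).

# SEADM followed: the cleaner is not authorised to grant access (S3, not_C).
assert run(["U", "not_C"]) == "SF"

# Cleaner proceeds past S3, but only the claimed authority can be verified,
# so verification is low (S5, VL) and the request is deferred.
assert run(["U", "C", "R", "VL"]) == "SF"

# Cleaner errs on the side of caution and checks with a colleague, who
# denies the claim (S6, not_T).
assert run(["U", "C", "R", "VM", "not_T"]) == "SF"

# SEADM not followed: the pretext is accepted at every decision point and
# the attack succeeds.
assert run(["U", "C", "R", "VH", "A"]) == "SS"
```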

5.2 Unidirectional Communication Scenario

The generic description for this scenario reads as follows: “This template illustrates an SEA in which the attacker attempts to obtain financial gain by sending out paper mail. This letter requests a group of individuals to make a small deposit into a bank account owned by the attacker. In this template, the attacker develops a phishing letter that masks the attacker as a charity organisation requesting donations. Once the attacker has received the small deposit from the targeted individual, the SEA is deemed to be successful.” This scenario is populated with elements from the real-world example where the social engineer performs a pretext using postal letters. The social engineer pretends to be various officials, internal employees, employees of trading partners, customers, utility companies or financial institutions, and the social engineer solicits confidential information by using a wide range of persuasive techniques [22].

In this scenario, the attacker will develop a phishing letter that masks the attacker as a charity organisation requesting donations. The phishing letter contains the contact details, the logo and the purpose of the charity to improve the authenticity of the letter. This attack uses unidirectional communication and thus the receiver is not able to communicate with the attacker. The rest of this section maps the scenario to the model.

S1 – Understanding the Request: The letter from the social engineer should clearly state that the receiver is requested to make a donation to the specific charity. The letter will include all the required details because the receiver cannot communicate with the social engineer. In the SEADM, this should result in the transition (S1,U,S3). In the unlikely event that the receiver does not understand the request, the transition (S1,¬U,S2) will be followed.

S2 – Requesting information to fully understand the request: The social engineer would have tried to ensure that the targeted individual fully understands the request. As the attack uses unidirectional communication, the receiver is unable to request further information. As a result, in the SEADM, the only available state transition is (S2,¬I,SF), thwarting the attack.

S3 – Does the receiver meet the requirements to perform the request?: The receiver is the owner of their bank account and has the required information to make the deposit. Given that their understanding of the request has been ascertained, and assuming they have a bank account with an available balance and are both capable and authorised to deposit their money into another bank account, the transition (S3,C,S4) will be followed.

S4 – Does the request have any further requirements that need to be met before the request can be serviced?: The requested action is to make a deposit into the bank account of the requester. This action is only available to the receiver, and is not in service of preventing a life-threatening emergency. Furthermore, this request can be seen as either unusual or new, as the receiver would not usually receive this specific type of letter from the charity. It is additionally likely that the receiver feels uneasy about, or suspicious of, the request, which is itself a reason for refusal. The transition (S4,R,S5) should thus be followed.

S5 – To what extent is the requester’s identity verifiable?: Since unidirectional communication is utilised, the receiver can only verify the identity of the requester using the information provided in the received letter. While the receiver may be aware of the existence of the charity organisation, if the letter provides no additional information for verification purposes (such as contact details), or if information in the charity’s official correspondence or online presence is inconsistent with information provided in the fabricated request, the SEADM indicates that the transition (S5,VL,SF) should be followed, thwarting the attack. If the letter contains the actual contact details of the charity organisation, thereby providing some additional verifiable information, the transition (S5,VM,S6) may be followed instead. As the request is unidirectional and does not specify the charity’s actual bank details, the transition (S5,VH,S7) should not be followed; edge VH indicates very high confidence, which should not be assumed when a request is unusual and provides limited verifiable information.

S6 – Can you verify the requester’s identity from a third party source?: As the charity is well known, the request may be verified by contacting the charity directly. The receiver will make a phone call to the charity to verify the information. If the charity cannot be reached, the transition (S6,¬T,SF) should be followed, thwarting the attack. If the receiver is able to contact the charity directly, the receiver will be able to ask the organisation whether such a letter has in fact been sent out. The charity organisation will deny this, and the verification process will thus show that the information provided does not match the verification requirements. Consequently, the transition (S6,¬T,SF) will be followed and the social engineering attack will be thwarted.

S7 – Does the authority level of the requester provide them with sufficient rights to request the action or information? Assuming that the SEADM was not followed up to this point, and the receiver believes the information provided in the fabricated request, the receiver may consider the requester to have sufficient authority to make the request. If this is the case, the transition (S7,A,SS) may be followed, allowing the attack to succeed. If the SEADM was followed, however, state S7 would not have been reached, preventing this transition from occurring.

While thwarting this form of attack requires some diligence on the part of the receiver, following the SEADM should prevent the SEA from succeeding.
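To make the mapping concrete, the transitions referred to above can be written down directly as a transition table over the SEADM states. The sketch below is a minimal illustration in Python, covering only the edges mentioned in the scenario walkthroughs (the complete SEADM defines further edges); the string edge labels, the NOT_* spelling for negated edges, and the trace helper are illustrative conventions rather than part of the SEADM specification.

```python
# Minimal sketch of the SEADM transitions discussed in this section.
# Only the edges mentioned in the scenario walkthroughs are included; the
# complete SEADM finite state machine defines further edges. Negated edge
# labels (e.g. the "not understood" edge) are written here as NOT_*.
TRANSITIONS = {
    ("S1", "U"): "S3",      # request understood
    ("S1", "NOT_U"): "S2",  # request not understood
    ("S2", "NOT_I"): "SF",  # no further information obtainable: defer/fail
    ("S3", "C"): "S4",      # receiver capable and authorised
    ("S3", "NOT_C"): "SF",  # receiver not capable or not authorised
    ("S4", "R"): "S5",      # request has requirements needing verification
    ("S5", "VL"): "SF",     # requester's identity barely verifiable
    ("S5", "VM"): "S6",     # partially verifiable: consult a third party
    ("S5", "VH"): "S7",     # highly verifiable: check authority level
    ("S6", "NOT_T"): "SF",  # third party cannot confirm the request
    ("S7", "A"): "SS",      # sufficient authority: request is performed
}

def trace(inputs, table=TRANSITIONS, start="S1"):
    """Follow a sequence of edge labels through the (partial) transition table."""
    state, path = start, [start]
    for symbol in inputs:
        state = table[(state, symbol)]
        path.append(state)
    return path

# Phishing-letter scenario: request understood, receiver capable, requirements
# to verify, contact details present (medium verifiability), charity denies
# having sent the letter.
print(trace(["U", "C", "R", "VM", "NOT_T"]))
# ['S1', 'S3', 'S4', 'S5', 'S6', 'SF']
```

In the alternative branch where the letter offers no verifiable information, the input VL at S5 leads to SF directly; in both cases the trace terminates in the failure state and the attack is thwarted.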

5.3 Indirect Communication Scenario

The generic description for this scenario reads as follows: “This template illustrates an SEA in which the attacker attempts to gain unauthorised access to a workstation within an organisation by using a storage device. Once the target has plugged the storage device (in this case a USB flash drive) into the targeted workstation, the SEA is deemed to be successful; the attacker is now able to install a backdoor onto the workstation via the storage device. The social engineer can then proceed to use this workstation as a pivot point for any further attacks on the organisation.” This scenario is populated with elements from the real-world example where the social engineer attempts to gain unauthorised access to a workstation in an organisation by using a storage medium device [23, 24]. This attack is also depicted in a popular television series about penetration testing, Mr. Robot [23].

In this scenario, the organisation does not have a company policy in place that prohibits employees from plugging storage devices into their workstations. The social engineer will leave the device outside the organisation’s building to be found by an employee. The device will be infected with a trojan so that, when it is plugged into a workstation, it opens a backdoor that allows the social engineer to connect to the system remotely. As the storage device is left unattended, this attack utilises indirect communication. The rest of this section maps this scenario to the model.

S1 – Understanding the Request: The storage medium device planted by the social engineer should be marked clearly to indicate that it contains important and/or confidential information. The social engineer expects that the receiver – the employee who finds this device – will either try to return the device to its rightful owner (a benevolent target), or attempt to access the contents of the drive (a malevolent target). The social engineer may attempt to manipulate a specific employee into finding the device, or may settle for any employee. As it is an implicit request, the request should be easily understandable to the target, and the transition (S1,U,S3) should be followed in the SEADM.


S2 – Requesting information to fully understand the request: This state would not typically be entered, as the social engineer would have made certain that the storage medium device is deployed at such a location that only individuals who have access to a workstation, and who understand how such devices work, would find the device. If this is not the case, the attack runs the risk of immediate failure, as the receiver may take the device to a lost-and-found, where the attack is indefinitely halted, or to the IT department, where the attack may be detected. Either of these cases would result in the transition (S2,¬I,SF), a failure state.

S3 – Does the receiver meet the requirements to perform the request?: As it is assumed that the organisation does not prohibit the use of external USB drives on company workstations, and the receiver has access to a workstation that supports the USB attachment, the receiver is capable of performing the request. In most instances, this would be sufficient for the requirements of performing the implicitly requested action, and the transition (S3,C,S4) would be followed. If the drive is labelled as confidential, however, the receiver may be considered to not have sufficient authority to attach the drive. In this instance, the transition (S3,¬C,SF) would be followed in the SEADM, resulting in the attack being halted or thwarted.

S4 – Does the request have any further requirements that need to be met before the request can be serviced?: The implicit request is directed at the receiver to manipulate them into attaching the device to a company workstation, either to return it to its rightful owner or to determine its contents. This action is not available to the public, nor in service of preventing a life-threatening emergency. While there are no administrative or procedural reasons for refusal, due to the lack of a specific company policy, the request would be considered unusual, and so the transition (S4,R,S5) would be followed.

S5 – To what extent is the requester’s identity verifiable?: Since indirect communication is utilised, the only piece of information the receiver has is the physical storage device, which does not provide any external verification information. Following the SEADM, this would result in the transition (S5,VL,SF), resulting in the implicit request being either halted or detected by a more knowledgeable individual, who can safely verify the contents of the storage device on a secure workstation and potentially contact the rightful owner. Neither VM nor VH should be followed in this instance.

S6 – Can you verify the requester’s identity from a third party source?: Although this state should not be reached, there is in any case no available third party to provide verification of the device. The only option in this instance would be to follow the transition (S6,¬T,SF), similarly resulting in the failure of the attack.

S7 – Does the authority level of the requester provide them with sufficient rights to request the action or information? If this state were hypothetically reached, the lack of a specific organisational policy governing the use of external storage media would provide sufficient authority for the implicit request to be performed, resulting in the transition (S7,A,SS) and allowing the attack to succeed. Similar to S6, however, this state should not be reached if the SEADM is properly followed.

This demonstrates how the SEADM may be successfully utilised to prevent an attack when SEAs use indirect communication.
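The indirect scenario can also be traced against the illustrative transition table and trace helper sketched at the end of the unidirectional walkthrough; the input sequences below are assumptions distilled from the steps above and are not part of the SEADM itself.

```python
# Indirect (USB drop) scenario traced against the table sketched earlier.
print(trace(["U", "C", "R", "VL"]))
# ['S1', 'S3', 'S4', 'S5', 'SF']: no verifiable identity, attack thwarted at S5

# Alternative branch: the drive is labelled confidential, so the receiver
# lacks the authority to attach it to a company workstation.
print(trace(["U", "NOT_C"]))
# ['S1', 'S3', 'SF']: attack halted at S3
```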

The preceding three sections have shown, through the use of attack templates, how the SEADM may be applied to help detect and prevent several social engineering attack vectors that utilise bidirectional, unidirectional and indirect communication approaches. The SEADM is, however, only a tool, and is best utilised in conjunction with security awareness and robust security policy, which reinforce and guide its appropriate application and use in a given security context.

The following section concludes the paper with a summary of the advantages of the underlying finite state machine and of how it remains able to thwart the social engineering attack templates.

6. CONCLUSION

The protection of information is extremely important in modern society, and even though the security around information is continuously improving, a weak point is still human actors, who are susceptible to manipulation techniques. This paper explored social engineering as a domain and social engineering attack detection techniques as a process within this domain. To this end, both the previous papers by the authors, Social Engineering Attack Detection Model: SEADM [6] and Social Engineering Attack Detection Model: SEADMv2 [14], were revisited.

Both of the previous iterations of the SEADM focused on improving the accuracy of detection. The models were populated with questions catering for attacks utilising either bidirectional communication, unidirectional communication or indirect communication. Previous work did mention that the model is extensible with additional questions, but was unclear on how and where additional questions should be added.
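As a purely illustrative example of where such an extension could slot in, the partial transition table sketched in Section 5 can be augmented with an organisation-specific question. The state name S5a and the edge labels POLICY_OK and POLICY_BAD below are hypothetical, introduced only to show the mechanics of adding a question within an existing category.

```python
# Hypothetical organisation-specific extension: before consulting a third
# party, ask whether the request conflicts with an internal policy register.
EXTENDED = dict(TRANSITIONS)
EXTENDED[("S5", "VM")] = "S5a"          # reroute the medium-verifiability edge
EXTENDED[("S5a", "POLICY_OK")] = "S6"   # no policy conflict: continue to S6
EXTENDED[("S5a", "POLICY_BAD")] = "SF"  # policy conflict: defer/fail

print(trace(["U", "C", "R", "VM", "POLICY_BAD"], table=EXTENDED))
# ['S1', 'S3', 'S4', 'S5', 'S5a', 'SF']
```

Because every new question is simply an additional state with its own outgoing edges, existing transitions remain untouched and the machine stays deterministic.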

This paper improves on the SEADM by providing the underlying finite state machine, which allows researchers to better understand and utilise the SEADM. Representing the SEADM as a finite state machine allows one to have a more concise overview of the process that is followed throughout the model. The model provided a general procedural template for implementing detection mechanisms for social engineering attacks. The state diagram provides a more abstract and extensible model that highlights the inter-connections between task categories associated with different scenarios. This paper also shows that the finite state machine is both deterministic and correct for all possible input alphabets, simplifying the process of implementing the SEADM either as a process or in software. The state diagram is currently implemented as a mobile application as part of a social engineering prevention training tool [25].

The improved SEADM was further tested against social engineering attack templates, showing that it remains capable of thwarting social engineering attacks. Adapting the SEADM to a finite state machine improved the extensibility of the model without negatively impacting its ability to detect social engineering attacks.

The SEADM, with its underlying finite state machine, can be used as a general framework to protect against social engineering attacks. Even if the model is not adhered to for every request, it will cause one to think differently about requests, which is already a step in the right direction.

REFERENCES

[1] D. Harley, “Re-floating the Titanic: Dealing with social engineering attacks,” in European Institute for Computer Antivirus Research, 1998, pp. 4–29.

[2] L. Laribee, “Development of methodical social engineering taxonomy project,” MSc thesis, Naval Postgraduate School, Monterey, California, June 2006.

[3] K. Ivaturi and L. Janczewski, “A taxonomy for social engineering attacks,” in International Conference on Information Resources Management, G. Grant, Ed. Centre for Information Technology, Organizations, and People, June 2011, pp. 1–12.

[4] F. Mohd Foozy, R. Ahmad, M. Abdollah, R. Yusof, and M. Mas’ud, “Generic taxonomy of social engineering attack,” in Malaysian Technical Universities International Conference on Engineering & Technology, Batu Pahat, Johor, November 2011, pp. 1–7.

[5] P. Tetri and J. Vuorinen, “Dissecting social engineering,” Behaviour & Information Technology, vol. 32, no. 10, pp. 1014–1023, 2013.

[6] F. Mouton, L. Leenen, M. M. Malan, and H. Venter, “Towards an ontological model defining the social engineering domain,” in ICT and Society, ser. IFIP Advances in Information and Communication Technology, K. Kimppa, D. Whitehouse, T. Kuusela, and J. Phahlamohlaka, Eds. Springer Berlin Heidelberg, 2014, vol. 431, pp. 266–279.

[7] J. W. Scheeres, “Establishing the human firewall: reducing an individual’s vulnerability to social engineering attacks,” Master’s thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio, March 2008.

[8] K. D. Mitnick and W. L. Simon, The Art of Intrusion: The Real Stories Behind the Exploits of Hackers, Intruders and Deceivers. Indianapolis: Wiley Publishing, 2005.

[9] J. Debrosse and D. Harley, “Malice through the looking glass: behaviour analysis for the next decade,” in Proceedings of the 19th Virus Bulletin International Conference, September 2009.

[10] G. L. Orgill, G. W. Romney, M. G. Bailey, and P. M. Orgill, “The urgency for effective user privacy-education to counter social engineering attacks on secure computer systems,” in Proceedings of the 5th Conference on Information Technology Education, ser. CITC5 ’04. New York, NY, USA: ACM, 2004, pp. 177–181. [Online]. Available: http://doi.acm.org/10.1145/1029533.1029577

[11] F. Mouton, M. M. Malan, L. Leenen, and H. Venter, “Social engineering attack framework,” in Information Security for South Africa, Johannesburg, South Africa, August 2014, pp. 1–9.

[12] F. Mouton, L. Leenen, and H. Venter, “Social engineering attack examples, templates and scenarios,” Computers & Security, vol. 59, pp. 186–209, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167404816300268

[13] M. Bezuidenhout, F. Mouton, and H. Venter, “Social engineering attack detection model: SEADM,” in Information Security for South Africa, Johannesburg, South Africa, August 2010, pp. 1–8.

[14] F. Mouton, L. Leenen, and H. S. Venter, “Social engineering attack detection model: SEADMv2,” in International Conference on Cyberworlds (CW), Visby, Sweden, October 2015, pp. 216–223.

[15] F. Mouton, M. Malan, and H. Venter, “Development of cognitive functioning psychological measures for the SEADM,” in Human Aspects of Information Security & Assurance, Crete, Greece, June 2012, pp. 40–51.

[16] D. Gragg, “A multi-level defense against social engineering,” SANS Institute InfoSec Reading Room, Tech. Rep., December 2002.

[17] R. Bhakta and I. Harris, “Semantic analysis of dialogs to detect social engineering attacks,” in Semantic Computing (ICSC), 2015 IEEE International Conference on, February 2015, pp. 424–427.

[18] S. S. Epp, Discrete Mathematics with Applications, 4th ed. Brooks/Cole Publishing Co., 2010.

[19] F. Mouton, M. M. Malan, and H. S. Venter, “Social engineering from a normative ethics perspective,” in Information Security for South Africa, Johannesburg, South Africa, August 2013, pp. 1–8.


[20] L. Janczewski and L. Fu, “Social engineering-based attacks: Model and New Zealand perspective,” in Computer Science and Information Technology (IMCSIT), Proceedings of the 2010 International Multiconference on, October 2010, pp. 847–853.

[21] T. Dimkov, A. van Cleeff, W. Pieters, and P. Hartel, “Two methodologies for physical penetration testing using social engineering,” in Proceedings of the 26th Annual Computer Security Applications Conference, ser. ACSAC ’10. New York, NY, USA: ACM, 2010, pp. 399–408. [Online]. Available: http://doi.acm.org/10.1145/1920261.1920319

[22] M. Workman, “A test of interventions for security threats from social engineering,” Information Management & Computer Security, vol. 16, no. 5, pp. 463–483, 2008.

[23] S. Esmail, “eps1.5 br4ve-trave1er.asf,” June 2015, Mr. Robot: Season 1, Episode 6. [Online]. Available: http://www.usanetwork.com/mrrobot/episode-guide/season-1-episode-6-eps15br4ve-trave1erasf

[24] M. Jodeit and M. Johns, “USB device drivers: A stepping stone into your kernel,” in Computer Network Defense (EC2ND), 2010 European Conference on, October 2010, pp. 46–52.

[25] F. Mouton, M. Teixeira, and T. Meyer, “Benchmarking a mobile implementation of the social engineering prevention training tool,” in Information Security for South Africa, Johannesburg, South Africa, August 2017, pp. 106–116.