Iowa State University Digital Repository
Graduate Theses and Dissertations — Iowa State University Capstones, Theses and Dissertations
2021

A game-theoretic framework for contingency analysis in power systems

Hamid Emadi, Iowa State University

Follow this and additional works at: https://lib.dr.iastate.edu/etd

Recommended Citation: Emadi, Hamid, "A game-theoretic framework for contingency analysis in power systems" (2021). Graduate Theses and Dissertations. 18488. https://lib.dr.iastate.edu/etd/18488

This Dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Graduate Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected].
A game-theoretic framework for contingency analysis in power systems
by
Hamid Emadi
A dissertation submitted to the graduate faculty
in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY
Major: Mechanical Engineering
Program of Study Committee:
Sourabh Bhattacharya, Major Professor
Manimaran Govindarasu
Juan Ren
Soumik Sarkar
Sunanda Roy

The student author, whose presentation of the scholarship herein was approved by the program of study committee, is solely responsible for the content of this dissertation. The Graduate College will ensure this dissertation is globally accessible and will not permit alterations after a degree is conferred.
Iowa State University
Ames, Iowa
2021
Copyright © Hamid Emadi, 2021. All rights reserved.
TABLE OF CONTENTS
Page
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
CHAPTER 1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Contingency Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Game Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Zero-sum games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Solution Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 Security Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
CHAPTER 2. SECURITY GAMES WITH ADDITIVE UTILITY . . . . . . . . . . . . . 8
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Problem Formulation: Security Game . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Who is the weakest link?: Special case ka = kd = 1 . . . . . . . . . . . . . . . . . . 12
2.4 Structural Properties of the Optimal Solution . . . . . . . . . . . . . . . . . . . . . 15
2.5 Computation of the Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
CHAPTER 3. ON THE CHARACTERIZATION OF SADDLE POINT EQUILIBRIUM FOR SECURITY GAMES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2 Problem Formulation: Security Game . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 Structural Properties of the Attacker’s Strategy . . . . . . . . . . . . . . . . . . . . 29
3.4 Computation of v∗ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.5 Dual Analysis: Structural Properties of the Defender’s Strategy and Algorithms . . 37
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
CHAPTER 4. AN EFFICIENT COMPUTATIONAL STRATEGY FOR CYBER-PHYSICAL CONTINGENCY ANALYSIS IN SMART GRIDS . . . . . . . . . . . . . . . . . . . . 43
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3 Game-Theoretic Modeling: Optimal defense strategy and maximum impact . . . . 46
4.4 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.4.1 Empirical evaluation of near-modular behaviour of the disturbance value . . 49
4.4.2 Variations in the attacker models . . . . . . . . . . . . . . . . . . . . . . . . 50
4.5 Implementation of Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
CHAPTER 5. NETWORK DESIGN FROM GAME THEORETIC PERSPECTIVE . . . 57
5.1 A sub-graph with minimum game value . . . . . . . . . . . . . . . . . . . . . . . . 57
5.2 Power Network Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
CHAPTER 6. NASH EQUILIBRIUM IN NON-ZERO SUM SECURITY GAMES WITH ADDITIVE UTILITY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2 Problem Formulation: Security Game . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.3 Structural properties of the optimal solution . . . . . . . . . . . . . . . . . . . . . . 71
6.4 Type I Equilibria: Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6.5 Type II Equilibria: Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.6 Computation of the Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.7 Special Case: Zero-Sum Security Game . . . . . . . . . . . . . . . . . . . . . . . . 81
6.8 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
CHAPTER 7. FUTURE WORKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.1 Rank deficiency and approximation security games . . . . . . . . . . . . . . . . . . 85
7.2 Security Games with non-additive utility function . . . . . . . . . . . . . . . . . . . 85
7.3 Node attack model in a power network . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.4 Incomplete-information security games . . . . . . . . . . . . . . . . . . . . . . . . . 86
BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
APPENDIX. RANK DEFICIENCY OF SECURITY GAMES . . . . . . . . . . . . . . . . 95
LIST OF TABLES
Page
Table 1.1 An example of a bimatrix game: Battle of the sexes. . . . . . . . . . . . . . 4
Table 4.1 Links added to the case 5-bus system to ensure 3-edge-connectivity of the grid. . . . 50
Table 4.2 Links added to the case 9-bus system to ensure 3-edge-connectivity of the grid. . . . 50
Table 4.3 Links added to the IEEE 14-bus system to ensure 3-edge-connectivity of the grid. . . 51
Table 6.1 Sets S1, . . . , S9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
LIST OF FIGURES
Page
Figure 4.1 Modified IEEE 5-bus system. Dashed links are added to standard IEEE cases. . . . 50
Figure 4.2 Modified IEEE 9-bus system. Dashed links are added to standard IEEE cases. . . . 51
Figure 4.3 Modified IEEE 14-bus system. Dashed links are added to standard IEEE cases. . . . 52
Figure 4.4 Histogram of e for the (a) 5-bus network, (b) 9-bus network, (c) 14-bus network, (d) 39-bus network. . . . 53
Figure 4.5 Expected outcome of the game for the 5-bus and 9-bus networks against attackers having different capabilities (Section IV.B). . . . 54
Figure 4.6 The augmented 39-bus system used in our simulations. Our link additions to the system are depicted by dashed lines along with the corresponding reactance values. Links shown in green highlight the defender resource allocation for ka = 2, kd = 5. . . . 55
Figure 4.7 The difference between v∗ and the expected outcome of Algorithm 4 for the defender in the 39-bus network. For a specific value of m and kd, the average error has been computed over 10 different sets of φ’s over 100 iterations (it = 100). . . . 56
Figure 5.1 (a) Graph G(ν, ε). (b) Sub-graph G1(ν, ε1). (c) Sub-graph G2(ν, ε2). . . . 58
Figure 5.2 (a) Decrement of v in each iteration. (b) Red and blue profiles show the initial and final distributions, respectively; green intervals are the feasible domain for Φ. . . . 63
Figure 5.3 Histogram of the percentage reduction of the impact. . . . 64
ACKNOWLEDGMENTS
I would like to take this opportunity to express my thanks to those who helped me with various
aspects of conducting research and the writing of this thesis. First and foremost, Dr. Sourabh
Bhattacharya for his guidance, patience and support throughout this research and the writing of
this thesis. His insights and words of encouragement have often inspired me and renewed my hopes
for completing my graduate education. I would also like to thank my committee members for their
efforts and contributions to this work: Dr. Manimaran Govindarasu, Dr. Soumik Sarkar, Dr. Juan
Ren, and Dr. Sunanda Roy. I would also like to thank the NSF CPS program for supporting the
research in this thesis through the grant ECCS 173996.
I would additionally like to thank my group mates Rui Zou, Tianshuang Gao, Burhan Hyder,
Joseph Clanin and Kush Khanna for helpful discussions.
Finally, I would like to thank my family for their love and support.
ABSTRACT
Security is an important aspect of modern large-scale infrastructure networks. On the one hand,
there has been a significant improvement in the efficiency and reliability of systems due to enhanced
interconnection of intelligent devices. On the other hand, this provides an opportunity for strategic
adversaries to exploit the vulnerabilities of the network and cause damage. An example of an
infrastructure system that is a part of our daily lives, and is vulnerable to adversarial attacks is the
power network. Enhancing power grids’ security, performance and resilience has been an important
research topic in engineering. The significant penetration of information and communication technologies (ICT) in modern power systems renders these networks vulnerable to cyber attacks from strategic malicious entities.
According to the North American Electric Reliability Corporation (NERC), every power system should be operated such that the failure of a single component does not leave the remaining components heavily loaded. This is called the N-1 rule in power networks. However, higher order contingency conditions are inevitable due to coordinated cyber attacks or natural calamities. As a result, N-k contingency analysis is an important notion in the security and resilience of power grids. The central challenge of higher order cyber-physical contingency analysis is the exponential blow-up of the attack surface due to the large number of attack vectors. This gives rise to computational challenges in devising efficient attack mitigation strategies. Due to these challenges, most existing tools are effective only in analysing contingencies caused by the failure of one or two components in the power network. Moreover, current tools for contingency analysis do not consider strategic, smart attackers, and instead treat different attack profiles as faults in the system. However, smart attackers can exploit such naive defense strategies and cause significant damage to the system.
In this work, we focus on challenges due to higher order contingency conditions resulting from cyber attacks. Our approach provides efficient tools for higher order cyber-physical contingency analysis in a game-theoretic framework, which enables the operator to exploit the strategic actions of the attacker. Moreover, we provide computationally efficient algorithms to deal with higher order contingency conditions, and to find optimal defense strategies that cannot be exploited by a smart attacker. Our contributions in this thesis are as follows:
Cyber-Physical contingency analysis modeling using Game-Theoretic framework:
We develop a game-theoretic model in which different actions of the attacker and the defender
lead to different contingency conditions. We formulate the problem as a security game. First, we
consider a zero-sum scenario and provide the best algorithm (in terms of computational complexity)
to compute the Saddle-Point strategies for the players. Moreover, we generalize the zero-sum model
to a general-sum security model, in which the payoffs for the players are different, and provide computationally efficient algorithms to compute the Nash Equilibrium in general scenarios. Our results for this part appeared in Emadi and Bhattacharya (2019).
Structural properties of the solutions: We present structural properties of the optimal
attacker and defender strategies in terms of the parameters (impact) of the problem. This helps
us to identify the most important targets that need to be protected against an adversarial attack
in the face of resource constraints. Moreover, unlike state-of-the-art algorithms for solving security games, which are primarily iterative in nature, our algorithm provides deep insight into the relationship between the parameters of the problem and the optimal solutions. Our results for this part appeared in Emadi and Bhattacharya (2020).
A paradigm for robust network design: Using the structural properties of the solutions,
we address the problem of designing the optimal topology and power generation in power networks
to minimize impact in the face of an adversarial attack. To the best of our knowledge, this is the first work that explores such design problems in the context of cyber-physical systems.
IEEE Case studies and evaluation of the proposed solutions: In order to evaluate the
performance of our algorithms, we validate the efficacy of the proposed solutions through extensive
simulations on a number of IEEE standardized power networks. Our results for this part appeared
in Emadi et al. (2021).
CHAPTER 1. INTRODUCTION
A cyber-physical system (CPS) is an engineered system in which a physical system or process is augmented with cyber components such as computational hardware or a communication network. As a result of this integration, CPSs are smart, capable, and efficient. One example of a CPS is the smart grid, which has a physical layer of generators and transmission lines, and a cyber layer consisting of a control center and a communication network that enables remote monitoring and control of the network. Other examples of CPSs are smart cars and intelligent transportation systems. Any CPS should be stable, reliable, resilient, and secure. In this work, we focus on the security of CPSs.
There has been a significant improvement in the efficiency and reliability of systems due to
enhanced interconnection of intelligent devices. However, introducing a cyber layer also adds vulnerabilities to the system; for instance, it increases the number of access points to the CPS. Using standard communication protocols improves system performance on one hand, but on the other hand can be considered a vulnerability of the CPS in the face of cyber attacks. This gives rise to important questions regarding the security of the
network, for example, what are the potential threats to the network, or what are the cost-efficient
defense strategies.
Since attackers can exploit the vulnerabilities of the network to launch a high-impact
low-frequency attack, it is necessary to examine the security of the network from a mathematical
perspective in order to obtain optimal defense strategies against strategic attackers. Game theory
(Basar and Olsder (1999)) is a useful mathematical framework to model adversarial scenarios, and
it provides defense strategies that cannot be exploited by the attacker.
1.1 Contingency Analysis
According to the North American Electric Reliability Corporation (NERC), power grids are required to withstand any single component failure, known as the N-1 contingency standard. However, multiple line failures are also plausible for different reasons, for example natural calamities, maintenance and construction works, or coordinated cyber-physical attacks. Therefore, it is necessary to consider higher order contingency conditions.
The main challenge in higher order contingency analysis is the number of possibilities, which grows exponentially with the dimension of the network and the order of the contingency. To analyze N − k contingencies, one needs to examine all $\binom{N}{k}$ possibilities. For example, for a modest-size power system model with N = 1000, there are 499,500 N − 2 contingencies, 166,167,000 N − 3 contingencies, and over 41 billion N − 4 contingencies.
There has been much research on analyzing contingencies caused by the failure of one or two components of the smart grid (Turitsyn and Kaplunovich (2013), Eppstein and Hines (2012), Enns et al. (1982), Davis and Overbye (2009), Davis and Overbye (2010), Chertkov et al. (2010), Bienstock and Verma (2010), Soltan et al. (2017)).
Kaplunovich and Turitsyn (2016) and Turitsyn and Kaplunovich (2013) proposed an algorithm based on matrix updates for quantifying two-component failures of the power grid (N − 2 contingencies). Subsequently, Kaplunovich and Turitsyn (2014) introduced contingency and influence graphs to study N − 2 contingencies.
Graph-based approaches have been used to detect the most important links and nodes in the
smart grid. Wood et al. (2013) introduced a notion of line outage distribution for quantifying the
most important transmission lines. Soltan et al. (2017) followed the same notion and proposed an
approach to quantify the effect of single line failures and their cascading effects based on the notion of disturbance value in a graph-theoretic framework.
Researchers have also considered contingency analysis of multi-component failures using probabilistic approaches. Chertkov et al. (2010) proposed an algorithm to identify the most probable failure modes in static load distribution using linear programming. Hines et al. (2013) provided an algorithm to identify the collections of multiple contingencies that lead to cascading failures.
Higher order N − k contingency analysis has also been examined in Bienstock and Verma (2010), Chen and McCalley (2005), Poudel et al. (2016), and Soltan et al. (2018). A mixed-integer-based algorithm was proposed by Bienstock and Verma (2010); however, this algorithm lacks scalability for higher order contingencies. In this work, we adapt the approach proposed by Soltan et al. (2018) to identify the impact of multiple link failures in a smart grid. This method is based on the disturbance value. Moreover, they showed that this metric can be approximated by a function with a specific additive property, which leads to a reduction in the computational complexity of solving a game for contingency conditions of higher order.
Current approaches for contingency analysis in power networks do not consider an attacker who possesses the grid’s information, or smart cyber attacks that exploit the vulnerabilities of the system. In order to augment contingency analysis with the notion of a smart cyber attack, we cast contingency analysis in a game-theoretic framework.
1.2 Game Theory
One general class of two-player games is bimatrix games, in which the players have a finite number of alternatives to choose from. A bimatrix game is comprised of two matrices, A, B ∈ R^{m×n}, where rows and columns are associated with player-1’s and player-2’s decisions, respectively. Each pair of entries (aij, bij) denotes the outcome of the game corresponding to the decision of the ith row for player-1 and the jth column for player-2. Each player makes his decision rationally, aiming for the maximum outcome. In spite of the simple formulation of bimatrix games, since there is no cooperation between the players, finding the optimal solution for the players is a challenging decision-making problem.
An equilibrium solution of a non-cooperative bimatrix game is a point at which a deviation by either player from the equilibrium does not result in a better outcome. This is called a non-cooperative Nash Equilibrium, named after the American mathematician John Forbes Nash (Nash (1951)). We illustrate the solution concept of a bimatrix game with an example.
Suppose two players play a game with the payoff matrices represented in Table 1.1. This game has two Nash Equilibria, σ1 = (x1, y1) and σ2 = (x2, y2), with outcomes (2, 1) and (1, 2), respectively.
Table 1.1: An example of a bimatrix game: Battle of the sexes.
y1 y2
x1 (2,1) (-1,-1)
x2 (-1,-1) (1,2)
As shown in the above example, a bimatrix game may have multiple pure Nash Equilibria.
However, there are cases for which a Nash Equilibrium in pure strategies does not exist. In general, the Nash Equilibrium can be defined over the class of mixed strategies, defined as the set of all probability distributions over the set of pure strategies of each player. In other words, a mixed-strategy Nash equilibrium is a profile of mixed strategies such that no player can achieve a better expected outcome by unilaterally deviating from his mixed strategy. In general, computing a Nash equilibrium of a two-player non-zero-sum game is PPAD-complete (Daskalakis et al. (2009), Mehta (2014)). Due to the computational complexity of non-zero-sum games, researchers consider games with special structure in the utility function and different solution concepts.
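The equilibria of the Battle of the Sexes game in Table 1.1 can be verified mechanically. The sketch below (our own illustration, not from the thesis) enumerates the pure-strategy profiles and checks the indifference conditions of the mixed equilibrium:

```python
from itertools import product

# Payoff matrices for the Battle of the Sexes game of Table 1.1
# (rows: x1, x2 for player 1; columns: y1, y2 for player 2).
A = [[2, -1], [-1, 1]]   # player 1's payoffs
B = [[1, -1], [-1, 2]]   # player 2's payoffs

def pure_nash(A, B):
    """Keep the profiles (i, j) where neither player gains by a
    unilateral deviation."""
    eqs = []
    for i, j in product(range(2), range(2)):
        best_row = all(A[i][j] >= A[k][j] for k in range(2))
        best_col = all(B[i][j] >= B[i][l] for l in range(2))
        if best_row and best_col:
            eqs.append((i, j))
    return eqs

print(pure_nash(A, B))  # [(0, 0), (1, 1)] -> outcomes (2,1) and (1,2)

# Mixed equilibrium: each player randomizes so the opponent is
# indifferent. Here p = P(x1) = 3/5 and q = P(y1) = 2/5.
p, q = 3/5, 2/5
u1 = lambda i: q * A[i][0] + (1 - q) * A[i][1]  # player 1's payoff to row i
u2 = lambda j: p * B[0][j] + (1 - p) * B[1][j]  # player 2's payoff to col j
assert abs(u1(0) - u1(1)) < 1e-12 and abs(u2(0) - u2(1)) < 1e-12
```

Both players earn 1/5 at the mixed equilibrium, strictly less than either pure equilibrium, which is the standard coordination tension in this game.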
1.2.1 Zero-sum games
Zero-sum games are a special case of non-zero-sum games in which the utility function of one player is the negative of the other player’s utility function. In other words, for any specific actions of the players, player-1’s payoff is equal to what player-2 loses. The optimal solution of a zero-sum game is also called a saddle-point solution, and the corresponding strategies for the players are saddle-point strategies.
In general, any two-player zero-sum game can be formulated as a linear programming (LP) problem (Basar and Olsder (1999)) of the following form:

\begin{aligned}
\underset{p}{\text{maximize}} \quad & v \\
\text{subject to} \quad & v \le \sum_{i=1}^{n} p_i A_{ij}, \qquad j = 1, \dots, m, \\
& p_1 + \cdots + p_n = 1, \\
& p_i \ge 0 \ \text{ for } i = 1, \dots, n.
\end{aligned} \qquad (1.1)

Solving the above LP provides the saddle-point strategy (p∗) and the outcome of the game (v∗) for player-1 (the row player). The saddle-point strategy of the second player (the column player), q∗, can be computed by solving the dual problem of (1.1) as follows:

\begin{aligned}
\underset{q}{\text{minimize}} \quad & v \\
\text{subject to} \quad & v \ge \sum_{j=1}^{m} A_{ij} q_j, \qquad i = 1, \dots, n, \\
& q_1 + \cdots + q_m = 1, \\
& q_j \ge 0 \ \text{ for } j = 1, \dots, m.
\end{aligned} \qquad (1.2)
Since an LP can be solved in polynomial time, the saddle point of a zero-sum game can be computed in polynomial time. The most efficient known running time for solving a general LP problem is O(n^{2.055}) (Jiang et al. (2020)).
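As a sanity check, LP (1.1) can be assembled and handed to an off-the-shelf solver. The sketch below uses `scipy.optimize.linprog` (our own tooling choice for illustration, not a solver used in the thesis); the decision vector stacks the probabilities p with the value v:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Solve LP (1.1): maximize v subject to v <= sum_i p_i A_ij for
    every column j, with p a probability vector. Variables are
    x = (p_1, ..., p_n, v); linprog minimizes, so we minimize -v."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # minimize -v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])     # v - sum_i p_i A_ij <= 0
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum_i p_i = 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * n + [(None, None)]     # p_i >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Matching pennies: the saddle-point value is 0 with p* = (1/2, 1/2).
p, v = solve_zero_sum([[1, -1], [-1, 1]])
print(p, v)
```

The dual LP (1.2) can be solved the same way with A replaced by −Aᵀ, recovering q∗; this one-LP route is exactly what becomes impractical for security games, whose pure-strategy sets are exponentially large.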
Zero-sum security games can be considered a concise representation of a large-scale zero-sum game. Although there are polynomial-time algorithms for solving zero-sum games, computing the saddle-point solution of a security game in this way is not tractable, because the combinatorial nature of the problem increases its dimension exponentially.
1.2.2 Solution Concepts
The Nash Equilibrium is not the only solution concept in game theory. An alternative solution concept for two-player games is the Stackelberg solution. In this model, one player is the leader and can commit to a mixed strategy. Then, player-2 observes the commitment and picks a strategy. In an adversarial setting, the Stackelberg model is reasonable because the attacker has the opportunity to observe the defender’s day-to-day actions and collect information on the defender’s behavior. One of the advantages of the Stackelberg model is its computational cost, which is polynomial in time (Conitzer and Sandholm (2006)).
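The value of commitment can be seen in a toy general-sum game (our own illustration, with made-up payoffs): the leader's unique Nash payoff is 2, but an optimal mixed commitment earns 3.5, under the usual strong-Stackelberg assumption that the follower breaks ties in the leader's favor.

```python
# Toy game with payoffs (leader, follower); row U strictly dominates D
# for the leader, so the unique Nash outcome is (U, L) with leader payoff 2.
A = [[2, 4], [1, 3]]   # leader payoffs   (rows: U, D; cols: L, R)
B = [[1, 0], [0, 1]]   # follower payoffs

best = -float("inf")
steps = 1000
for k in range(steps + 1):
    p = k / steps                                   # P(leader plays U)
    fb = [p * B[0][j] + (1 - p) * B[1][j] for j in range(2)]
    top = max(fb)
    # Strong Stackelberg convention: follower breaks ties in leader's favor.
    brs = [j for j in range(2) if abs(fb[j] - top) < 1e-12]
    best = max(best, max(p * A[0][j] + (1 - p) * A[1][j] for j in brs))

print(best)  # 3.5, attained by committing to p = 1/2
```

The grid search over commitments is only for illustration; the polynomial-time approach of Conitzer and Sandholm solves one LP per follower best response instead.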
In two-player zero-sum games, Nash equilibrium strategies and Stackelberg strategies both
coincide with minimax strategies (and, hence, with each other), due to von Neumann’s minimax
theorem (Basar and Olsder (1999)).
Another solution concept is the ε-Nash Equilibrium, which is a sub-optimal solution. Lipton et al. (2003) proposed the concept of an approximate ε-Nash equilibrium. They showed that for any two-person game there exists an approximate equilibrium with only logarithmic support (in the number of available pure strategies). Moreover, they provided a quasi-polynomial algorithm for computing such an approximate equilibrium. Lipton et al. (2003) also showed that a special class of games has small support: they proved that if the payoff matrices are of low rank, then the exact optimal solution has small support. This leads to efficient algorithms for computing the Nash Equilibrium in low-rank games. We relate this finding to security games, and will show later that security games with additive utility are rank-deficient games.
1.2.3 Security Games
The classic security game is defined between an attacker and a defender. The attacker has limited resources to attack a set of targets, and the defender likewise has limited resources to protect the targets. In this setting, both players have information on the impact of each target and on the other player’s resources. The payoffs and costs to the players are defined based on their decisions about which targets to attack and protect. The objective of each player is to choose a mixed strategy that satisfies specific conditions under some solution concept, such as the Nash equilibrium or the Stackelberg equilibrium. This problem can be considered a resource allocation problem in a game-theoretic framework. Security games have been applied in many practical applications and by security agencies such as the US Coast Guard and the Federal Air Marshals Service (FAMS) (Tsai et al. (2009)) and the Transportation Security Administration (Chen et al. (2009)), and even in wildlife protection (Fei et al. (2018)) and security at the Los Angeles International Airport (Pita et al. (2008)).
There exist two challenges in classic security models. The first challenge is the computational complexity of the solution due to the combinatorial nature of the problem. The second challenge is accounting for dependencies among target impacts. Xu et al. (2014) showed that the spatial and temporal security game is generally NP-hard, and Wang et al. (2017) showed that security games with non-additive utility functions are NP-hard. To alleviate the latter challenge, researchers consider relaxations of the utility functions of the problem, for instance, imposing an additive property on the utility functions. In other words, the failure impact of a group of targets is taken to be the sum of the impacts of the individual targets.
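The computational payoff of additivity can be sketched directly: when the players randomize independently, the expected impact factorizes over targets, so per-target marginal probabilities replace enumeration over exponentially many set pairs. A small illustration with hypothetical impacts (our own numbers, not from the thesis):

```python
from itertools import combinations

# Hypothetical impacts phi for 4 targets; the attacker hits k_a = 2
# targets and the defender protects k_d = 1. A target contributes its
# impact only if it is attacked and left unprotected (additive utility).
phi = [5.0, 3.0, 2.0, 1.0]
attacks = list(combinations(range(4), 2))
defenses = list(combinations(range(4), 1))

# Uniform mixed strategies for both players (for illustration only).
pa = {S: 1 / len(attacks) for S in attacks}
pd = {D: 1 / len(defenses) for D in defenses}

# Expected impact by full enumeration over pure-strategy pairs.
full = sum(pa[S] * pd[D] * sum(phi[t] for t in S if t not in D)
           for S in attacks for D in defenses)

# Additivity lets us work with per-target marginals instead:
# E = sum_t phi_t * P(t attacked) * P(t unprotected).
x = [sum(pa[S] for S in attacks if t in S) for t in range(4)]
y = [sum(pd[D] for D in defenses if t in D) for t in range(4)]
marginal = sum(phi[t] * x[t] * (1 - y[t]) for t in range(4))

assert abs(full - marginal) < 1e-9
print(full)
```

The two computations agree, but the marginal form needs only one number per target per player, which is the dimension reduction exploited throughout the later chapters.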
In the past decades, there have been numerous algorithms developed for various extensions of the
classical security games; for example the Bayesian security game Paruchuri et al. (2008), and the
security game with uncertain attacker behavior Balcan et al. (2015).
In this work, we focus on security games with additive utility functions. We consider both
zero-sum and non-zero-sum cases, and we provide efficient algorithms to compute the optimal solutions under the Nash equilibrium solution concept.
CHAPTER 2. SECURITY GAMES WITH ADDITIVE UTILITY
2.1 Introduction
Network security has become a ubiquitous concern in modern large-scale networked infrastructures. There
has been a significant improvement in the efficiency and reliability of systems due to enhanced
interconnection of intelligent devices. However, this has opened the door for strategic adversaries
to exploit the vulnerabilities of the network and cause damage. This gives rise to important
questions regarding the security of the network, for example, what are the potential threats to the
network, or what are the cost-efficient defense strategies. In this work, we investigate an asset
protection game between an adversary and a network provider.
In order to minimize the impact of a malicious attack on a large-scale system, resources need to
be efficiently allocated to protect “high-value” targets. Since attackers can exploit the
vulnerabilities of the network to launch a high-impact low-frequency attack, it is necessary to
examine the security of the network from a mathematical perspective in order to obtain optimal
defense strategies. Game theory (Basar and Olsder (1999)) is a useful tool to model adversarial
scenarios. Security games model attack scenarios wherein an attacker attacks a number of targets
while the defender allocates its resources to protect them to minimize the impact. The payoff for
the attacker and the defender is based on the successfully attacked and protected targets,
respectively. Traditionally, attacker-defender games have been modeled as zero-sum games, and
the resulting saddle-point strategies are assumed to be optimal for both players.
The combinatorial nature of security games renders the problem of obtaining optimal strategies
for the players computationally infeasible. Kiekintveld et al. (2009) proposed an algorithm for
randomized security resource allocation by introducing a compact representation for Stackelberg
security game with multiple resources based on a mixed-integer programming formulation. They
consider an attacker that attacks one target, and a defender that has multiple resources to defend
the targets. Bhattacharya et al. (2011) propose an approximation algorithm to compute
Stackelberg strategy for a security game in which the defender tries to minimize the total cost of
the resources. Korzhyk et al. (2011c) show that under a natural restriction on security games
(subsets of defense sets are defense sets), any Stackelberg strategy is also a Nash equilibrium
strategy. Brown et al. (2012) consider a security game with multiple attackers, and provide
approximate algorithms to compute optimal solutions.
Although Stackelberg models have been used in real-world applications (Pita et al. (2008), Jain
et al. (2010)), one of their major drawbacks is the fact that the defender cannot be sure that the
attacker is aware of the defender’s mixed strategy before his/her decision. Yin et al. (2010) and
Korzhyk et al. (2011b) model the uncertainty of the attacker’s knowledge about the defender’s
mixed strategy as part of the game, and propose an iterative algorithm based on alternating
between a Nash equilibrium solver and a Stackelberg solver. Another remedy for Strong Stackelberg
solutions is proposed by Guo et al. (2018) which introduces the solution concept of the inducible
Stackelberg equilibrium to avoid overoptimism. In the past, there has been some work to connect
equilibrium computation in security games to combinatorial optimization. Xu (2016) and Wang
and Shroff (2017) reduced the security game to a combinatorial optimization problem which can
be characterized by a set consisting of the defender’s pure strategies. Moreover, their framework
captures most of the characteristics of the security game models.
In the past decade, there has been extensive research on the complexity of computing the
equilibrium of security games due to their combinatorial nature. Korzhyk et al. (2010) show that
computing the optimal Stackelberg strategy in a security resource allocation game, when the attacker
attacks one target, is NP-hard in general. However, when resources are homogeneous and the
cardinality of the protection set is at most 2, polynomial-time algorithms have been proposed by the
authors. Korzhyk et al. (2010) also propose an LP formulation similar to Kiekintveld's formulation,
and present a technique to compute the mixed strategies in polynomial time. In the presence of
multiple attackers with limited resources, solving the security game becomes significantly more
challenging. Korzhyk et al. (2011a) propose a polynomial-time algorithm for computing a Nash
equilibrium in security games modeled as a non-zero-sum game with multiple attacker resources. Our
work lies in a similar vein. In contrast to the iterative procedure proposed in Korzhyk et al.
(2011a), we derive structural properties of the optimal solution to arrive at a polynomial-time
algorithm to compute the value of a zero-sum security game.
In this work, we assume that the utility function has an additive property, i.e., the total utility of
successfully attacking multiple targets is equal to the sum of the utilities of the individual targets. In
several practical problems, it is possible to approximate the utility function with an additive
utility function (Soltan et al. (2018), Korzhyk et al. (2011c)). However, one may also consider
non-additive utility functions to capture the interdependency between targets. Wang and Shroff
(2017) and Wang et al. (2017) examine the security game with non-additive utilities and multiple
targets. They use the framework proposed by Xu (2016) which shows that a security game is
equivalent to a combinatorial optimization problem over the pure strategies of the defender. They
prove that computing optimal strategies is NP-hard in general, and under some constraints they
propose polynomial-time algorithms.
In this chapter, we address a security game between resource-constrained players in which the
utility function has an additive property. First, we present the problem formulation. Next, we
consider a special case of the problem, for which ka = kd = 1; in other words, each player has only
one resource. We then present the structural properties of the optimal attacker strategy in the
general case, and a polynomial-time algorithm to compute the value of a large-scale zero-sum game.
Finally, we present our conclusions.
2.2 Problem Formulation: Security Game
Consider a two-person zero-sum game on a graph G(ν, ε) containing n vertices and m links, where
ν = {1, . . . , n} and ε = {1, . . . , m} are the vertex set and the edge set, respectively. We assume an
attacker (player 1) chooses ka links to attack, so there are na = (m choose ka) actions for player 1. On the
other hand, the protection budget for links is limited, and we assume that only kd links will be
protected by the defender, so there are nd = (m choose kd) actions for player 2. The defender (player 2)
has no knowledge about the links chosen by player 1. In order to find the optimal strategy for the
players, we formulate a strategic security game (X, Y, A), where X and Y denote the action sets
for the attacker and the defender, respectively, and card(X) = na, card(Y) = nd. Every element of X,
denoted by xi, is the set of attacked links. Similarly, yi ∈ Y is the set of protected links.
Let I = {1, . . . , m}. Each xi ∈ X and yi ∈ Y is a ka-element and kd-element subset of I, respectively.
The attacker has no information about the links that are protected by the defender. Let φi
denote the cost associated with link i, and assume each link is labeled such that
φi ≥ φj for i > j.
We assume that the utility function is additive, i.e., the entries of the cost matrix A are
defined as follows:
aij = Σ_{l ∈ xi ∩ yj^c} φl. (2.1)
A represents the game matrix or payoff matrix for player 1. Since we consider a zero-sum game,
the payoff matrix for player 2 is −A.
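As a concrete illustration of (2.1), the payoff matrix can be generated by enumerating both players' pure strategies directly. The following sketch (plain Python with illustrative names of our own choosing, not code from the dissertation) builds A for a small instance:

```python
from itertools import combinations

def payoff_matrix(phi, ka, kd):
    """Build the attacker's payoff matrix A from (2.1).

    phi[l] is the cost of link l (0-indexed); row i corresponds to a
    choice of ka attacked links, column j to a choice of kd protected
    links, and a_ij sums phi over attacked links that are unprotected.
    """
    m = len(phi)
    X = list(combinations(range(m), ka))   # attacker pure strategies
    Y = list(combinations(range(m), kd))   # defender pure strategies
    A = [[sum(phi[l] for l in x if l not in y) for y in Y] for x in X]
    return A, X, Y

A, X, Y = payoff_matrix([1.0, 2.0, 3.0], ka=1, kd=1)
# For ka = kd = 1 this reproduces the structure of (2.2):
# row i equals phi_i off the diagonal and 0 on it.
```

For instance, the first row here is (0, φ1, φ1): attacking link 1 yields φ1 unless the defender protects that same link.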
Let p, q be the probability vectors representing the mixed strategies for player 1 and player 2,
respectively. The expected utility function is
v = pᵀAq.
According to the minimax theorem, every finite two-person zero-sum game has a saddle point
with value v∗, in mixed strategy p∗ = [p∗_1, . . . , p∗_{na}]ᵀ for player 1 and mixed strategy
q∗ = [q∗_1, . . . , q∗_{nd}]ᵀ for player 2, such that player 1's average gain is at least v∗ no matter what
player 2 does, and player 2's average loss is at most v∗ regardless of player 1's strategy, that is,
pᵀAq∗ ≤ p∗ᵀAq∗ ≤ p∗ᵀAq.
In the next section, we present an algorithm to compute v∗ which is polynomial in m, ka and kd.
2.3 Who is the weakest link?: Special case ka = kd = 1
In this section, we consider the case in which the attacker and the defender each have only one
resource. Intuitively, this problem can be interpreted as finding the weakest target against a
strategic attack. When ka = kd = 1, matrix A has the following form:
A = [ 0        φ1       · · ·    φ1
      φ2       0        φ2       · · ·    φ2
      ⋮                 ⋱                 ⋮
      φ_{m−1}  · · ·    φ_{m−1}  0        φ_{m−1}
      φm       · · ·             φm       0 ],    (2.2)
i.e., the ith row equals φi in every off-diagonal entry and 0 on the diagonal, and
0 < φ1 ≤ φ2 ≤ · · · ≤ φm. (2.3)
We exploit the structure of A to obtain a closed-form solution for the game. First, we
state the following result from Karlin (2003) as a lemma.
Lemma 1. A necessary and sufficient condition for p and q to be optimal strategies of a
non-singular game matrix A is that there exist a square sub-matrix Ā of A such that
1ᵀĀ⁻¹1 ≠ 0,
v = 1 / (1ᵀĀ⁻¹1), (2.4)
p̄ᵀ = v1ᵀĀ⁻¹, p̄ ≥ 0, (2.5)
q̄ = vĀ⁻¹1, q̄ ≥ 0, (2.6)
where p̄, q̄ are the optimal strategies for the matrix game Ā, and correspond to the rows and columns
of A which appear in Ā. The remaining components of p and q are identically zero. Furthermore,
the inner product of p with each column of A deleted to obtain Ā is greater than or equal to v,
and the inner product of q with each deleted row of A is less than or equal to v.
The following theorem presents the optimal strategies for the game matrix (2.2).
Theorem 2. p and q are the optimal mixed strategies of a two-person zero-sum game with game
matrix (2.2), and v is the value of the game, with the following expressions:
p = [0, · · · , 0, p̄ᵀ]ᵀ and q = [0, · · · , 0, q̄ᵀ]ᵀ, each with d leading zeros, where
p̄ᵀ = v1ᵀĀ⁻¹, (2.7)
q̄ = vĀ⁻¹1, (2.8)
v = 1 / (1ᵀĀ⁻¹1), (2.9)
and Ā is the sub-matrix of A obtained by deleting the first d rows and d columns of A. Moreover, d
is the smallest integer which satisfies the following expression:
Σ_{j=d+1}^{m} (1/φ_{d+1} − 1/φj) ≤ 1/φ_{d+1}. (2.10)
Proof. Matrix A can be written in the form
A = Φ(−I + 11ᵀ),
where Φ = diag([φ1, . . . , φm]), and
A⁻¹ = (11ᵀ/(m − 1) − I)Φ⁻¹.
According to Lemma 1, we need to find a sub-matrix of A such that (2.5) and (2.6) hold. We
begin with the sub-matrix Ā = A:
v = 1 / (1ᵀA⁻¹1) = 1 / (1ᵀ(11ᵀ/(m − 1) − I)Φ⁻¹1) = (m − 1) / Σ_{j=1}^{m} (1/φj),
p̄ᵀ = v1ᵀ(11ᵀ/(m − 1) − I)Φ⁻¹ = (v/(m − 1)) [1/φ1, . . . , 1/φm].
All components of p̄ are positive. Next, we compute q̄:
q̄ = v(11ᵀ/(m − 1) − I)Φ⁻¹1,
where the ith component of q̄ is
q̄i = (v/(m − 1)) ((1 − m)/φi + Σ_{j=1}^{m} 1/φj).
If q̄1 is positive, then by the monotonicity of the φi's (2.3), q̄ > 0; hence p = p̄, q = q̄ and there is no
zero component in the optimal mixed strategies (i.e., d = 0). Otherwise, we delete the first row and
first column of A, and check the resulting Ā. Repeating this approach for the Ā obtained by
deleting the first d rows and first d columns of A results in (2.7), (2.8), (2.9) for v, p, q, where
p̄ = (v/(m − d − 1)) [1/φ_{d+1}, . . . , 1/φm]ᵀ.
From the above expression, p̄ > 0. Moreover, the ith entry of q̄ is
q̄i = (v/(m − d − 1)) ((1 − m + d)/φ_{i+d} + Σ_{j=d+1}^{m} 1/φj).
From the monotonicity of the φi's, if q̄1 > 0 then q̄ > 0. Therefore, the above expression is positive
for all i = 1, . . . , m − d if
Σ_{j=d+1}^{m} (1/φ_{d+1} − 1/φj) ≤ 1/φ_{d+1},
which implies that (2.10) needs to be satisfied. The value of the game is
v = 1 / (1ᵀĀ⁻¹1) = (m − d − 1) / Σ_{j=d+1}^{m} (1/φj).
Next, we need to check the inner product of p with the deleted columns, and the inner product of q
with the deleted rows. Without loss of generality, we consider d = 1 (i.e., we delete the first row and
column). The inner product of p with the first column of A is
p̄ᵀ[φ2, . . . , φm]ᵀ = v(m − 1)/(m − 2),
which is greater than v. The inner product of q with the first row of A is
φ1qᵀ1 = (v/(m − 2)) Σ_{j=2}^{m} (φ1/φj), (2.11)
and, since
(1 − m)/φ1 + Σ_{j=1}^{m} 1/φj < 0 implies Σ_{j=2}^{m} (φ1/φj) < m − 2,
(2.11) is less than v.
Based on the above theorem, in order to compute the optimal strategies, we need to find the smallest
d for which (2.10) is satisfied; then p, q and v can be computed directly from the expressions in
Theorem 2.
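This procedure can be sketched directly in code. The snippet below is our illustrative transcription of Theorem 2 with 0-indexed arrays (not code from the dissertation); it scans for the smallest d satisfying (2.10) and returns the value together with both mixed strategies over the m links:

```python
def weakest_link_game(phi):
    """Closed-form solution of the ka = kd = 1 game per Theorem 2.

    phi must be sorted ascending with all entries positive
    (phi[0] is phi_1 of the text). Returns (v, p, q): the value and
    the optimal mixed strategies as probability vectors over links.
    """
    m = len(phi)
    # Smallest d with sum_{j=d+1}^m (1/phi_{d+1} - 1/phi_j) <= 1/phi_{d+1}
    for d in range(m):
        if sum(1 / phi[d] - 1 / phi[j] for j in range(d, m)) <= 1 / phi[d]:
            break
    inv_sum = sum(1 / phi[j] for j in range(d, m))
    v = (m - d - 1) / inv_sum                                   # (2.9)
    p = [0.0] * d + [v / ((m - d - 1) * phi[j]) for j in range(d, m)]
    q = [0.0] * d + [v / (m - d - 1) * ((1 - m + d) / phi[j] + inv_sum)
                     for j in range(d, m)]
    return v, p, q

v, p, q = weakest_link_game([1.0, 2.0, 3.0, 4.0])
# Here d = 1: the cheapest link is never attacked or protected,
# and v = 24/13.
```

For this instance, both players place zero probability on the cheapest link, and the remaining probabilities equalize the products so that every undominated pure reply yields exactly v.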
2.4 Structural Properties of the Optimal Solution
Let v∗ denote the value of the game; then the following holds:
v∗ = max_p min_{1≤i≤nd} (pᵀA)i = min_q max_{1≤j≤na} (Aq)j.
Let (pᵀA)i denote the ith element of pᵀA. From (2.1), (pᵀA)i can be written in the following
form:
(pᵀA)i = Σ_{j=1}^{na} pj aji = Σ_{j=1}^{na} pj Σ_{l ∈ xj ∩ yi^c} φl = Σ_{j ∈ yi^c} αj φj,
where
αj = Σ_{{i | j ∈ xi}} pi. (2.12)
Note that (pᵀA)i ≥ v∗ for all i, and the minimum value of (pᵀA)i is v∗. We say pl ∈ αj when pl
appears on the R.H.S. of (2.12) (i.e., j ∈ xl).
Lemma 3. Let Γ = {α_{i_1}, . . . , α_{i_{ka}}} be an arbitrary set of αi's with cardinality ka.
1. There exists pc such that pc ∈ α_{i_1}, . . . , α_{i_{ka}} and pc ∉ α_{i_{ka+1}}, . . . , α_{i_m}.
2. Given any αr ∉ Γ, there exists αj ∈ Γ such that the pi's can be perturbed to reduce αr by δ > 0
and increase αj by δ without any change in the αi's for i ∈ I \ {r, j}.
Proof. 1) We first prove the special case Γ = {α1, . . . , α_{ka}}. From (2.12), it is clear that p1 is
common to α1, . . . , α_{ka}. Moreover, p1 does not appear in α_{ka+1}, . . . , αm because none of
ka + 1, . . . , m lie in x1. For the general case, it is possible to relabel the φi's such that
Γ = {α′1, . . . , α′_{ka}} = {α_{i_1}, . . . , α_{i_{ka}}}. Therefore, p′1 lies in α′1, . . . , α′_{ka}, and we can
define pc = p′1.
2) To prove this claim, we assume that pc ∈ αr. From part 1, there exists αj such that pc ∉ αj.
Also from part 1, there exists p′c such that p′c ∈ αj, and p′c belongs to all αi's which contain pc
except αr. So by reducing pc by δ and increasing p′c by δ, we are able to reduce αr and increase
αj without affecting the other αi's.
Theorem 4. For ka + kd ≤ m, the optimal solution p∗ and v∗ satisfy the following properties:
1. α∗iφi ≥ α∗jφj for i > j,
2. v∗ = α∗1φ1 + · · · + α∗_{m−kd}φ_{m−kd},
3. α∗iφi = α∗jφj for all i, j ∈ {m − kd, . . . , m}.
Proof. 1) We prove by contradiction. Assume that for the optimal solution there exist i and j with
i > j such that αiφi < αjφj. Since φi ≥ φj, we have αj > αi. From part 2 of Lemma 3, there exist
pd, p′d > 0 such that pd ∈ αj (pd ∉ αi) and p′d ∈ αi (p′d ∉ αj). We show that if pd is decreased by δ
and p′d is increased by δ, then v∗ either increases or remains constant. As a consequence, the initial
assumption is incorrect, and we conclude that αiφi ≥ αjφj. In order to show that v∗ does not
decrease, we analyze an arbitrary (pᵀA)r. Since (pᵀA)r ≥ v∗, v∗ can decrease only when (pᵀA)r
decreases for some r with (pᵀA)r = v∗. If (pᵀA)r contains both αiφi and αjφj, the value of (pᵀA)r
increases when αj is reduced and αi is increased. If (pᵀA)r contains only αiφi, it will increase as
well. If (pᵀA)r contains only αjφj, there exists another (pᵀA)r′ such that
(pᵀA)r′ = (pᵀA)r − αjφj + αiφi. (2.13)
Since we assumed that αjφj > αiφi, (pᵀA)r > (pᵀA)r′ ≥ v∗. Therefore, we can pick δ such that
reducing αj does not reduce v∗. The last case is when (pᵀA)r contains neither αiφi nor αjφj; in
this case v∗ is not affected. Hence the first property holds.
2) Since v∗ = min_i (pᵀA)i, and the (pᵀA)i's are constructed from all (m − kd)-element combinations
of the αjφj's, the second property holds.
3) We prove by contradiction. Assume that there exists i ∈ {m − kd + 1, . . . , m} such that
αiφi > α_{i−1}φ_{i−1}. Since ka + kd ≤ m, there exists Γ = {α_{i_1}, . . . , α_{i_{ka}}} such that
i_1, . . . , i_{ka} ≤ m − kd, and from part 2 of Lemma 3, there exists r ∈ {1, . . . , ka} such that we can
reduce αi and increase αr without affecting the other α's, which in turn increases v∗. Since v∗ is the
value of the game, we reach a contradiction. Therefore, the initial assumption cannot hold, and the
property follows.
Corollary 4.1. If ka + kd ≤ m, the optimal solution satisfies the following:
α∗mφm = · · · = α∗sφs ≥ · · · ≥ α∗_{s−ka+1}φ_{s−ka+1}, (2.14)
for some s ∈ {ka, . . . , m − kd}.
The proof of the above corollary follows from Theorem 4 and Lemma 3. From Lemma 3, we
can conclude that any αj ≠ 0 for j = 1, . . . , s − ka can be reduced to increase v∗. Therefore
α∗1 = · · · = α∗_{s−ka} = 0.
Let Ua and Ud, called the active sets of the attacker and the defender, denote the unions of the xi's
and yi's corresponding to the support sets of p∗ and q∗, respectively.
Corollary 4.2. In a security game (X, Y, A), Ua = {i, i + 1, . . . , m}, where i ∈ {1, . . . , m − ka}.
2.5 Computation of the Value
We consider the game from the attacker's perspective. For Ua = {i, . . . , m}, the following holds:
(p∗ᵀA)j > v∗ for all j such that yj ∩ Ua = ∅.
Consequently,
q∗j = 0 for all j such that yj ∩ Ua = ∅,
since otherwise p∗ᵀAq > v∗. Hence, Ud ⊆ Ua.
According to Corollary 4.2, both players choose strategies that involve the set of links with the
highest impacts (φi). We use this property to reduce the possible scenarios for the attacker by
constructing an (m − kd) × (m − kd) matrix U such that its element (i, i + r), denoted U_{i,i+r}, is
associated with the following condition:
αmφm = · · · = αsφs > α_{s−1}φ_{s−1} ≥ · · · ≥ α_{s−r}φ_{s−r},
s = m − kd − i + 1, (2.15)
r ∈ {0, . . . , ka}, i ∈ {1, . . . , m − kd}. (2.16)
The above condition can be interpreted as Ua = {s − r, s − r + 1, . . . , m} and
Ud = {s, s + 1, . . . , m} for cell U_{i,i+r}. Since Ud ⊆ Ua, U_{i,j} is not a candidate for the solution of
the security game when j < i.
The following theorem relates v∗ to the elements of U.
Theorem 5. v∗ = max_{i,j} U_{i,j}, where the elements of U are defined as follows:
• U_{i,i} = ka i / ci, when ciφs ≥ ka;
• U_{i,i+r} = Σ_{l=s−r}^{s−1} φl + (ka − r) i / ci, when ciφ_{s−r} > i and ciφs ≥ ka − r > ciφ_{s−1};
• U_{i,i+r} = (ka − r − c_{i−1}φs)φ_{s−r} + Σ_{l=s−r+1}^{s−1} φl + iφs, when ciφ_{s−r} ≤ i and
ciφs ≥ ka − r > ciφs − 1;
• otherwise (i.e., when the above conditions are not satisfied), U_{i,j} is not a candidate for the
solution, and this entry is not considered in the max;
where ci = Σ_{j=s}^{m} 1/φj (so that c_{i−1} = ci − 1/φs).
Proof. The diagonal element U_{i,i} corresponds to the following case:
αmφm = · · · = αsφs, αj = 0 for j ∈ {1, . . . , s − 1}. (2.17)
Substituting the above condition into the expression for v∗ in Theorem 4, we obtain the following:
v = Σ_{l=s}^{m−kd} αlφl = iαjφj ⟹ αj = v/(iφj), j ∈ {s, . . . , m}.
From Lemma 3, every pi is present in ka of the αi's. Since Σ_{j=1}^{na} pj = 1, Σ_{j=1}^{m} αj = ka.
Substituting the above expression for αj into Σ_{j=1}^{m} αj, we obtain the following:
Σ_{j=s}^{m} v/(iφj) = ka ⟹ v = ka i / Σ_{j=s}^{m} (1/φj). (2.18)
Let ci = Σ_{j=s}^{m} 1/φj. Substituting v back into the expression for αj leads to the following:
αj = ka/(φj ci) for j ∈ {s, . . . , m}, and αj = 0 for j ∈ {1, . . . , s − 1}.
Since the φj's are in ascending order, αs ≤ 1 implies that αj ≤ 1 for j > s. However, φs ci < ka
implies αs > 1, which contradicts the definition of α in (2.12). Hence, in this case the maximum
value of αs is 1, and therefore
αjφj ci < ka for j = s, . . . , m,
which implies that Σ_{l=1}^{m} αl = ka cannot be satisfied. In other words, U_{i,i} cannot be the optimal
solution when φs ci < ka, and this cell is not a candidate for the solution of the game. Hence we
can set this entry to 0 when ciφs < ka.
The off-diagonal entry U_{i,i+r} corresponds to the following condition:
αmφm = · · · = αsφs > α_{s−1}φ_{s−1} ≥ · · · ≥ α_{s−r}φ_{s−r},
s = m − kd − i + 1, (2.19)
r ∈ {1, . . . , ka}, i ∈ {1, . . . , m − kd − 1}. (2.20)
Substituting the αjφj's from this condition into v = Σ_{j=1}^{m−kd} αjφj leads to the following
expression for v:
v = Σ_{l=s−r}^{s−1} αlφl + iαjφj, j ∈ {s, . . . , m}
⟹ αj = (v − Σ_{l=s−r}^{s−1} αlφl) / (iφj), j ∈ {s, . . . , m}. (2.21)
Since Σ_{l=1}^{m} αl = ka, we obtain the following:
Σ_{l=s−r}^{s−1} αl + Σ_{l=s}^{m} (v − Σ_{j=s−r}^{s−1} αjφj) / (iφl) = ka (2.22)
⟹ v = Σ_{l=s−r}^{s−1} ((ciφl − i)/ci) αl + ka i / ci, (2.23)
where ci = Σ_{j=s}^{m} 1/φj. Next, we consider two cases based on the coefficients of αl in the above
expression.
First, we consider the case in which the coefficients of αl are positive (i.e., ciφ_{s−r} > i). From
(2.23), we can conclude that the maximum value of v occurs at the following values of the α's:
αj = 0 for j ∈ {1, . . . , s − r − 1}, αj = 1 for j ∈ {s − r, . . . , s − 1},
and αj = (ka − r)/(ciφj) for j ∈ {s, . . . , m}. (2.24)
If (ka − r)/(ciφs) ≤ 1, then the entry U_{i,i+r} is a candidate for the solution of the security game.
Substituting (2.24) into (2.23) leads to the following expression for v:
v = Σ_{l=s−r}^{s−1} φl + (ka − r)i/ci. (2.25)
If (ka − r)/(ciφs) > 1, the constraint αj ≤ 1 is violated, and consequently this cell of U cannot be
the solution for the security game.
Next, we consider the case in which some coefficients of αl in (2.23) are negative and
(ka − r)/(ciφs) > 1. In this case, αs should be modified, and the only case which can be a candidate
for the solution of the game is
αj = 0 for j ∈ {1, . . . , s − r − 1}, α_{s−r} = δ, αj = 1 for j ∈ {s − r + 1, . . . , s},
and αj = φs/φj for j ∈ {s + 1, . . . , m}, (2.26)
where δ = ka − r − c_{i−1}φs, which results from Σ_{j=1}^{m} αj = ka. Since 0 < δ ≤ 1, we need
ka − r − c_{i−1}φs > 0, which is equivalent to ciφs − 1 < ka − r.
Moreover, v can be computed from the second part of Theorem 4, which gives the following
expression:
v = δφ_{s−r} + iφs + Σ_{l=s−r+1}^{s−1} φl. (2.27)
From the definition of U, for U_{i,i+r} the active sets are Ua = {s − r, s − r + 1, . . . , m} and
Ud = {s, s + 1, . . . , m}. Each column and row of U corresponds to the attacker's and the defender's
active set, respectively.
Corollary 5.1. v∗ can be computed in O((m − kd)²) time.
The above corollary follows from the fact that U has (m − kd)² entries. Algorithm 1
gives the value of the game and the active links for the attacker and the defender.
Next, we show the feasibility of each cell of U, i.e., that there exists a p which satisfies the values of α
in each cell of U obtained in the previous analysis. In other words, we show that for each cell of U
that can be a candidate for the value of the game, there exists p ≥ 0 with Σj pj = 1 such that
Mp = α, where α = [α1, . . . , αm]ᵀ is obtained from the analysis in the proof of Theorem 5. M is a
matrix of dimension m × (m choose k), and each column has k entries equal to 1 and the rest of the
entries equal to 0. In other words, M is constructed from the (m choose k) placements of k ones in an
m-dimensional vector. We refer to M as a combinatorial matrix, and denote it by M_{[m,k]}.
Lemma 6. Let α = [α1, . . . , αm]ᵀ with Σ_{j=1}^{m} αj = k and 0 ≤ αj ≤ 1. Then α lies in the
convex hull of the columns of M_{[m,k]}. In other words, there exists p ≥ 0 with Σj pj = 1 such that
Mp = α.
Algorithm 1 Computation of the value and the active links
1: Input: φ1, . . . , φm and ka, kd
2: Output: v∗, Ua, Ud
3: Construct U based on Theorem 5
4: for i = 1 : m − kd do
5:   if ciφs ≥ ka then
6:     U_{i,i} = ka i / ci
7:   else
8:     U_{i,i} = 0
9:   end if
10:  for r = 1 : ka do
11:    if ciφ_{s−r} > i and ciφs ≥ ka − r > ciφ_{s−1} then
12:      U_{i,i+r} = Σ_{l=s−r}^{s−1} φl + (ka − r)i/ci
13:    else if ciφ_{s−r} ≤ i and ciφs ≥ ka − r > ciφs − 1 then
14:      U_{i,i+r} = (ka − r − c_{i−1}φs)φ_{s−r} + Σ_{l=s−r+1}^{s−1} φl + iφs
15:    else
16:      U_{i,i+r} = 0
17:    end if
18:  end for
19: end for
20: v∗ ← max U_{i,j}
21: (i∗, j∗) ← arg max U_{i,j}
22: Ua ← {m − kd − j∗ + 1, . . . , m}
23: Ud ← {m − kd − i∗ + 1, . . . , m}
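A direct transcription of Algorithm 1 into code (our illustrative sketch; the variable names and the guards on out-of-range indices are our own) looks as follows, with s = m − kd − i + 1 and ci recomputed for each row:

```python
def algorithm1(phi, ka, kd):
    """Sketch of Algorithm 1 / Theorem 5; phi must be sorted ascending.

    Uses the chapter's 1-based indexing: s = m - kd - i + 1,
    c_i = sum_{j=s}^{m} 1/phi_j, and c_{i-1}*phi_s = c_i*phi_s - 1.
    Returns (v, Ua, Ud): the value and the 1-based active link sets.
    """
    m = len(phi)
    f = lambda j: phi[j - 1]                    # 1-based access into phi
    best_u, best_cell = float("-inf"), None
    for i in range(1, m - kd + 1):
        s = m - kd - i + 1
        ci = sum(1.0 / f(j) for j in range(s, m + 1))
        cells = []
        if ci * f(s) >= ka:                     # diagonal cell U_{i,i}
            cells.append((ka * i / ci, i))
        for r in range(1, ka + 1):              # off-diagonal cells U_{i,i+r}
            j = i + r
            if j > m - kd:                      # outside the matrix U
                continue
            if ci * f(s - r) > i and ci * f(s) >= ka - r > ci * f(s - 1):
                u = sum(f(l) for l in range(s - r, s)) + (ka - r) * i / ci
                cells.append((u, j))
            elif ci * f(s - r) <= i and ci * f(s) >= ka - r > ci * f(s) - 1:
                delta = ka - r - (ci * f(s) - 1)   # = ka - r - c_{i-1}*phi_s
                u = (delta * f(s - r)
                     + sum(f(l) for l in range(s - r + 1, s)) + i * f(s))
                cells.append((u, j))
        for u, j in cells:
            if u > best_u:
                best_u, best_cell = u, (i, j)
    i_star, j_star = best_cell
    Ua = set(range(m - kd - j_star + 1, m + 1))
    Ud = set(range(m - kd - i_star + 1, m + 1))
    return best_u, Ua, Ud

v, Ua, Ud = algorithm1([1.0, 2.0, 3.0, 4.0], ka=1, kd=1)
# Matches Theorem 2's closed form: v = 24/13 with the cheapest link inactive.
```

For ka = kd = 1 this agrees with the closed form of Theorem 2, which provides a convenient cross-check on small instances.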
Proof. The proof is by induction. We assume that the lemma is true for m′ = m − 1 and
k′ = 1, . . . , k. Moreover, M_{[m,k]} can be written as
M_{[m,k]} = [ 1ᵀ             0ᵀ
              M_{[m−1,k−1]}  M_{[m−1,k]} ]. (2.28)
By separating p into p1 and p2, conformally with the block structure of (2.28),
M_{[m,k]} p = [ 1ᵀp1 ; M_{[m−1,k−1]}p1 + M_{[m−1,k]}p2 ]. (2.29)
The second entry of the above matrix can be written in the following form:
α1 M_{[m−1,k−1]} p′1 + (1 − α1) M_{[m−1,k]} p′2, (2.30)
where p1 = α1p′1 and p2 = (1 − α1)p′2. Since we assumed that the lemma is true for m′ = m − 1,
there exist p′1 and p′2 in the simplex such that the following hold:
M_{[m−1,k−1]} p′1 = ((k − 1)/(k − α1)) [α2, . . . , αm]ᵀ, (2.31)
M_{[m−1,k]} p′2 = (k/(k − α1)) [α2, . . . , αm]ᵀ. (2.32)
By substituting the above expressions into (2.29), we conclude that M_{[m,k]} p = α, where p lies in
the simplex. In other words, α lies in the convex hull of the columns of M. In order to complete the
proof, we need to show that the lemma holds for the base cases M_{[k+1,k]} and M_{[m,1]}.
Note that M_{[m,1]} = I_{m×m}; therefore p = α, and Σ_{j=1}^{m} pj = Σ_{j=1}^{m} αj = 1. Next,
M_{[k+1,k]} p = (11ᵀ − I) p = α (2.33)
⟹ p = (11ᵀ − I)⁻¹ α = (11ᵀ/k − I) α = 1 − α. (2.34)
Consequently, the lemma holds for the base cases, which completes the proof.
Finally, we consider the case when ka + kd > m. When k > m/2, the optimal solution satisfies the
following:
αmφm = · · · = α_{ka+1}φ_{ka+1} ≥ α_{ka}φ_{ka} ≥ · · · ≥ α1φ1.
As in the case of ka + kd ≤ m, we can construct a matrix U, with the ith entry on the diagonal
corresponding to:
αmφm = · · · = αsφs, αj = 0 for j ∈ {1, . . . , s − 1}, (2.35)
s = ka − i + 2, i ∈ {1, . . . , ka + 1}.
Since v = Σ_{j=1}^{m−kd} αjφj,
v = 0 for i = 1, . . . , ka + kd − m + 1,
which implies that these cells cannot be candidates for the value of the game. For
i ≥ ka + kd − m + 2, we obtain the following:
αj = ka/(ciφj), v = ka(i − ka − kd + m − 2)/ci. (2.36)
The off-diagonal entry (i, i + r) corresponds to the following condition:
αmφm = · · · = αsφs > α_{s−1}φ_{s−1} ≥ · · · ≥ α_{s−r}φ_{s−r},
s = ka − i + 2, (2.37)
r ∈ {1, . . . , ka}, i ∈ {1, . . . , ka}.
For i ≤ ka + kd − m + 1, none of the entries corresponding to the first ka + kd − m + 1 columns is a
candidate for the value of the game. For the other off-diagonal elements, the analysis is similar to
that of the off-diagonal entries for ka + kd ≤ m.
2.6 Conclusion
In this chapter, we investigated a security game between resource-constrained players. We
formulated a zero-sum game in which the payoff matrix has a special structure resulting from the
additive property of the utility function. We presented structural properties of the optimal attacker
strategy and, based on these properties, proposed a polynomial-time algorithm to compute the value
of the large-scale zero-sum game.
CHAPTER 3. ON THE CHARACTERIZATION OF SADDLE POINT
EQUILIBRIUM FOR SECURITY GAMES
3.1 Introduction
Game theory Basar and Olsder (1999) is a useful tool to model adversarial scenarios. Security
games model attack scenarios wherein an attacker attacks a number of targets while the defender
allocates its resources to protect them and minimize the impact. One of the main questions in the
area of security is how to allocate the defender's limited resources efficiently. The payoff for the
attacker and the defender is based on the successfully attacked
and protected targets, respectively. Traditionally, due to the adversarial nature of the problem,
attacker-defender games have been modeled as zero-sum games, and the resulting saddle-point
strategies are assumed to be optimal for both players. In general, two-player zero-sum game can
be formulated as an linear programming (LP) problem Basar and Olsder (1999), and therefore
saddle-point equilibrium can be computed in polynomial time. The most efficient running time of
solver for a general LP problem is O(n2.055) Jiang et al. (2020). However, solving security games
with more than 2 resources for attacker and defender with a general LP solver is computationally
expensive due to the combinatorial nature of the problem.
In the past two decades, game theory has played an important role in quantifying and analyzing
security in large-scale networked systems. Here, we mention some of these efforts across several
applications. For example, a game-theoretic framework is proposed for the security of smart grids
in Boudko and Abie (2018), Lim et al. (2012), Law et al. (2014) and Shan and Zhuang (2020). In
these models, players can distribute the limited budget over the entire set of nodes of the
network, and consequently the combinatorial nature of the game is relaxed. Authors in Boudko
and Abie (2018) propose an evolutionary game framework that models integrity attacks and
defenses in an Advanced Metering Infrastructure (AMI), modeled as a tree, in a smart grid.
In Lim et al. (2012), a game-theoretic defense strategy is developed to protect sensor nodes from
attacks and to guarantee a high level of trustworthiness for sensed data. Authors in Johnson et al.
(2010), Trajanovski et al. (2015), Hamdi and Abie (2014), Boudko and Abie (2019) and Laszka
et al. (2018) consider the notion of information security in networked systems from a
game-theoretic perspective. Johnson et al. (2010) introduce a method for resolving uncertainty in
interdependent security scenarios in computer network and information security. In Trajanovski
et al. (2015), the authors examine a security game in which each player collectively minimizes the
cost of virus spread while assuring connectivity. In Hamdi and Abie (2014) and Boudko and Abie (2019),
authors propose a game-theoretic model for adaptive security policy and power consumption in
the Internet of Things. In Laszka et al. (2018), authors introduce a game-theoretic framework for
optimal stochastic message authentication, and they provide guarantees for resource-bounded
systems. In Grossklags et al. (2008), the authors focus on notions of self-protection (e.g., patching
system vulnerabilities) and self-insurance (e.g., having good backups) rather than only security
investments in information security games. In Khouzani et al. (2015), the authors propose a
game-theoretic framework for picking vs. guessing attacks in the presence of preferences over the
secret space, and analyze the trade-off between usability and security. For a comprehensive survey
of game-theoretic approaches to security and privacy in computer and communication networks,
the reader may refer to Manshaei et al. (2013).
Security games pose computational challenges in the analysis and synthesis of optimal strategies
due to the exponential increase in the size of the strategy set of each player. A class of security
games which renders tractable computational analysis is that of Stackelberg games Basar (1973). In
Stackelberg models, the leader moves first, and the follower observes the leader’s strategy before
acting. Efforts involving randomized strategies Kiekintveld et al. (2009) and approximation
algorithms Bhattacharya et al. (2011) for Stackelberg game formulation of security games have
been proposed to efficiently allocate multiple resources across multiple assets/targets. In order to
extend the efficient computational techniques for simultaneous move games, efforts have been
made to characterize conditions under which any Stackelberg strategy is also an NE
strategy Korzhyk et al. (2011c). An extensive review of various efforts to characterize and reduce
the computational complexity of Stackelberg games with application in security can be found
in Emadi and Bhattacharya (2019), and references therein. Here, we mention a few that are
relevant to the problem under consideration. Korzhyk et al. (2010) show that computing the
optimal Stackelberg strategy in a security resource allocation game, when the attacker attacks one
target, is NP-hard in general. However, when resources are homogeneous and the cardinality of the
protection set is at most 2, polynomial-time algorithms have been proposed by the authors.
Korzhyk et al. (2010) also propose an LP formulation similar to Kiekintveld's formulation, and
present a technique to compute the mixed strategies in polynomial time.
In Korzhyk et al. (2011a), a security game between an attacker and a defender is modeled as a
non-zero-sum game with multiple attacker resources. The authors analyze the scenario in which
the payoff matrix has an additive structure. They propose an O(m2) iterative algorithm for
computing the mixed-strategy Nash Equilibrium where m is the size of the parameter set.
Motivated by Korzhyk et al. (2011a), in Emadi and Bhattacharya (2019), we analyzed a
zero-sum security game with multiple resources for attacker and defender in which the payoff
matrix has an additive structure. Based on combinatorial arguments, we presented structural
properties of the saddle-point strategy of the attacker, and proposed an O(m2) algorithm to
compute the saddle-point equilibrium and the value of the game, and provided closed-form
expressions for both. In this chapter, we show that a zero-sum security game can be reduced to the
problem of minimizing the sum of the k largest functions over a polyhedral set, which can be
solved in linear time Ogryczak and Tamir (2003). Based on this insight, we use a variational
approach to propose an O(m) algorithm, which is the best possible in terms of complexity.
Moreover, we present structural properties of the saddle-point strategy of both players, and an
explicit expression for the value of the game.
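The reduction can be made concrete with the standard epigraph trick for sums of extreme values: maximizing the sum of the (m − kd) smallest products αiφi over the polyhedron {Σαi = ka, 0 ≤ αi ≤ 1} is itself an LP in the variables (α, t, d) with constraints di ≥ t − αiφi. The sketch below is our own illustrative formulation (it assumes SciPy's linprog is available and is not the chapter's O(m) algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def game_value_lp(phi, ka, kd):
    """Value of the zero-sum security game via a dimension-O(m) LP.

    maximize (m-kd)*t - sum(d)
    s.t. d_i >= t - alpha_i*phi_i, sum(alpha) = ka, 0 <= alpha <= 1,
    which equals the sum of the (m-kd) smallest alpha_i*phi_i at optimum.
    """
    m = len(phi)
    n = 2 * m + 1                              # variables: alpha, t, d
    c = np.zeros(n)
    c[m] = -(m - kd)                           # linprog minimizes, so negate
    c[m + 1:] = 1.0
    A_ub = np.zeros((m, n))                    # t - alpha_i*phi_i - d_i <= 0
    for i in range(m):
        A_ub[i, i] = -phi[i]
        A_ub[i, m] = 1.0
        A_ub[i, m + 1 + i] = -1.0
    A_eq = np.zeros((1, n))
    A_eq[0, :m] = 1.0                          # sum(alpha) = ka
    bounds = [(0, 1)] * m + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[ka],
                  bounds=bounds)
    return -res.fun

# For phi = [1, 2, 3, 4] and ka = kd = 1 this evaluates to 24/13,
# in agreement with the closed form of the previous chapter.
```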
In this chapter, we first present the problem formulation. Next, we present structural properties
of the optimal attacker strategy using a different approach based on variational analysis. Next, we
present a linear-time algorithm to compute the value of a large-scale zero-sum game, which improves
the time required to find the solution presented in the previous chapter. Next, we present
structural properties of the defender's optimal strategy, and a dual algorithm to compute the
value and the equilibrium. Finally, we present our conclusions.
3.2 Problem Formulation: Security Game
Consider a two-person zero-sum game, and let I = {1, . . . , m} denote the set of targets. We assume
an attacker (player 1) chooses ka targets to attack, so there are na = (m choose ka) actions for player 1.
On the other hand, the protection budget for targets is limited, and we assume that only kd targets
will be protected by the defender (player 2), so there are nd = (m choose kd) actions for player 2. The
defender has no knowledge about the targets chosen by player 1. In order to find the optimal
strategy for the players, we formulate a strategic security game (X, Y, A), where X and Y denote
the action sets for the attacker and the defender, respectively, and card(X) = na, card(Y) = nd. Every
element xi ∈ X represents a set of targets that are attacked. Similarly, yi ∈ Y represents a set of
protected targets. Each xi ∈ X and yi ∈ Y is a ka-element and kd-element subset of I, respectively.
The attacker has no information about the targets that are protected by the defender. Let φi
denote the cost associated with target i. Moreover, without loss of generality, we assume that
targets are labeled such that φi ≥ φj ≥ 0 for i > j.
We assume that the utility function is additive, i.e., the entries of the cost matrix A are
defined as follows:
Aij = Σ_{l ∈ xi ∩ yj^c} φl. (3.1)
A represents the game matrix or payoff matrix for player 1. Since we consider a zero-sum game,
the payoff matrix for player 2 is −A. Note that we assume both players have the complete
information of the target costs.
Let p, q be the probability vectors representing the mixed strategies for player 1 and player 2,
respectively. The expected utility function is
v = pᵀAq.
According to the minimax theorem, every finite two-person zero-sum game has a saddle point
with value v∗, in mixed strategy p∗ = [p∗_1, . . . , p∗_{na}]ᵀ for player 1, and mixed strategy
q∗ = [q∗_1, . . . , q∗_{nd}]ᵀ for player 2, such that player 1's average gain is at least v∗ no matter what
player 2 does, and player 2's average loss is at most v∗ regardless of player 1's strategy, that is,
pᵀAq∗ ≤ p∗ᵀAq∗ ≤ p∗ᵀAq.
In order to solve any finite matrix game, we can reduce the game to the following LP problem:
maximize_p  v
subject to  v ≤ Σ_{i=1}^{na} pi Aij,  j = 1, . . . , nd,
            p1 + · · · + p_{na} = 1,
            pi ≥ 0 for i = 1, . . . , na. (3.2)
However, the dimension of the decision variables in the above formulation is (na + 1), which is
exponential in m. In the next section, we present an equivalent LP formulation of dimension m
to compute v∗.
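For very small instances, (3.2) can be handed directly to an off-the-shelf solver. The sketch below (our own illustration, assuming SciPy is available; not code from the dissertation) builds the full matrix A and solves (3.2) with decision vector (p, v), mainly to make the exponential blow-up in na tangible:

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def solve_lp_32(phi, ka, kd):
    """Solve the exponential-size LP (3.2) for a small instance.

    Decision vector z = (p_1, ..., p_na, v); the constraints are
    v - sum_i p_i A_ij <= 0 for every column j, and sum(p) = 1.
    Returns (v_star, p_star).
    """
    m = len(phi)
    X = list(combinations(range(m), ka))
    Y = list(combinations(range(m), kd))
    A = np.array([[sum(phi[l] for l in x if l not in y) for y in Y]
                  for x in X], dtype=float)
    na, nd = A.shape
    c = np.zeros(na + 1)
    c[-1] = -1.0                                   # maximize v
    A_ub = np.hstack([-A.T, np.ones((nd, 1))])     # v <= (p^T A)_j
    A_eq = np.ones((1, na + 1))
    A_eq[0, -1] = 0.0                              # sum(p) = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(nd), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * na + [(None, None)])
    return -res.fun, res.x[:na]

v, p = solve_lp_32([1.0, 2.0, 3.0, 4.0], ka=1, kd=1)
# Already na + 1 = 5 variables here; na = (m choose ka) grows combinatorially.
```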
3.3 Structural Properties of the Attacker’s Strategy
In this section, we investigate the structural properties of the optimal attacker’s strategy. The
value of the game (v∗) can be defined as follows based on the attacker’s mixed strategy p:
v∗ = maxp
min1≤i≤nd
(pTA)i,
where (pTA)i denote the ith element of pTA. From (6.1), (pTA)i can be written in the following
form,
(pTA)i =
na∑j=1
pjaji =
na∑j=1
pj∑
l∈xj∩yicφl =
∑l∈yci
αlφl,
where,
αj =∑{i|j∈xi}
pi =⇒ M[m,ka]p = α, (3.3)
where α = [α1, . . . , αm]ᵀ, and M_{[m,ka]} ∈ R^{m×na} is a combinatorial matrix.¹
Since M_{[m,ka]} is a combinatorial matrix, Σ_{i=1}^{m} αi = ka. Moreover, in the following lemma we
show that for any feasible α there exists a feasible p. Hence, the problem reduces to computing α∗.
In the following, let ei denote the unit vector of dimension m with ith element equal to one, and let
∇αv represent the gradient of v with respect to α.
Lemma 7. M_{[m,ka]} is a surjective mapping.
Proof. Please refer to Emadi and Bhattacharya (2019) for the proof.
Based on the above lemma, the problem reduces to computing α∗.
Lemma 8. α∗ satisfies the following property:
α∗iφi ≥ α∗jφj for i > j (3.4)
where α∗j is defined in (6.7) for p∗.
Proof. Assume the following ordering holds for α∗:

α∗_{im}φ_{im} ≥ · · · ≥ α∗_{i1}φ_{i1}.   (3.5)

Note that v∗ is the (m − kd)-sum of the smallest αlφl, that is, v∗ = ∑_{l=i1}^{i_{m−kd}} α∗lφl. Assume that there exist i and j such that α∗iφi < α∗jφj for i > j. Since φi ≥ φj, this implies α∗j > α∗i. Then α∗i < 1 and α∗j > 0, so (ei − ej)T∇αv|v∗ ≤ 0. Since v∗ is the maximum value of the (m − kd)-sum of the smallest αlφl, we arrive at a contradiction. Therefore, α∗iφi ≥ α∗jφj for i > j.
Corollary 8.1. α∗ and v∗ satisfy the following property:
(a) v∗ = ∑_{l=1}^{m−kd} α∗lφl,

(b) α∗mφm = · · · = α∗sφs > α∗s−1φs−1 ≥ · · · ≥ α∗s−rφs−r, and α∗s−r−1 = · · · = α∗1 = 0,
for 1 ≤ s ≤ max(ka, m − kd) and 0 ≤ r ≤ s − 1.

¹A combinatorial matrix M[m,k] ∈ Rm×C(m,k) is a Boolean matrix containing all combinations of k ones: each column of M has k entries equal to 1 and the rest of the entries equal to 0. In other words, the columns of M are the C(m, k) combinations of k ones in an m-dimensional vector.
Proof. (a) Since v∗ is the (m − kd)-sum of the smallest αlφl, this property follows directly
from Lemma 8.

(b) Let k denote max(ka, m − kd). First, we show that there is an optimal solution such that
α∗mφm = · · · = α∗kφk. We proceed by contradiction: assume that there exists i ∈ {k + 1, . . . ,m}
such that α∗iφi > α∗i−1φi−1. Since ∑_{l=1}^{m} αl = ka, there is j ∈ {1, . . . , k} such that α∗j < 1.
Therefore, (ej − ei)T∇αv|v∗ ≤ 0. If (ei − ej)T∇αv|v∗ < 0, this contradicts the fact
that v∗ is the optimal value; if (ei − ej)T∇αv|v∗ = 0, then there are multiple optimal
solutions, at least one of which satisfies the property. Moreover, from Lemma 8, if α∗mφm = α∗sφs then
α∗mφm = · · · = α∗sφs, which completes the proof.
Let s∗, r∗ denote the indices of the optimal structure expressed in Corollary 8.1. Let Ua and Ud,
called the active sets of the attacker and the defender, denote the unions of the xi's and yi's
corresponding to the support sets of p∗ and q∗, respectively.
Corollary 8.2. In a security game (X ,Y, A), Ua = {s∗ − r∗, . . . ,m}. When s∗ > m− kd, the
defender has a pure strategy with Ud = {m− kd + 1, . . . ,m}, else Ud = {s∗, . . . ,m} (for
s∗ ≤ m− kd).
Proof. The proof of the first part follows directly from the fact that αm, . . . , αs∗−r∗ > 0 and
αs∗−r∗−1 = · · · = α1 = 0.
For the second part, consider (p∗TA)j. The following conditions hold at Ui∗,i∗+r∗:

αmφm = · · · = αs∗φs∗ > αs∗−1φs∗−1 ≥ · · · ≥ αs∗−r∗φs∗−r∗,
αs∗−r∗−1 = · · · = α1 = 0.
When ka + kd ≤ m, (p∗TA)j > v∗ for all j such that {1, . . . , s − 1} ⊈ yj^c. Consequently,

q∗j = 0  ∀j s.t. {1, . . . , s − 1} ⊈ yj^c,

else p∗TAq∗ > v∗, which is a contradiction. Therefore, any q∗j corresponding to yj such that
yj ∩ {1, . . . , s − 1} ≠ ∅ is zero. In other words, Ud = {s∗, . . . ,m}.
Based on similar arguments, we can conclude that Ud = {s∗, . . . ,m} for ka + kd > m and
s∗ ≤ m − kd. When ka + kd > m and s∗ > m − kd, (p∗TA)j > v∗ for all j such that
{s, . . . ,m} ∩ yj^c ≠ ∅, and consequently q∗j = 0. Therefore,

q∗j = 0  ∀j s.t. {s, . . . ,m} ⊈ yj.

Since the defender has kd resources, it has a pure strategy that allocates them to the targets
{m − kd + 1, . . . ,m}.
Remark 1. According to Corollary 8.2, both players choose mixed strategies that involve the
targets with the highest impacts (φi).
3.4 Computation of v∗
Based on Lemma 8, we can solve the following LP to compute v∗:
maximize_{α1,...,αm}   ∑_{l=1}^{m−kd} αlφl
subject to   αiφi ≥ αjφj for all i > j
             ∑_{i=1}^{m} αi = ka
             αi ≤ 1,  i = 1, . . . ,m.   (3.6)
Note that, from Lemma 7, for any α which satisfies the constraints in (3.6) there exists a p on the
simplex such that Mp = α; hence feasibility of α implies feasibility of the corresponding mixed
strategy p.
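As a sanity check on a toy instance, the reduced LP can be solved directly; note that the constraint αiφi ≥ αjφj for all i > j is equivalent to the chain of adjacent-pair constraints when the φi are sorted in increasing order. The sketch below (Python with scipy.optimize.linprog, an illustrative library choice; the instance φ = (1, 2, 3, 4), ka = 2, kd = 1 is hypothetical) has only m decision variables:

```python
import numpy as np
from scipy.optimize import linprog

phi = np.array([1.0, 2.0, 3.0, 4.0])   # impacts sorted in increasing order
m, ka, kd = len(phi), 2, 1

# Objective: maximize sum_{l=1}^{m-kd} alpha_l * phi_l  (linprog minimizes).
c = -np.where(np.arange(m) < m - kd, phi, 0.0)

# Ordering over adjacent pairs: alpha_i phi_i - alpha_{i+1} phi_{i+1} <= 0.
A_ub = np.zeros((m - 1, m))
for i in range(m - 1):
    A_ub[i, i] = phi[i]
    A_ub[i, i + 1] = -phi[i + 1]
b_ub = np.zeros(m - 1)

A_eq = np.ones((1, m))                  # sum_i alpha_i = ka
b_eq = np.array([float(ka)])
bounds = [(0, 1)] * m

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
alpha_star, v_star = res.x, -res.fun
# Optimal value for this instance: 48/13 ≈ 3.6923.
```

On this instance the value 48/13 matches the value of the full C(4, 2) × C(4, 1) matrix game, and the optimizer places no mass on the lowest-impact target, consistent with Corollary 8.2.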
From Corollary 8.1, v∗ and α∗ can be computed by examining all feasible solutions for 1 ≤ s ≤ k
(k = max(ka,m− kd)), and 0 ≤ r ≤ s− 1 which satisfy the condition in Corollary 8.1 (b). Let U
denote a square matrix of dimension k. The (i, i+ r)th entry of U (denoted by Ui,i+r) is the
solution to the following problem:
maximize_{α1,...,αm}   ∑_{l=1}^{m−kd} αlφl
subject to   αmφm = · · · = αsφs > αs−1φs−1 ≥ · · · ≥ αs−rφs−r
             αs−r−1 = · · · = α1 = 0,  s = k − i + 1
             0 ≤ αl ≤ 1,  l = 1, . . . ,m.
The following theorem relates v∗ to the elements of U .
Theorem 9. v∗ = max_{i,j}{Ui,j}, and the entries of U are as follows:

For i ≤ ka + kd − m:
    Ui,i = 0,
    Ui,i+r = ∑_{l=s−r}^{m−kd} φl  when ciφs ≥ ka − r > ciφs−1.

For i > ka + kd − m:
    Ui,i = ka(i − k − kd + m)/ci  when ciφs ≥ ka,
    Ui,i+r = ∑_{l=s−r}^{s−1} φl + (ka − r)(i − k − kd + m)/ci
        when ciφs−r > (i − k − kd + m) and ciφs ≥ ka − r > ciφs−1,
    Ui,i+r = (ka − r − c_{i−1}φs)φs−r + ∑_{l=s−r+1}^{s−1} φl + (i − k − kd + m)φs
        when ciφs−r ≤ (i − k − kd + m) and ciφs ≥ ka − r > ciφs − 1,
    Ui,i+r = 0 otherwise,

where ci = ∑_{j=s}^{m} 1/φj.
Proof. First, we consider the case s ≤ m − kd. Let the optimal solution be α∗ = (α∗1, . . . , α∗m).
Since αi = αs(φs/φi) for i ≥ s, δαi = δαs(φs/φi) for any perturbation δαs. Since ∑ αi = ka, any
allowable perturbation around α∗ satisfies the following condition:

∑_{j=1}^{m} δαj = 0  =⇒  ∑_{j=1}^{s−1} δαj + ciφsδαs = 0,   (3.7)

where ci = ∑_{j=s}^{m} 1/φj. Consider a perturbation that involves perturbing αl for l < s and
αs, . . . , αm. From (3.7), we obtain the following:
δαl = −ciφsδαs (3.8)
Based on the first-order necessary conditions for a maximum, we obtain the following:

δv|v∗ < 0  =⇒  φlδαl + ∑_{j=s}^{m−kd} φsδαs = φlδαl + φs(m − kd − s + 1)δαs < 0.   (3.9)

Let g(φ) = −ciφ + (m − kd − s + 1). Substituting (3.8) in (3.9) leads to the condition g(φl)δαs < 0.
If g(φl) > 0, then δαs < 0, which implies α∗j = 1 for all j < s. If g(φl) < 0 for some l < s, then
δαs > 0, which implies α∗s = 1. As a result, we obtain the following conditions:

α∗j = 1 if g(φj) > 0,
α∗s = 1 if ∃j such that αjg(φj) < 0.   (3.10)
From (3.10), we conclude that α∗ and v∗ can have the following forms:

1.
    αj = 0 for j = 1, . . . , s − r − 1,
    αj = 1 for j = s − r, . . . , s − 1,
    αj = (ka − r)/(ciφj) for j = s, . . . ,m.   (3.11)

From the feasibility conditions in (3.6) (i.e., ∑_{l=1}^{m} αl = ka, αj ≤ 1, and αsφs > αs−1φs−1),
we conclude that the (i, i + r)th entry of U is feasible if ciφs ≥ ka − r > ciφs−1. Substituting
(3.11) in Ui,i+r = ∑_{j=1}^{m−kd} αjφj leads to the following expression for Ui,i+r:

Ui,i+r = ∑_{l=s−r}^{s−1} φl + (ka − r)(i − k − kd + m)/ci.   (3.12)
2.
    αj = 0 for j = 1, . . . , s − r − 1,
    αj = δ for j = s − r,
    αj = 1 for j = s − r + 1, . . . , s,
    αj = φs/φj for j = s + 1, . . . ,m,   (3.13)

where δ = ka − r − c_{i−1}φs, which results from ∑_{j=1}^{m} αj = ka. Since 0 < δ ≤ 1,
0 < ka − r − c_{i−1}φs ≤ 1, which is equivalent to ciφs ≥ ka − r > ciφs − 1.
Moreover, substituting (3.13) in Ui,i+r = ∑_{l=1}^{m−kd} αlφl leads to the following expression for Ui,i+r:

Ui,i+r = δφs−r + (i − k − kd + m)φs + ∑_{l=s−r+1}^{s−1} φl.   (3.14)
3.
    αmφm = · · · = αsφs,  αs ≠ 0,  αj = 0 for j ∈ {1, . . . , s − 1}.

Substituting the above condition in Ui,i+r = ∑_{l=1}^{m−kd} αlφl, we obtain the following:

Ui,i+r = ∑_{l=s}^{m−kd} αlφl = (i − k − kd + m)αjφj   (3.15)

=⇒  αj = Ui,i+r / ((i − k − kd + m)φj),  j ∈ {s, . . . ,m}.   (3.16)

By substituting (3.16) into ∑_{j=1}^{m} αj = ka, we obtain the following:

∑_{j=s}^{m} Ui,i+r / ((i − k − kd + m)φj) = ka  =⇒  Ui,i+r = ka(i − k − kd + m) / ∑_{j=s}^{m} (1/φj).   (3.17)

Let ci = ∑_{j=s}^{m} 1/φj. Next, we have to check whether α satisfies the feasibility conditions
of (3.6). Substituting Ui,i+r in (3.16) leads to the following:

αj = ka/(ciφj) for j ∈ {s, . . . ,m},  αj = 0 for j ∈ {1, . . . , s − 1}.
Finally, we consider the case when ka + kd > m. Since Ui,i+r = ∑_{l=s−r}^{m−kd} αlφl and αj = 0 for
j = 1, . . . ,m − kd, we have Ui,i = 0 for i = 1, . . . , ka + kd − m. Moreover, Ui,i+r can be written as
Ui,i+r = ∑_{l=s−r}^{m−kd} αlφl, and the feasible α's are given as follows:

    αj = 0 for j = 1, . . . , s − r − 1,
    αj = 1 for j = s − r, . . . , s − 1,
    αj = (ka − r)/(ciφj) for j = s, . . . ,m.   (3.18)

If ciφs ≥ ka − r > ciφs−1, then the (i, i + r)th entry of U is feasible. For all i > ka + kd − m, the
arguments are the same as for the case ka ≤ m − kd.
Since v∗ is the maximum value which satisfies all feasibility conditions, v∗ is the maximum entry
of U .
Next, we show that U is a sparse matrix, which leads to a linear time algorithm for computing v∗.
Let U I and U II be square matrices of dimension k defined as follows:

U I_{i,i+r} = Ui,i+r if ciφs−r > (i − k − kd + m) and ciφs ≥ ka − r > ciφs−1, and 0 otherwise;   (3.19)
U II_{i,i+r} = Ui,i+r if ciφs−r ≤ (i − k − kd + m) and ciφs ≥ ka − r > ciφs − 1, and 0 otherwise.   (3.20)
Lemma 10. Given an infeasible cell in U I , either all the cells to the right (in the same row) or
all the cells below (in the same column) are infeasible.
Proof. Consider an infeasible cell (i, i+ r) in U I . For a cell to be infeasible, at least one of the
three inequalities in (3.19) needs to be violated.
(a) First, consider the case ciφs−1 ≥ ka − r ⇒ ciφs−1 ≥ ka − r′, ∀r′ ≥ r. In other words, if
ciφs−1 ≥ ka − r, there is no feasible solution in (i, i+ r′)th entry of U I for all r′ ≥ r.
(b) Next, consider the case, ka − r > ciφs. Since ci+1φs−1 = ciφs−1 + 1,
ka − r + 1 > ci+1φs−1 ⇒ (i+ 1, i+ r)th entry of U I cannot be feasible. Since i is arbitrary,
we can conclude that (i+ j, i+ r)th entry of U I cannot be feasible for all j ≥ 1.
(c) Finally, consider the case in which the inequality ciφs−r > (i− k − kd +m) is the only one
that is violated at (i, i+ r)th entry of U I . Therefore,
ciφs ≥ ka − r > ciφs−1 ⇒ ka − r + 1 > ci+1φs−1, and consequently, there is no feasible
solution in (i+ j, i+ r)th entry of U I for all j ≥ 1. Therefore, any column of U I contains at
most one feasible (non-zero) entry.
Corollary 10.1. At most one cell in a column of U I is feasible.
Proof. The proof follows directly from the arguments for Lemma 10 (c).
Theorem 11. v∗, α∗ can be computed in O(k) time.
Proof. From Lemma 10 and Corollary 10.1, we can conclude that from a current cell (i, j) in U I ,
one needs to search either in cell (i+ 1, j) or cell (i, j + 1) to find the next feasible element.
Therefore, a linear search (O(k)) that alternates between rows and columns leads to the cell
containing the maximum element.
Next, we show that all feasible entries in U II can be computed in O(k) time. For each row i,
there is at most one r which satisfies ciφs ≥ ka − r > ciφs − 1 in (3.20). Therefore, for each row in
U II , we can find the feasible cell in constant time. This implies that all feasible entries in U II can
be computed in O(k) time, and a linear or a logarithmic search among the feasible entries
provides the maximum element.
Algorithm 2 gives v∗, α∗, and the active targets for the attacker and the defender in linear time.
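The search pattern used in the proofs above can be abstracted as follows: feasible cells form a monotone staircase, and each probe either reports a feasible cell or tells the search to move down a row or right a column. The sketch below (Python; the probe oracle and the toy diagonal matrix are hypothetical stand-ins for the actual feasibility conditions (3.19)) finds the maximum feasible entry in O(k) probes:

```python
def staircase_max(k, probe):
    """Search a k x k matrix whose feasible cells form a monotone staircase.

    probe(i, j) returns ('feasible', value), ('down', None), or ('right', None);
    after a feasible cell we move right, since each column holds at most one
    feasible entry (Corollary 10.1). Total probes: O(k).
    """
    best = None
    i = j = 0
    while i < k and j < k:
        kind, val = probe(i, j)
        if kind == 'feasible':
            best = val if best is None else max(best, val)
            j += 1
        elif kind == 'down':
            i += 1
        else:
            j += 1
    return best

# Toy oracle: feasible cells sit on the diagonal with values 1, 5, 3.
vals = [1.0, 5.0, 3.0]
def probe(i, j):
    if i == j:
        return ('feasible', vals[i])
    return ('down', None) if j > i else ('right', None)

best = staircase_max(3, probe)   # finds 5.0
```

The real algorithm derives the move direction from which inequality in (3.19) is violated, rather than from an oracle.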
3.5 Dual Analysis: Structural Properties of the Defender’s Strategy and
Algorithms
In this section, we present structural results for the optimal strategy of the defender and give
an O(m) algorithm to compute v∗ and its corresponding optimal strategy. From the definition of
v∗, we obtain the following:
v∗ = min_q max_{1≤j≤na} (Aq)j,

where (Aq)j denotes the jth element of Aq. From (6.1), (Aq)j can be written in the following form:

(Aq)j = ∑_{i=1}^{nd} qi aji = ∑_{i=1}^{nd} qi ∑_{l∈xj∩yi^c} φl = ∑_{l∈xj} βl φl,

where

βj = ∑_{i|j∈yi^c} qi  =⇒  β = M[m,m−kd] q,   (3.21)

where β = [β1, . . . , βm]T and M[m,m−kd] ∈ Rm×nd is a combinatorial matrix. Since M[m,m−kd] is
a combinatorial matrix, ∑_{i=1}^{m} βi = m − kd. Moreover, from Lemma 7, for any feasible β there
exists a feasible q.
The following lemma provides the structure of β∗.
Lemma 12. β∗ satisfies one of the following conditions:
(a) φs−r ≥ β∗sφs = · · · = β∗mφm ≥ φs−r−1, and β∗1 = · · · = β∗s−1 = 1,
(b) φs−r ≥ β∗sφs = · · · = β∗mφm = φs−r−1, and β∗s−1φs−1 ≥ β∗sφs,
and β∗1 = · · · = β∗s−2 = 1,
where 1 ≤ s ≤ m, 0 ≤ r ≤ s− 1, r + 1 ≤ ka ≤ r +m− s.
Proof. Let the sequence {i1, . . . , im} of indices satisfy the following condition:

β∗_{i1}φ_{i1} ≥ · · · ≥ β∗_{im}φ_{im}.   (3.22)

Note that v∗ = ∑_{l=i1}^{ika} β∗lφl. First, we show that β∗_{ika}φ_{ika} = β∗_{ika+1}φ_{ika+1}.
Assume that 1) β∗_{ika}φ_{ika} > β∗_{ika+1}φ_{ika+1}, and 2) there exists an i ∈ {ika, . . . , im}
such that β∗i < 1. Since (ei − e_{ika})T∇βv|v∗ < 0 at v∗, we arrive at a contradiction. Now, assume
β∗_{ika+1} = · · · = β∗_{im} = 1. Since ∑_{l=1}^{m} βl = m − kd and kd ≥ 1, there exist
i, j ∈ {i1, . . . , ika}, i > j, such that β∗i, β∗j < 1. Therefore, (ej − ei)T∇βv|v∗ < 0, and we arrive
at a contradiction. Therefore, β∗_{ika}φ_{ika} = β∗_{ika+1}φ_{ika+1}. In a similar manner, we can
show that β∗mφm = β∗_{ika}φ_{ika}.
Next, we prove that, among the indices i with β∗iφi ≠ β∗mφm, there is at most one j with β∗j < 1
and β∗jφj > β∗mφm, and the rest of the β∗i's are 1. Assume that there are β∗j < 1, β∗k < 1 such
that β∗jφj ≠ β∗mφm and β∗kφk ≠ β∗mφm. If β∗kφk, β∗jφj > β∗mφm and j > k, then
(ek − ej)T∇βv|v∗ < 0, and we arrive at a contradiction. Now, assume that β∗jφj < β∗mφm and
β∗j < 1; then (ej − ei)T∇βv|v∗ < 0 for all i such that β∗iφi > β∗jφj, which leads to a contradiction.
Next, we prove that β∗ always satisfies one of the conditions in the lemma. Let
Γ = {i | β∗iφi = β∗mφm}. First, we prove that for all j ∈ Γ, if β∗j < 1 then j + 1 ∈ Γ. To begin with,
assume that β∗j < 1, β∗jφj = β∗mφm, and j + 1 ∉ Γ. If β∗_{j+1}φ_{j+1} > β∗jφj, then
(ej − e_{j+1})T∇βv|v∗ < 0, which leads to a contradiction. Moreover, if β∗_{j+1}φ_{j+1} < β∗jφj, then
β∗_{j+1} < 1 and (e_{j+1} − ei)T∇βv|v∗ < 0 for all i such that β∗_{j+1}φ_{j+1} < β∗iφi. This completes
the proof for the first structure in the lemma. Now let j = min(Γ) and β∗j = 1. Then, for any
i ∈ Γ, either β∗i < 1 ⇒ i + 1 ∈ Γ, or β∗i = 1 and φj = φi. The last condition leads to the second
structure in the lemma.
Similar to the analysis for the attacker, from the above Lemma, we can compute v∗ and β∗ by
examining all possible solutions which satisfy conditions (a) or (b) in Lemma 12. Let W be a
square matrix of dimension m.
Theorem 13. v∗ = min_{i,j}{Wi,j}, where the entries of W are defined as follows:

Wi,i+r = (ka − r)(i − kd)/ci + ∑_{l=s−r}^{s−1} φl
    for s − 1 ≥ r ≥ 0, r + m − s ≥ ka ≥ r + 1, ciφs−r ≥ i − kd ≥ ciφs−r−1,

Wi,i+r = (i − kd + 1 − ciφs−r−1)φs−1 + (ka − r)φs−r−1 + ∑_{l=s−r}^{s−2} φl
    for s − 1 ≥ r ≥ 0, r + m − s ≥ ka ≥ r + 1, ciφs−r−1 + 1 > i − kd + 1 ≥ c_{i+1}φs−r−1,

Wi,i+r = +∞ otherwise,

where ci = ∑_{j=s}^{m} 1/φj.
Proof. The first case corresponds to structure (a) in Lemma 12. In this case, β1 = · · · = βs−1 = 1.
Since ∑_{l=1}^{m} βl = m − kd and βsφs = · · · = βmφm, we obtain the following expression for βj:

βj = (i − kd)/(ciφj),  j = s, . . . ,m,   (3.23)

where i = m − s + 1 and ci = ∑_{j=s}^{m} 1/φj. Next, we provide the feasibility conditions for structure
(a). Since 0 ≤ βj ≤ 1, ciφs ≥ i − kd. Moreover, φs−r ≥ βsφs ⇒ ciφs−r ≥ i − kd. Additionally,
βsφs ≥ φs−r−1 ⇒ i − kd ≥ ciφs−r−1. Note that the ka largest terms βlφl contain at least one term
in the set {βsφs, . . . , βmφm}. Therefore, r + m − s ≥ ka ≥ r + 1. By substituting (3.23) into
Wi,i+r = ∑_{l=i1}^{ika} βlφl,

Wi,i+r = (ka − r)(i − kd)/ci + ∑_{l=s−r}^{s−1} φl.   (3.24)
The second case corresponds to structure (b) in Lemma 12. In this case, β1 = · · · = βs−2 = 1.
Since ∑_{l=1}^{m} βl = m − kd and βsφs = · · · = βmφm = φs−r−1, we obtain the following:

βj = φs−r−1/φj,  j = s, . . . ,m,   (3.25)
βs−1 = i − kd + 1 − ciφs−r−1,   (3.26)

where i = m − s + 1 and ci = ∑_{j=s}^{m} 1/φj. Next, we provide the feasibility conditions for structure
(b). Since βs−1φs−1 ≥ φs−r−1 and 0 ≤ βs−1 < 1, ciφs−r−1 + 1 > i − kd + 1 ≥ c_{i+1}φs−r−1. Note
that the ka largest terms βlφl contain at least one term in the set {βsφs, . . . , βmφm}. Therefore,
r + m − s ≥ ka ≥ r + 1. By substituting (3.25), (3.26) into Wi,i+r = ∑_{l=i1}^{ika} βlφl,

Wi,i+r = (i − kd + 1 − ciφs−r−1)φs−1 + (ka − r)φs−r−1 + ∑_{l=s−r}^{s−2} φl.   (3.27)
Since W is a square matrix of dimension m, v∗ can be computed in O(m²). As in the case of the
attacker, we can show that W can be computed in O(m) due to the sparsity of W (i.e., the
feasibility conditions).
Theorem 14. v∗ can be computed in O(m).
Proof. Let W a and W b denote matrices of the following form:

W a_{i,i+r} = Wi,i+r if it satisfies structure (a) in Lemma 12, and 0 otherwise;   (3.28)
W b_{i,i+r} = Wi,i+r if it satisfies structure (b) in Lemma 12, and 0 otherwise.   (3.29)
First, note that the feasible entries of W a and the feasible entries of W b are disjoint due to the
complementary feasibility conditions (i − kd ≥ ciφs−r−1 in W a, and i − kd < ciφs−r−1 in W b).
Moreover, since ciφs−r ≥ i − kd ≥ ciφs−r−1, for any s there is at most one r which satisfies the
conditions of W a; hence any row of W has at most one feasible entry of W a. This implies that the
computation of all feasible entries of W a is in O(m).
Next, we show that computing all feasible entries of W b is in O(m). From the second structure of
Lemma 12, for any s, r, and i = m − s + 1, if βs−1 > 1 (respectively βs−1 < 0), then the
(i′, i′ + r′)th entry of W is infeasible for all i′ > i, r′ > r (respectively i′ < i, r′ < r), since it
implies β′s−1 > 1 (respectively β′s−1 < 0). Therefore, at every entry of W, βs−1 provides a criterion
by which the rest of the entries in the same row and the same column are entirely infeasible, so it
is not required to check the feasibility of those entries. In other words, the value of βs−1
determines the direction in which to search for feasible entries of W b. Thus, computing the
feasible entries of W b is in O(m).
3.6 Conclusion
In this chapter, we addressed a security game modeled as a zero-sum game in which the utility
function has the additive property. We analyzed the problem from both the attacker's and the
defender's perspectives and provided necessary conditions for the optimal solutions, which yield
the structural properties of the saddle-point strategies for both players. Using these structural
properties, we arrived at a linear-time algorithm and semi-closed-form solutions for computing the
saddle points and the value of the game.
Algorithm 2 Computation of the value, and active targets
1: Input: φ1, . . . , φm and ka, kd
2: Output: v∗, α∗, Ua, Ud
3: Construct U based on Theorem 9 and Theorem 11.
4: i1 ← 1
5: for j = 1 : m − kd do
6:   for i = i1 : j do
7:     if ciφs ≥ ka − r > ciφs−1 then
8:       if ciφs−r > i − k − kd + m then
9:         Ui,i+r = ∑_{l=s−r}^{s−1} φl + (ka − r)(i − k − kd + m)/ci
10:      end if
11:      i1 ← i
12:      return i
13:    else if ka − r > ciφs then
14:      i1 ← i
15:      return i
16:    else
17:      Ui,i+r = 0
18:      i1 ← i
19:    end if
20:  end for
21: end for
22: for i = 1 : k do
23:   find r such that ciφs ≥ ka − r > ciφs − 1
24:   if ciφs−r ≤ i − k − kd + m, and ciφs ≥ ka − r > ciφs − 1 then
25:     Ui,i+r = (ka − r − c_{i−1}φs)φs−r + ∑_{l=s−r+1}^{s−1} φl + (i − k − kd + m)φs
26:   else
27:     Ui,i+r = 0
28:   end if
29: end for
30: v∗ ← max Ui,j
31: (i∗, j∗) ← arg max Ui,j
32: Ua ← {k − j∗ + 1, . . . ,m}
33: Ud ← {k − i∗ + 1, . . . ,m}
CHAPTER 4. AN EFFICIENT COMPUTATIONAL STRATEGY FOR
CYBER-PHYSICAL CONTINGENCY ANALYSIS IN SMART GRIDS
4.1 Introduction
Analyzing the impact of component failures is critical to successful design, monitoring and control
of power grids, communication networks and other cyber-physical systems. Modern power
networks are designed to meet the N − 1 reliability criterion wherein no single line failure
critically impacts the functioning of the grid. The problem of identifying and planning for
1-component contingencies has been extensively studied (for example, in Zhao et al. (2018) Du
et al. (2019) Coelho et al. (2018) Li et al. (2016)). Vulnerability to cyber attacks is an
unavoidable consequence of the transition to the smart grid. The possibility of such attacks has
made larger-scale component failures more probable and has elucidated the need for consideration
of k-component failures to enhance the cyber-security and resiliency of the system Ashok et al.
(2017). Due to the inherent combinatorial complexity of considering all possible k-component
failures, exhaustive and exact N − k cyber-physical contingency analysis in large grids requires
exponential time. Indeed, for real-time applications, such analysis in even moderately-sized grids
may be computationally infeasible.
In the past, techniques such as heuristic pruning algorithms Hasan et al. (2017), cutting plane
algorithms Chen et al. (2014), and probabilistic methods Che et al. (2017) Bagheri and Zhao
(2019) Chen and McCalley (2005) Sundar et al. (2018) have been proposed to reduce the search
space of k-component contingencies. Although each of these methods provides a significant
improvement over the enumerative approach, they still suffer from some inherent inefficiencies.
For example, the pruning and cutting plane algorithms evaluate a number of non-critical
contingencies and some of the probabilistic methods identify only a subset of critical contingencies
Chen and McCalley (2005) or require the use of predetermined component failure data Sundar
et al. (2018). An alternate approach proposed in Poudel et al. (2016) performs low-order
contingency analysis on a reduced grid obtained by comparing the topological and electrical
structure of the original grid. However, this reduction can fail to account for the impact of
contingencies involving single lines with high power flows. Game-theoretic models have also been
proposed in the past to address contingency analysis and investment strategies Hyder and
Govindarasu (2020). They include formulations of contingency analysis as a simultaneous-move
game Emadi and Bhattacharya (2019) as well as a Stackelberg game Bienstock and Verma (2010);
Sundar et al. (2018).
Our current work is based on Emadi and Bhattacharya (2020) which presents a linear-time
algorithm to obtain saddle-point strategies for zero-sum games with additive utility functions.
Based on the approximately modular Fujishige (2005) behaviour of the disturbance value
function associated with k-line failures Soltan et al. (2017), we cast the cyber-physical
contingency analysis of a power network as an additive zero-sum game. First, we
formulate the problem of k-line contingency analysis as a security game, review the notion of
disturbance value introduced in Soltan et al. (2017), and describe its additive approximation.
Next, we frame the linear program used to solve the security game as in Emadi and Bhattacharya
(2019). Next, we simulate contingency analysis on four small networks obtained by augmentation
of the standard 5-, 9-, 14-, and 39-bus systems, and compare the results obtained under the
additivity assumption by the methods of Emadi and Bhattacharya (2020) and Emadi and
Bhattacharya (2019) with the actual contingency impacts. Finally, a heuristic method to
implement the defender resource allocation on a grid according to the results is proposed and
simulated on the augmented 39 bus system.
4.2 Problem Formulation
Consider a power grid modeled as a graph G(ν, ε), where the set of vertices ν = {1, . . . , n}
corresponds to the buses, and the set of edges ε = {e1, . . . , em} corresponds to the transmission
lines between the buses. We assume that the transmission lines are purely reactive. The weight of
edge ei ∈ ε is yei = 1/xei, where xei > 0 is the reactance of the transmission line corresponding to
ei. Let P ∈ Rn denote the power supply/demand vector whose ith entry is positive when there is
generation at bus i, negative when there is load at bus i, and zero when i is a neutral node. We
consider an arbitrary direction for the edges and define D ∈ Rn×m as the directed incidence
matrix of G. Let f denote the vector of power flows in each line in the adopted direction of the
edges in G. We consider a lossless and balanced network; in other words, each line is purely
reactive and ∑_{i∈ν} Pi = 0. Under the DC power flow approximation, the power flow equation is
given by f = Y DTL†P, where L ∈ Rn×n is the Laplacian matrix associated with the weighted
graph G, L† denotes the pseudo-inverse of L, and Y = diag(y1, . . . , ym) is the diagonal matrix of
the reciprocals of the transmission line reactances.
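As a minimal numerical illustration of the power flow equation f = Y DTL†P, consider a 3-bus triangle with unit reactances (a Python/NumPy sketch; the network is a hypothetical toy example, not one of the test systems used later):

```python
import numpy as np

# 3-bus triangle: edges (1,2), (2,3), (1,3), each with unit reactance.
edges = [(0, 1), (1, 2), (0, 2)]
x = np.array([1.0, 1.0, 1.0])          # line reactances
y = 1.0 / x                            # line weights y_e = 1/x_e
n, m = 3, len(edges)

D = np.zeros((n, m))                   # directed incidence matrix
for e, (u, v) in enumerate(edges):
    D[u, e], D[v, e] = 1.0, -1.0

Y = np.diag(y)
L = D @ Y @ D.T                        # weighted Laplacian
P = np.array([1.0, -1.0, 0.0])         # generation at bus 1, load at bus 2

f = Y @ D.T @ np.linalg.pinv(L) @ P    # DC power flows
# Two-thirds of the 1 p.u. transfer takes the direct line (1,2); one-third
# detours through bus 3, so f ≈ [2/3, -1/3, 1/3] in these edge orientations.
```

The flows split inversely to the path reactances (1 vs. 2), and the node balance D f = P holds.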
Next, we describe the attack scenario and the attack surface for which the contingency analysis is
being performed. We consider a transmission line attack in which an attacker breaches the access
points through the cyber layer and trips the relays associated with individual transmission lines.
In order to secure the power network from such attacks, additional security measures need to be
deployed at the access points vulnerable to such attacks. However, due to limitations in the
attacker's resources, we assume that the attacker can trip at most ka < m transmission lines.
Similarly, due to limitations in the cyber-defense budget, the defender can deploy additional
security measures on at most kd < m lines. Therefore, there are na = C(m, ka) and nd = C(m, kd)
actions for player 1 and player 2, respectively. We assume that neither player has any information
about the strategy of the other player. However, information about the network (topology, line
reactances) is common knowledge between the players.
In this work, we consider the disturbance value, initially introduced in Soltan et al. (2017), as a
metric of the impact of a k-link failure. Let Ki = {e1, ..., ek} denote a set of link failures. Let φKi
denote the disturbance value associated with failure of links in Ki. In Soltan et al. (2017), the
authors show that φKi can be approximated by
φKi ≈ ∑_{ej∈Ki} fj² rj / (1 − yjrj),   (4.1)

where rj is the equivalent reactance between the end nodes of line ej. As a result, high-risk
contingencies can be quickly identified by focusing only on lines with high 1-line disturbance
values. This approximation is shown to have an approximation error of less than 10% and a
correlation of 0.98 with the actual disturbance value in simulations of 3-line failures that do not
disconnect the grid in the IEEE 118- and 300-bus systems.
high disturbance values allowed the authors to reduce the space of contingencies in their
simulations on these grids by more than 90%.
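The 1-line disturbance values in (4.1) can be computed directly from the network data: rj is the equivalent reactance between the end nodes of ej, obtainable from the pseudo-inverse of the Laplacian. The sketch below (Python/NumPy; the 3-bus triangle is a hypothetical toy network, and only the approximation (4.1) is evaluated, not the exact disturbance value of Soltan et al. (2017)):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]       # 3-bus triangle, unit reactances
y = np.ones(3)
n, m = 3, 3

D = np.zeros((n, m))                   # directed incidence matrix
for e, (u, v) in enumerate(edges):
    D[u, e], D[v, e] = 1.0, -1.0
L_dag = np.linalg.pinv(D @ np.diag(y) @ D.T)

P = np.array([1.0, -1.0, 0.0])
f = np.diag(y) @ D.T @ L_dag @ P       # DC flows, as in Section 4.2

# r_j: equivalent reactance between the end nodes of line e_j.
r = np.array([D[:, e] @ L_dag @ D[:, e] for e in range(m)])

# 1-line disturbance values phi_j = f_j^2 r_j / (1 - y_j r_j), eq. (4.1).
phi = f**2 * r / (1.0 - y * r)
# In the triangle every r_j = 2/3, so phi = 2 f^2 = [8/9, 2/9, 2/9].
```

The φj then rank the individual lines by the impact of their failure, which is what the security game below consumes.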
Given the actions of the two players and the impacts associated with any given pair of actions,
the contingency analysis problem reduces to finding the optimal strategies of the defender.
4.3 Game-Theoretic Modeling: Optimal defense strategy and maximum
impact
In order to find the optimal strategy for the defender, we formulate a strategic security game
(X ,Y, A), where X and Y denote the action sets for attacker and defender, respectively, and
card(X ) = na, card(Y) = nd. Every element xi ∈ X represents a set of attacked links. Similarly,
yi ∈ Y represents a set of protected links. Each xi ∈ X is a ka-element subset of ε, and each
yi ∈ Y is a kd-element subset of ε. Let A represent the game matrix or payoff matrix for player 1. Since we
consider a zero-sum game, the payoff matrix for player 2 is −A. The element in row i and column
j, Aij , represents the payoff to the attacker when the defender and attacker choose yj and xi,
respectively.
In the previous section, we defined φKi as the impact associated with a k-link failure based on
disturbance values and presented an expression for it (4.1). This can be considered as the
payoff/utility for the attacker when it is successful in attacking links in Ki. Since the utility
function has the additive property, the entries of the cost matrix A are defined as follows:

Aij = ∑_{l ∈ xi∩yj^c} φl.   (4.2)
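Given the 1-line values φl, the payoff matrix (4.2) can be enumerated directly for a small instance (a Python sketch; the values φ = (1, 2, 3, 4) with ka = 2, kd = 1 are illustrative). Each entry sums the impacts of attacked lines that are left unprotected:

```python
from itertools import combinations
import numpy as np

phi = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}   # 1-line disturbance values
m, ka, kd = 4, 2, 1

X = list(combinations(range(m), ka))      # attacker actions: ka-subsets
Y = list(combinations(range(m), kd))      # defender actions: kd-subsets

# A_ij = sum of phi_l over attacked lines l left unprotected, eq. (4.2).
A = np.array([[sum(phi[l] for l in set(x) - set(yv)) for yv in Y]
              for x in X])
# e.g. attacking lines {0,2} while line 2 is defended pays only phi_0.
```

The resulting na × nd matrix can be fed to the LP formulation (4.3), or bypassed entirely via the m-dimensional LP (4.4).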
Let p (q) denote the probability vector representing the mixed strategies for player 1 (player 2).
The expected utility (impact to the network) when attacker (player 1) and defender (player 2)
play mixed strategies p and q, respectively, is v = pTAq. According to the minimax theorem,
every finite two-person zero-sum game has a saddle point with value v∗, in mixed strategy
p∗ = [p∗1, . . . , p∗na]T for player 1 and mixed strategy q∗ = [q∗1, . . . , q∗nd]T for player 2, such that the
average gain of player 1 is at least v∗ no matter what player 2 does and the average loss of player
2 is at most v∗ regardless of the strategy of player 1. That is
pTAq∗ ≤ p∗TAq∗ ≤ p∗TAq.
Every finite matrix game can be reduced to the following LP problem,
maximize_p   v
subject to   v ≤ ∑_{i=1}^{na} pi Aij ,  j = 1, . . . , nd
             p1 + · · · + pna = 1,  pi ≥ 0 ∀i.   (4.3)
However, the dimension of the decision variable in the above formulation is na + 1, which is
exponential in m. Based on our previous work in Emadi and Bhattacharya (2020) on games with
additive utility, (4.3) can be converted to a new LP in m variables and m constraints as follows:

maximize_{α1,...,αm}   ∑_{l=1}^{m−kd} αlφl
subject to   αiφi ≥ αjφj for all i > j
             ∑_{i=1}^{m} αi = ka,  αi ≤ 1,  i = 1, . . . ,m,   (4.4)
where

αj = ∑_{i|j∈xi} pi,   βj = ∑_{i|j∈yi^c} qi.   (4.5)

In the above equations, α = [α1, . . . , αm]T and β = [β1, . . . , βm]T can be interpreted as the attack
and exposure probability vectors, respectively. For instance, αj is the sum of all pi for which
target j lies in the action set xi. Since the utility function has the additive property, the payoff for
each player is the sum of the expected outcomes for the individual targets (i.e., attack probability
× impact × exposure probability). Consequently, we can define the optimality conditions in terms
of (α, β): (α∗, β∗) is a NE of a security game (X ,Y, A) if and only if no feasible deviation from α∗
(β∗) leads to a better payoff for the attacker (defender).
Algorithm 3 Computation of Impact (v∗) and Optimal Defender Strategy (β∗)
1: Input: φ, m, ka, kd
2: Output: v∗ and β∗
3: for i = 1 : m do
4:   for j = 1 : m do
5:     s = m − i + 1, r = j − i, ci = ∑_{l=s}^{m} 1/φl
6:     if (s − 1 ≥ r ≥ 0) ∧
7:        (r + m − s ≥ ka ≥ r + 1) ∧
8:        (ciφs−r ≥ i − kd ≥ ciφs−r−1) then
9:       Wi,j = (ka − r)(i − kd)/ci + ∑_{l=s−r}^{s−1} φl
10:      case ← case I
11:    else if (s − 1 ≥ r ≥ 0) ∧
12:       (r + m − s ≥ ka ≥ r + 1) ∧
13:       (ciφs−r−1 + 1 > i − kd + 1 ≥ c_{i+1}φs−r−1) then
14:      Wi,j = (i − kd + 1 − ciφs−r−1)φs−1 +
15:             (ka − r)φs−r−1 + ∑_{l=s−r}^{s−2} φl
16:      case ← case II
17:    else
18:      Wi,j = ∞
19:    end if
20:  end for
21: end for
22: v∗ ← min W, (i∗, r∗, s∗) ← argmin W
23: if case = case I then
24:   β∗1 = · · · = β∗s∗−1 = 1, β∗j = (i∗ − kd)/(ci∗φj), j = s∗, . . . ,m
25: else if case = case II then
26:   β∗1 = · · · = β∗s∗−2 = 1
27:   β∗s∗−1 = i∗ − kd + 1 − ci∗φs∗−r∗−1
28:   β∗j = φs∗−r∗−1/φj , j ∈ {s∗, . . . ,m}
29: end if
Algorithm 3 presents a technique to compute β∗ and v∗. It takes as input the parameters of the
problem (φ, m, ka, kd) and provides the optimal resource allocation strategy for the defender. The
overall algorithm runs in O(m) steps, which is a significant improvement over an exponential-time
algorithm. Details regarding the complexity and completeness of the algorithm can be obtained in
Emadi and Bhattacharya (2020).
4.4 Simulation Results
In this section, we provide a number of simulations on power networks to show that the impact of
k-line failures can be approximated by the sum of the individual 1-line failure impacts. That is, we
demonstrate that the additivity property is a valid assumption in these networks. Furthermore,
we examine the scenario in which players solve the exact game versus the scenario in which the
approximated solution obtained via the additive property is taken. We explicitly compare the
payoffs to the players in these scenarios.
4.4.1 Empirical evaluation of near-modular behaviour of the disturbance value
In this section, we evaluate the validity of the additivity approximation in (4.1). We define the
approximation error as follows:

e = (|φKi − ∑_{j∈Ki} φj| / φKi) × 100.   (4.6)

We examine four networks with 5, 9, 14, and 39 buses. In order to obtain networks with
edge-connectivity of 3, we augment the standard 5-, 9-, 14-, and 39-bus networks¹ with additional
edges. Tables 4.1, 4.2, 4.3 and Figures 4.1, 4.2, 4.3, and 4.6 provide details regarding the
additional edges. In the new networks, any ka = 2 attack is guaranteed not to island the grid.
Figure 4.4 shows the histogram of e over all possible 2-line failures for the four networks. From
the figure, we can observe that the gap between the approximation in (4.1) and the actual value
closes as the size of the network grows. Intuitively, we expect larger networks to have smaller
approximation error, since the failure of a group of links has less effect on the rest of the network
as the size of the network grows. Moreover, the structure of the matrices involved in the
calculation of the disturbance value Soltan et al. (2017) further supports this hypothesis.

¹5-bus PJM example from Rui Bo, 9-bus example case from Chow, IEEE 14-bus case, 39-bus New England case. All cases are examined in MATPOWER 7.1.
Figure 4.1: Modified IEEE 5-bus system. Dashed links are added to standard IEEE cases.
Table 4.1: Links added to the 5-bus system to ensure 3-edge-connectivity of the grid.

Link Additions to 5-bus Network
End buses of ei    xei
1,3                0.02
2,4                0.01
5,3                0.02
Table 4.2: Links added to the 9-bus system to ensure 3-edge-connectivity of the grid.
Link Additions to 9-bus Network
End Buses of e_i    x_{e_i}    End Buses of e_i    x_{e_i}
1,2                 0.085      9,7                 0.085
1,3                 0.161      7,5                 0.176
2,3                 0.12       9,5                 0.176
4.4.2 Variations in the attacker models
In this section, we consider the following variations of the attacker model while the defender is
assumed to implement β∗:
Figure 4.2: Modified IEEE 9-bus system. Dashed links are added to standard IEEE cases.
Table 4.3: Links added to the IEEE 14-bus system to ensure 3-edge-connectivity of the grid.
Link Additions to IEEE 14-bus Network
End Buses of e_i    x_{e_i}    End Buses of e_i    x_{e_i}
1,3                 0.34       10,14               0.085
3,8                 0.19       14,11               0.34
8,10                0.27       12,11               0.34
1) Computationally superior attacker: We consider an attacker who implements a strategy obtained by solving the game without the approximation (4.1), i.e., one who computes the exact disturbance value of a k-line failure. Figure 4.5 shows the expected outcome of the play (v2) for several values of kd in the 5- and 9-bus networks, and reveals a close overlap between v1 and v2 for both networks. A computationally superior attacker therefore has minimal effect on the expected outcome in these simulations. In other words, it is reasonable for the defender to compute its optimal allocation based upon the additivity approximation.
Figure 4.3: Modified IEEE 14-bus system. Dashed links are added to standard IEEE cases.
2) Attacker with side information: We consider an attacker that has side information regarding the limited computational capabilities of the defender and implements the output of Algorithm 1. Figure 4.5 shows the expected outcome of the play (v3) for several values of kd for the two networks. We observe that v3 is always smaller than both v2 and v1. Therefore, an attacker that hedges its play on the side information, and thus modifies its play to α∗, obtains a lower payoff. In light of this result, the defender may wish to allow the side information to reach the attacker.
4.5 Implementation of Strategies
In this section, we address the problem of implementing β∗ on a real power network. The probability of the defender assigning a resource to link i when it plays the mixed strategy q∗ is γ∗_i = 1 − β∗_i. For a set of indices T, let γ∗_T be the vector of entries of γ∗ that occur at the indices in T, and denote

γ_T = γ∗_T / ∑_{i∈T} γ∗_i.

Figure 4.4: Histogram of e for the (a) 5-bus network, (b) 9-bus network, (c) 14-bus network, and (d) 39-bus network.
Note that the selection of target i is deterministic when γ∗_i = 1 or γ∗_i = 0. Computing q∗ from β∗ (from (6.9)) is computationally challenging, for it requires solving an underdetermined system of equations with a large (exponential in m) number of unknowns. Algorithm 4 presents a more efficient strategy for the defender to choose links based on β∗: it iteratively chooses links based upon the exposure probabilities obtained using Algorithm 1.
Figure 4.5: Expected outcome of the game for the 5-bus and 9-bus networks against attackers with different capabilities (Section 4.4.2).
Figure 4.7 shows the difference between v∗ and the outcome of the game for the 39-bus network when the defender implements Algorithm 4. The difference is computed for several values of m and kd (ka = 2). For each value of m and kd, Figure 4.7 depicts the average difference computed over 10 games with randomly chosen φ's after 100 iterations of Algorithm 4. From the figure, we conclude that the difference between v∗ and the outcome of the game when the defender implements Algorithm 4 is of the order of 10^{−12}, which is negligible. Figure 4.6 shows the covered links for the 39-bus system when ka = 2, kd = 5.
4.6 Conclusion
In this work, we formulate cyber-physical contingency analysis in power networks as an attacker-defender game. Leveraging structural properties of the power network and the empirically demonstrated near-modular behaviour of the impact metric, we propose a computationally efficient
Figure 4.6: The augmented 39-bus system used in our simulations. Our link additions to the system
are depicted by dashed lines along with the corresponding reactance values. Links shown in green
highlight the defender resource allocation for ka = 2, kd = 5.
technique to obtain the optimal deployment of cybersecurity measures under budget constraints. We believe that this work is a first step towards alleviating the "curse of complexity" in N−k cyber-physical contingency analysis. Currently, our examination of this problem does not take into account the possibility of cascading failures, islanding, or variable line capacities. Identification of structural properties of the system that could drastically reduce the computational complexity of cyber-physical contingency analysis involving (a) a wide range of impact metrics that also take into account the economics of power generation and distribution; (b) management of dynamic
Algorithm 4 Implementation of Strategies
1: Input: γ∗
2: Output: Selected targets T
3: S = {i | γ∗_i = 0} and T = {i | γ∗_i = 1}
4: I = {1, . . . , m}
5: count = |T|
6: while count < kd do
7:    select one target j from I \ (T ∪ S) with probability γ_{I\(T∪S)}
8:    T ← T ∪ {j}
9:    count ← count + 1
10: end while
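The sampling procedure above can be sketched in a few lines of Python (variable names are ours; the γ values in the example are illustrative, not taken from a case study):

```python
import random

def select_targets(gamma, k_d, rng=None):
    """Sketch of Algorithm 4: pick k_d links to cover, forcing links with
    gamma* = 1, excluding links with gamma* = 0, and sampling the rest with
    probabilities proportional to gamma restricted to I \\ (T ∪ S)."""
    rng = rng or random.Random(0)
    S = {i for i, g in enumerate(gamma) if g == 0}   # never covered
    T = {i for i, g in enumerate(gamma) if g == 1}   # always covered
    while len(T) < k_d:
        rest = [i for i in range(len(gamma)) if i not in T | S]
        j = rng.choices(rest, weights=[gamma[i] for i in rest], k=1)[0]
        T.add(j)
    return T

# Example: 6 links, defender budget k_d = 3.
print(sorted(select_targets([1.0, 0.6, 0.4, 0.0, 0.5, 1.0], 3)))
```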
Figure 4.7: The difference between v∗ and the expected outcome of Algorithm 4 for the defender in the 39-bus network. For a specific value of m and kd, the average error has been computed over 10 different sets of φ's over 100 iterations (it = 100).
situations arising from high-impact, high-frequency strategic attacks; (c) attacks that incorporate cascading failures or disconnections of the grid; and (d) attacks involving partial knowledge about the attacker's strategies and resources, are aspects of the broader goal that will direct future work.
CHAPTER 5. NETWORK DESIGN FROM GAME THEORETIC
PERSPECTIVE
We can associate with any power network a value v∗(G, ka, kd), the impact of a resource-constrained attacker-defender game. This value is, in general, a function of the graph topology, the physical parameters of the graph, and the attacker's and defender's resources, so we consider v∗ as the objective function in our design problems. One key requirement for a power grid is connectivity: the network must be connected to enable power transmission to all load buses. Other limitations in a power network are line capacities and generator capacities. In general, the power flow in each line cannot exceed a certain value, which is a function of the physical parameters of that line; moreover, each generator must operate between a lower and an upper bound on power generation. Based on these limitations, we propose two different design problems.
5.1 A sub-graph with minimum game value
In this section, we consider a graph G(ν, ε) containing n vertices and m links, where ν = {1, . . . , n} and ε = {1, . . . , m} are the vertex set and the edge set, respectively. We associate with link i an attack cost φi, and links are labeled such that φi ≥ φj for i > j. We are interested in finding a connected sub-graph of G for which the value of the game is minimized. Note that ∑ φi cannot be used as a criterion for comparing the values of two different graphs. We illustrate this fact in the following example.
Example 5.1.1. Let G(ν, ε) be the graph depicted in Figure 5.1a. Figures 5.1b and 5.1c show two sub-graphs of G, G1(ν, ε1) and G2(ν, ε2). Suppose

φ = [13, 24, 28, 35, 37, 45, 50, 59, 71, 73, 83, 86]^T.
Figure 5.1: (a) Graph G(ν, ε). (b) Sub-graph G1(ν, ε1). (c) Sub-graph G2(ν, ε2).
In this example, ∑_{i∈ε1} φi = 423 and ∑_{i∈ε2} φi = 408, while v(G1) = 78.84 and v(G2) = 86.67. This implies that

∑_{i∈ε1} φi > ∑_{i∈ε2} φi,   v(G1) < v(G2).  (5.1)
In the following, we show that for a given connected graph, there is a minimum weight spanning tree (MST) that attains the minimum game value.

Lemma 15. v∗ is a non-increasing function of φi for all i.

Proof. Intuitively, v∗ does not increase when any of the φi's is decreased, since the overall cost to the defender is decreased. Formally, from Theorem 9, every expression for v∗ is a non-increasing function of the φi's.
A direct consequence of the above lemma is the following corollary.

Corollary 15.1. A connected sub-graph of G that attains the minimum v∗ is a tree.
Theorem 16. An MST of G attains the minimum v among all connected sub-graphs of G.

Proof. First, we show that the cycle property used for constructing an MST (Kleinberg and Tardos (2006)) also holds for constructing the minimum value tree (MVT): any edge e that belongs to a cycle C on which its cost φe is the largest cannot be part of an MST, and consequently e cannot be part of an MVT. To see this, suppose the edge e connecting vertices ni and nj were part of an MST or MVT. Deleting e partitions the vertices into two parts, with ni and nj in different parts. Since e lies on a cycle in G, there is a path from ni to nj that avoids e, and hence an edge e′ on this path that connects the two parts. Replacing e with e′ yields another spanning tree of lower weight; moreover, since φe ≥ φe′, the value v of the new tree is less than or equal to that of the original.
The Reverse-Delete algorithm deletes the most expensive remaining edge at each step while maintaining connectivity, and by the cycle property it produces an MST. The same argument holds for producing an MVT: any edge e removed by the algorithm must lie on some cycle C (otherwise removing e would disconnect G), and, being the most expensive edge on that cycle, it cannot belong to any MST or MVT. Hence Reverse-Delete produces a tree that is simultaneously an MST and an MVT; in other words, there is an MST that is also an MVT.
From the above theorem, an MVT can be computed with any algorithm that computes an MST. For example, an MVT can be computed with the Reverse-Delete algorithm in O(m log m) time Kleinberg and Tardos (2006).
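The Reverse-Delete computation of an MVT can be sketched as follows (self-contained; the small graph at the bottom is a hypothetical example, not the network of Example 5.1.1):

```python
from collections import defaultdict

def is_connected(nodes, edges):
    """BFS connectivity check on an undirected edge list (u, v, phi)."""
    adj = defaultdict(list)
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(nodes)

def minimum_value_tree(nodes, edges):
    """Reverse-Delete: repeatedly drop the most expensive edge (largest phi)
    whose removal keeps the graph connected.  By Theorem 16, the resulting
    MST is also a minimum value tree (MVT)."""
    result = list(edges)
    for e in sorted(edges, key=lambda e: e[2], reverse=True):
        trial = [f for f in result if f != e]
        if is_connected(nodes, trial):
            result = trial
    return result

# Hypothetical 6-node graph; edge weights play the role of attack costs phi.
nodes = {1, 2, 3, 4, 5, 6}
edges = [(1, 2, 13), (2, 3, 24), (1, 3, 35), (3, 4, 37), (2, 4, 59),
         (4, 5, 28), (3, 5, 50), (5, 6, 45), (4, 6, 86)]
print(sorted(minimum_value_tree(nodes, edges)))
```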
5.2 Power Network Design
Inspired by power economic dispatch optimization Chen et al. (2005), we introduce the following problem:

minimize_Φ   v(Φ) = max_{i,j} {U_{i,j}}
subject to   ∑_{j=1}^{m} φ_j = c,
             Φ_lb ≤ Φ ≤ Φ_ub,  (5.2)

where Φ = [φ1, . . . , φm]^T, and Φ_lb = [φ_lb^1, . . . , φ_lb^m]^T and Φ_ub = [φ_ub^1, . . . , φ_ub^m]^T denote the lower and upper bounds for Φ, respectively, and c is a positive constant. In the following, we use properties of v∗ to provide an algorithm that finds a sub-optimal point for the above problem.
For a given Φ, let v◦ = max{U_{i,j}}, and let i◦ and i◦ + r◦ denote the row and column of U where v◦ is located; in other words, v◦ is a feasible point of (5.2). Let S1 = {φ1, . . . , φ_{s◦−r◦−2}} and S2 = {φ_{s◦−r◦}, . . . , φm}. From Lemma 15, v◦ is non-increasing as any φj ∈ S2 is decreased.

Corollary 16.1. Perturbing any φj ∈ S1 such that φ′_j ≤ φ_{s◦−r◦−1} does not change v◦.

Proof. This follows directly from Theorem 9.
In the following, we give a procedure to reduce the value of the game by perturbing the φj's. To simplify the notation, we omit the superscript ◦ from i, s, r, v. First, assume that v lies in U^I; in other words,

v = (ka − r) i / c_i + ∑_{l=s−r}^{s−1} φ_l,   with   c_i φ_{s−r} > i,   c_i φ_s ≥ ka − r > c_i φ_{s−1}.

From Corollary 16.1, we can increase all φj in S1 to φ_{s−r−1}. On the other hand, we can decrease the φj ∈ S2. In the following lemma, the order in which the φj ∈ S2 should be reduced to decrease v is given. Let δm denote the value for which, if φj is decreased by δm, then i or r changes; in other words, the entry of U attaining v changes when φj is decreased by δm.
Lemma 17. For any φj1 , φj2 ∈ S2, j2 > j1, and 0 ≤ δ < δm, v(φj1 − δ) ≤ v(φj2 − δ).
Proof. We prove the lemma by considering all possible cases of φ_{j1}, φ_{j2} ∈ S2, j2 > j1. Since δ ≤ δm, decreasing the φj's does not change i and r. If φ_{j1}, φ_{j2} ∈ {φ_{s−r}, . . . , φ_{s−1}}, from the expression for v,

v(φ_{j1} − δ) = v(φ_{j2} − δ).  (5.3)

If φ_{j1} ∈ {φ_{s−r}, . . . , φ_{s−1}} and φ_{j2} ∈ {φ_s, . . . , φ_m},

v(φ_{j2} − δ) − v(φ_{j1} − δ) = δ (1 − (ka − r) i / (c′_i φ′_{j2} c_i φ_{j2})),  (5.4)

where c′_i = c_i + δ/(φ′_{j2} φ_{j2}) and φ′_{j2} = φ_{j2} − δ. The above expression is positive because

c_i φ_{j2} > i,   c′_i φ′_{j2} ≥ c′_i φ_s > ka − r.  (5.5)

If φ_{j1}, φ_{j2} ∈ {φ_s, . . . , φ_m},

v(φ_{j2} − δ) − v(φ_{j1} − δ) = (ka − r) i (1/c′_i − 1/c″_i),  (5.6)

where c′_i = c_i + δ/(φ′_{j2} φ_{j2}) and c″_i = c_i + δ/(φ′_{j1} φ_{j1}). The above expression is positive because c″_i ≥ c′_i. When i or r changes as the φj's are decreased, the same ordering of reduction holds for the new values of i and r. Therefore, the lemma holds.
When v lies in U II ,
v = (ka − r − ci−1φs)φs−r +
s−1∑l=s−r+1
φl + iφs, (5.7)
ciφs−r ≤ i, ciφs ≥ ka − r > ciφs − 1. (5.8)
In the following lemma, we provide the order in which the φj's should be reduced to decrease v. Let I1 = {φ_{s−r}}, I2 = {φ_{s−r+1}, . . . , φ_{s−1}}, I3 = {φ_s} and I4 = {φ_{s+1}, . . . , φ_m}.

Lemma 18. When v lies in U^II and 0 ≤ δ < δm, the order of reduction of the φj's is as follows (the notation Ii ≻ Ij means that decreasing φ's in Ii reduces the value of the game more than decreasing φ's in Ij):
(a) I2 ≻ I1, I3 ≻ I4, I2 ≻ I4.
(b) If (ka − r − i) − c_{i−1}(φ_s − φ_{s−r}) ≤ 0, then I3 ≻ I1.
(c) If i − 1 − c_{i−1} φ_{s−r} ≤ 0, then I2 ≻ I3.
(d) If c_{i−1} φ_s − (ka − r) + φ_s φ_{s−r}/(φ_s(φ_s − δ)) ≤ 0, then I1 ≻ I4.
(e) Decreasing smaller φj's in I4 is more effective than decreasing larger φj's in I4.
(f) Decreasing any φj in I2 results in the same reduction of v.
Proof. Since δ ≤ δm, decreasing the φj's does not change i and r. For each pair, we compare v(φ_{j2} − δ) − v(φ_{j1} − δ), writing φ′_j = φ_j − δ.
Suppose φ_{j1} ∈ I1 and φ_{j2} ∈ I2; then

v(φ′_{j2}) − v(φ′_{j1}) = ka − r − c_i φ_s ≤ 0.

This implies that decreasing φ_{j2} results in a smaller v than decreasing φ_{j1}. Next, assume φ_{j1} ∈ I1 and φ_{j2} ∈ I3:

v(φ′_{j2}) − v(φ′_{j1}) = (ka − r − i) − c_{i−1}(φ_s − φ_{s−r}),

which gives the criterion for choosing between φ_{j1} ∈ I1 and φ_{j2} ∈ I3. Next, consider φ_{j1} ∈ I1 and φ_{j2} ∈ I4:

v(φ′_{j2}) − v(φ′_{j1}) = −c_{i−1} φ_s + (ka − r) − φ_s φ_{s−r}/(φ_s(φ_s − δ)),

which gives the criterion for choosing between φ_{j1} ∈ I1 and φ_{j2} ∈ I4. Next, suppose φ_{j1}, φ_{j2} ∈ I2:

v(φ′_{j2}) − v(φ′_{j1}) = 0,

which means that decreasing any φj in I2 results in the same reduction of v. Next, assume φ_{j1} ∈ I2 and φ_{j2} ∈ I3:

v(φ′_{j2}) − v(φ′_{j1}) = −i + 1 + c_{i−1} φ_{s−r},

which gives the criterion for choosing between φ_{j1} ∈ I2 and φ_{j2} ∈ I3. Next, consider φ_{j1} ∈ I2 and φ_{j2} ∈ I4:

v(φ′_{j2}) − v(φ′_{j1}) = 1 − φ_s φ_{s−r}/(φ′_{j2} φ_{j2}) ≥ 0.

Since this expression is non-negative, decreasing φ_{j1} results in a smaller v than decreasing φ_{j2}. Next, suppose φ_{j1} ∈ I3 and φ_{j2} ∈ I4:

v(φ′_{j2}) − v(φ′_{j1}) = i − c_{i−1} φ_{s−r} − φ_s φ_{s−r}/(φ′_{j2} φ_{j2}) = i − c_i φ_{s−r} + φ_s φ_{s−r} (1/φ_s² − 1/(φ′_{j2} φ_{j2})) ≥ 0.
Figure 5.2: (a) Decrease of v in each iteration. (b) Red and blue profiles show the initial and final distributions, respectively; green intervals are the feasible domain for Φ.
Since the above expression is non-negative, decreasing φ_{j1} results in a smaller v than decreasing φ_{j2}. Finally, assume φ_{j1}, φ_{j2} ∈ I4:

v(φ′_{j2}) − v(φ′_{j1}) = δ (1/((φ_{j1} − δ) φ_{j1}) − 1/((φ_{j2} − δ) φ_{j2})) ≥ 0.

Since this expression is non-negative, decreasing φ_{j1} results in a smaller v than decreasing φ_{j2}. This completes the proof.
Algorithm 5 decreases v for a given Φ such that the feasibility conditions in (5.2) are satisfied. Figures 5.2a and 5.2b show the evolution of v and the initial and final distributions of Φ, respectively, for a network with m = 21, ka = 10 and kd = 10. Moreover, we simulate 10000 different random cases with m = 177 and ka = kd = 40. Figure 5.3 shows the histogram of the percentage reduction of v.
Figure 5.3: Histogram of the percentage reduction of the impact
Algorithm 5 Network Design
1: Input: Φ_lb, Φ_ub, c and m, ka, kd
2: Output: Φ∗, v∗
3: initialize Φ, and compute v, S1, S2, w
4: while w > 0 do
5:    if v lies in U^I then
6:       decrease the φj's in the order given by Lemma 17
7:       update v, S1, S2, w
8:    else if v lies in U^II then
9:       decrease the φj's in the order given by Lemma 18
10:      update v, S1, S2, w
11:   end if
12: end while
13: v∗ ← v
14: Φ∗ ← Φ
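The perturbation steps inside Algorithm 5 must preserve the budget ∑ φ_j = c and the box constraints of (5.2). A single budget-preserving move can be sketched as follows (a simplified building block of our own; Algorithm 5 selects the indices via the orderings of Lemmas 17 and 18):

```python
def redistribute(phi, take, give, amount, lb, ub):
    """Move up to `amount` of attack cost from phi[take] to phi[give],
    clipped so that both entries stay inside their box bounds; the total
    sum of phi (the budget c) is unchanged."""
    amount = min(amount, phi[take] - lb[take], ub[give] - phi[give])
    phi[take] -= amount
    phi[give] += amount
    return phi

phi = [1.0, 2.0, 3.0]
redistribute(phi, take=2, give=0, amount=1.5, lb=[0, 0, 0], ub=[5, 5, 5])
print(phi)  # the budget sum(phi) is still 6.0
```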
CHAPTER 6. NASH EQUILIBRIUM IN NON-ZERO SUM SECURITY
GAMES WITH ADDITIVE UTILITY
6.1 Introduction
Security is an important aspect of critical infrastructure systems such as the power grid, intelligent transportation systems, and healthcare. Augmenting these systems with information and communication technologies (ICT) introduces additional vulnerabilities against smart and strategic adversaries. Game theory Basar and Olsder (1999) is a tool that models such adversarial interactions and provides defense strategies that cannot be exploited by the attacker. Specifically, security games is an area of research that addresses the problem of modeling adversarial attacks and developing defense strategies to counter them. Although the origins of the field can be traced back to the 1940s, there has been a recent resurgence of interest due to successful real-world applications, for example, security at the LAX airport and security allocation in transportation networks by the Federal Air Marshal Service (FAMS). Tambe (2011) provides an extensive review of security games and related applications. In this work, we analyze an asset-protection game in which the players have constraints on the resources available to attack or defend.
Security games are modeled as a two-player game between an attacker and a defender. The attacker (defender) selects a group of targets to attack (defend). The attacker's (defender's) payoff is based on the impact of the successfully attacked (defended) targets. Depending on the degree of asymmetry of the impacts to the players, both zero-sum and non-zero-sum game formulations are used to model the adversarial interaction. Additionally, solution concepts vary based on the information available to the players and their mode of play. For example, a Nash equilibrium (NE) provides the guarantee that no unilateral deviation from the optimal solution leads to a better outcome when both players act simultaneously, whereas a Stackelberg
equilibrium (SE) is computed in a scenario in which the leader commits to a strategy, and the follower observes the leader's strategy before acting. In this work, we consider simultaneous moves for both the attacker and the defender; however, due to resource constraints, both players can act only on a subset of targets.
The NE of a zero-sum game can be computed in polynomial time Jiang et al. (2020), since the problem can be formulated as a linear program. However, due to the combinatorial nature of security games, the action sets grow exponentially with the number of targets and resources, rendering the solution computationally intractable. In Korzhyk et al. (2010), the authors show that solving a security game with heterogeneous resources is NP-hard in general, even for a single-resource attacker, and they provide a polynomial-time algorithm for computing the Stackelberg solution of security games with at most two homogeneous resources.
In Korzhyk et al. (2011c), the authors show that under a natural restriction on security games, the Stackelberg equilibrium is equivalent to the NE. Randomized resource allocation strategies Kiekintveld et al. (2009) and approximate solutions Bhattacharya et al. (2011) for the Stackelberg formulation of security games have been proposed in the past. Although Stackelberg models have been used in real-world applications (Pita et al. (2008), Jain et al. (2010)), the defender cannot be sure that the attacker is aware of the defender's mixed strategy before making his/her decision. Yin et al. (2010) and Korzhyk et al. (2011b) model the uncertainty of the attacker's knowledge about the defender's mixed strategy as part of the game, and propose an iterative algorithm that alternates between an NE solver and a Stackelberg solver. Computing the NE of a two-player non-zero-sum game has been shown to be PPAD-complete Daskalakis et al. (2009). However, efficient solutions can be found for alternate solution concepts; for example, a strong Stackelberg equilibrium (SSE) of a non-zero-sum game can be computed in polynomial time Conitzer and Sandholm (2006).
Games with special structural properties that lead to computationally efficient strategies for equilibrium computation have been investigated in the past Stein et al. (2008, 2011). One such property is additivity of the players' utility functions. Computing the equilibrium of a security game with a general non-additive utility function has been shown to be NP-hard Wang et al. (2017). However, the solution of a zero-sum additive game can be reduced to the problem of minimizing the sum of the k largest functions over a polyhedral set, which can be computed in linear time Ogryczak and Tamir (2003). In Emadi and Bhattacharya (2019), we investigated structural properties of the saddle-point equilibrium of a zero-sum game with additive utility. Based on these structural properties, we presented a linear-time algorithm for computing the value and the equilibrium in Emadi and Bhattacharya (2020).
Non-zero-sum security games can model attackers with diverse incentives and different costs for the players. In Korzhyk et al. (2011a), the authors propose an iterative polynomial-time algorithm to compute the NE of a non-zero-sum security game with homogeneous resources and an additive utility function. Our current work is in a similar vein. In contradistinction to Korzhyk et al. (2011a), which focuses on an algorithm to arrive at an NE, we find structural properties of the equilibrium solutions based on a variational approach, and we characterize all possible equilibria of the game in terms of their payoff and multiplicity. Based on the structural properties, we arrive at a quadratic-time algorithm to compute the equilibria and their corresponding values. Note that Korzhyk et al. (2011a) also provides a quadratic-time algorithm to arrive at the equilibrium, which differs from the one presented in our current work.
The remainder of this chapter is organized as follows. First, we present the problem formulation. Next, some structural properties of the equilibrium are presented. In the following two sections, the analysis and computation of the equilibria are presented. Finally, we present the conclusion.
6.2 Problem Formulation: Security Game
We consider a simultaneous-move, one-shot, non-zero-sum game between an attacker and a defender on a target set I = {1, . . . , m}. We define a strategic security game (X, Y, A, B), where X and Y denote the action sets of the attacker and the defender, respectively. We assume that the attacker can choose ka < m targets to attack, and the defender can protect kd < m targets. Therefore, |X| = na and |Y| = nd, where na = C(m, ka) and nd = C(m, kd), with C(·, ·) denoting the binomial coefficient.
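The combinatorial blow-up of the action sets is easy to quantify (the (m, ka, kd) triples below are illustrative; m = 177, ka = kd = 40 matches the design experiments of Section 5.2):

```python
from math import comb

# n_a = C(m, k_a), n_d = C(m, k_d): the matrix form of the game grows
# combinatorially even for modest numbers of targets.
for m, ka, kd in [(5, 2, 2), (39, 2, 5), (177, 40, 40)]:
    print(f"m={m}: n_a={comb(m, ka)}, n_d={comb(m, kd)}")
```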
Let ai and bi denote the payoffs for the attacker and the defender, respectively, for a successful attack on target i, and let a = [a1, . . . , am]^T, b = [b1, . . . , bm]^T. In order to render the problem meaningful, we assume that ai ≥ 0 and bi ≤ 0 for all i ∈ I. We define payoff matrices A and B for the attacker and the defender, respectively, where the rows correspond to the actions of the attacker and the columns correspond to the actions of the defender. In this work, we assume that the payoffs of the players follow an additive property:

A_{ij} = ∑_{l ∈ x_i ∩ y_j^c} a_l,   B_{ij} = ∑_{l ∈ x_i ∩ y_j^c} b_l.  (6.1)
We assume that A and B are common information for both players.
Let p and q denote probability vectors representing the mixed strategies for player 1 and player 2,
respectively. The expected payoffs for player 1 (v1) and player 2 (v2) are given as follows:

v1 = p^T A q,   v2 = p^T B q.
Definition 6.2.1. A pair (p∗, q∗) is said to constitute a non-cooperative Nash equilibrium (NE) solution to a bi-matrix security game (X, Y, A, B) in mixed strategies if the following inequalities are satisfied for all probability vectors p, q:

p^T A q∗ ≤ p∗^T A q∗,   p∗^T B q ≤ p∗^T B q∗.

Here the pair (v∗_1, v∗_2) := (p∗^T A q∗, p∗^T B q∗) is known as the NE outcome of the game in mixed strategies.
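Definition 6.2.1 can be checked mechanically on a small instance. Below is a sketch for a hypothetical 2-target game with ka = kd = 1 (the payoff values and the candidate equilibrium are our own illustration); since the expected payoff is linear in each player's strategy, it suffices to test pure-strategy deviations:

```python
from fractions import Fraction as F

# Hypothetical 2-target security game with k_a = k_d = 1: an attack on
# target i pays a_i to the attacker (and b_i to the defender) only if the
# defender covers the other target.
a, b = [F(3), F(1)], [F(-2), F(-1)]
A = [[a[i] if i != j else F(0) for j in range(2)] for i in range(2)]
B = [[b[i] if i != j else F(0) for j in range(2)] for i in range(2)]

def payoff(M, p, q):
    return sum(p[i] * M[i][j] * q[j] for i in range(2) for j in range(2))

# Candidate equilibrium obtained from the indifference conditions.
p_star, q_star = [F(1, 3), F(2, 3)], [F(3, 4), F(1, 4)]
v1, v2 = payoff(A, p_star, q_star), payoff(B, p_star, q_star)

# Definition 6.2.1: no unilateral deviation helps (pure deviations suffice).
for e in ([F(1), F(0)], [F(0), F(1)]):
    assert payoff(A, e, q_star) <= v1
    assert payoff(B, p_star, e) <= v2
print(v1, v2)  # the NE outcome (v1*, v2*) = (3/4, -2/3)
```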
Theorem 19. (Existence of NE Basar and Olsder (1999)) Every finite bi-matrix game has at least one NE in mixed strategies p∗ = [p∗_1, . . . , p∗_{na}]^T for player 1 and q∗ = [q∗_1, . . . , q∗_{nd}]^T for player 2.
The solution of a finite bi-matrix game can be reduced to the following non-linear programming problem Basar and Olsder (1999):

maximize_{p,q,v1,v2}   p^T A q + p^T B q − v1 − v2
subject to   A q ≤ v1 1,
             p^T B ≤ v2 1^T,
             p^T 1 = 1,   p_i ≥ 0 for i = 1, . . . , na,
             q^T 1 = 1,   q_i ≥ 0 for i = 1, . . . , nd.  (6.2)
There are no effective algorithms for solving the general non-linear programming problem; indeed, computing the NE of a two-player non-zero-sum game is PPAD-complete Daskalakis et al. (2009). Additionally, the dimension of the decision variables in (6.2) is na + nd + 2, which is exponential in m. This becomes computationally prohibitive as m grows, which is the case for large-scale networks.
The expected utility functions can be written in the following way:

v1 = ∑_{i=1}^{na} ∑_{j=1}^{nd} p_i A_{ij} q_j  (6.3)
   = ∑_{i=1}^{na} p_i ∑_{j=1}^{nd} A_{ij} q_j  (6.4)
   = ∑_{i=1}^{na} p_i ∑_{j=1}^{nd} q_j (∑_{l ∈ x_i ∩ y_j^c} a_l)  (6.5)
   = ∑_{i=1}^{na} p_i (∑_{l ∈ x_i} β_l a_l) = ∑_{i=1}^{m} α_i a_i β_i,  (6.6)

where

α_j = ∑_{{i | j ∈ x_i}} p_i   ⇒   α = M_{[m,ka]} p.  (6.7)

Similarly, we obtain the following for v2:

v2 = ∑_{i=1}^{m} α_i b_i β_i,  (6.8)
where

β_j = ∑_{{i | j ∈ y_i^c}} q_i   ⇒   β = M_{[m,m−kd]} q.  (6.9)

Here α = [α1, . . . , αm]^T and β = [β1, . . . , βm]^T can be interpreted as the attack and exposure probability vectors, respectively. For instance, αj is the sum of all pi's for which target j lies in the action set xi. Owing to the additive property, the payoff of each player is the sum of the expected outcomes over the targets (i.e., attack probability × impact × exposure probability). Moreover, M_{[m,ka]} ∈ R^{m×na} and M_{[m,(m−kd)]} ∈ R^{m×nd} are combinatorial matrices¹, which are surjective mappings from the probability spaces of p and q to D_α = {α | 0 ≤ α_i ≤ 1, ∑_{l=1}^{m} α_l = ka} and D_β = {β | 0 ≤ β_i ≤ 1, ∑_{l=1}^{m} β_l = m − kd}, respectively Emadi and Bhattacharya (2019). From the definition of the NE, we can define the optimality conditions in terms of (α, β).
In this work, we investigate the equilibria of a non-zero-sum security game (X, Y, A, B). First, we present the interchangeability property from Korzhyk et al. (2011a).

Definition 6.2.3. Suppose (α∗, β∗) is a NE. Then there exist two constants c1, c2 such that for any i ∈ I, if α∗_i > 0 then a_i β∗_i ≥ c1, and if α∗_i < 1 then a_i β∗_i ≤ c1. Similarly, for any i ∈ I, if β∗_i > 0 then b_i α∗_i ≥ c2, and if β∗_i < 1 then b_i α∗_i ≤ c2.
Theorem 20. (Interchangeability Korzhyk et al. (2011a)) Solutions of a security game (X, Y, A, B) are interchangeable; that is, if (α, β) and (α′, β′) are two NE of a security game, then (α′, β) and (α, β′) are also NE.
¹A combinatorial matrix M_{[m,k]} ∈ R^{m×C(m,k)} is a Boolean matrix constructed by concatenating all combinations of m-dimensional Boolean vectors with k entries equal to 1. Each column of M has k entries equal to 1 and m − k entries equal to 0.
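The combinatorial matrix of the footnote, and the mapping α = M_{[m,ka]} p of (6.7), can be sketched directly (m = 4, ka = 2 and the uniform p are our own illustrative values):

```python
from itertools import combinations

def combinatorial_matrix(m, k):
    """M_[m,k]: one column per k-subset of {0,...,m-1}; entry (i, c) is 1
    iff target i belongs to subset c, matching the footnote's definition."""
    cols = list(combinations(range(m), k))
    M = [[1 if i in c else 0 for c in cols] for i in range(m)]
    return M, cols

m, ka = 4, 2
M, cols = combinatorial_matrix(m, ka)
p = [1.0 / len(cols)] * len(cols)          # uniform mixed strategy p
alpha = [sum(M[i][c] * p[c] for c in range(len(cols))) for i in range(m)]
print(alpha)                               # each target attacked w.p. ~ka/m
assert abs(sum(alpha) - ka) < 1e-12        # alpha lies in D_alpha
```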
6.3 Structural properties of the optimal solution
In this section, we present structural properties of the optimal solutions for the players. Let α∗ = [α∗_1, . . . , α∗_m] and β∗ = [β∗_1, . . . , β∗_m] represent the optimal solutions. We make the following assumption:

Assumption 1. The payoffs associated with distinct targets are different for both players, i.e., ai ≠ aj and bi ≠ bj for i ≠ j.

Let S = {S1, . . . , S9} denote a collection of sets, where the Si's are defined in Table 6.1 (e.g., S1 = {(αi, βi) | αi = 0, βi = 1}). Let I = {I1, . . . , I9} denote a collection of sets where Ij = {i | (αi, βi) ∈ Sj}.
Table 6.1: Sets S1, . . . , S9

β \ α    0      (0,1)   1
0        S8     S7      S6
(0,1)    S9     S5      S3
1        S1     S4      S2
Definition 6.3.1. Two sets are incompatible if at least one of them has to be empty for every
equilibrium.
The following lemma presents structural properties of equilibria based on the incompatibility of
sets in S. Let {e1, . . . , em} denote the natural basis of Rm.
Lemma 21. For any equilibrium (α∗, β∗), the following hold:
a) I8 ∪ I9 = ∅;
b) S6 and S7 are incompatible with S1, S4 and S5;
c) S6 and S7 are incompatible with S3.
Proof. a) Since b < 0, if there exists i such that α∗_i = 0 and β∗_i ≠ 1, then β∗_j = 0 for all j with α∗_j ≠ 0, and hence (e_i − e_j)^T ∇_α v1 |_{v∗_1} > 0. This contradicts the optimality condition in Definition 6.2.2 for v∗_1, since then v∗_1 < v1. Hence (α∗_i, β∗_i) ∉ S8 ∪ S9.
b) Assume (α∗_i, β∗_i) ∈ S6 ∪ S7 and (α∗_j, β∗_j) ∈ S1 ∪ S4 ∪ S5. Then α∗_i ≠ 0, α∗_j ≠ 1 and β∗_j ≠ 0, so (e_i − e_j)^T ∇_α v1 |_{v∗_1} < 0, which contradicts Definition 6.2.2. Therefore, S6 and S7 are not compatible with S1, S4 and S5.
c) Assume there exist i, j such that (α∗_i, β∗_i) ∈ S3 and (α∗_j, β∗_j) ∈ S6 ∪ S7. Note that S1 ∪ S4 ∪ S5 = ∅ from part b). Since S3 is the only set with non-zero β, |S3| > 1 (because ∑_{l=1}^{m} β_l = m − kd and β∗_i < 1). Let i, i′ ∈ I3. Then α∗_i = α∗_{i′} = 1 and 0 < β∗_i < 1, 0 < β∗_{i′} < 1, which implies b_i = b_{i′}; this contradicts Assumption 1, thereby proving the lemma.
Corollary 21.1. (α∗, β∗) belongs to one of the following categories:
• Type I: ∀i ∈ I, (α∗i , β∗i ) ∈ S1 ∪ S2 ∪ S3 ∪ S4 ∪ S5
• Type II: ∀i ∈ I, (α∗i , β∗i ) ∈ S2 ∪ S6 ∪ S7.
Proof. The proof follows from Lemma 21.
6.4 Type I Equilibria: Analysis
In this section, we present properties of Type I equilibria. Based on these properties, we compute the payoffs of each player at a given equilibrium.
Lemma 22. For (α∗, β∗) of Type I, the following properties hold:
(a) a_i β∗_i = c1 ∀i ∈ I4 ∪ I5, where c1 is a constant. Moreover, |I4| ∈ {0, 1}.
(b) b_i α∗_i = c2 ∀i ∈ I3 ∪ I5, where c2 is a constant. Moreover, |I3| ∈ {0, 1}.
(c) ∀ i ∈ I1, j ∈ I2 ∪ I3 ∪ I4 ∪ I5 ⇒ a_i < a_j.
(d) ∀ i ∈ I2 ∪ I3 ∪ I5, ī ∈ I4 ⇒ a_ī < a_i.
(e) ∀ i ∈ I5, î ∈ I3, j ∈ I2 ⇒ b_i < b_î < b_j.
Proof. We present the proof of (a); the proofs of (b), (c), (d) and (e) are based on similar arguments. We have 0 < α_i < 1 ∀i ∈ I4 ∪ I5. From the definition of a Nash equilibrium, any unilateral deviation by Player 1 (from α∗) should not increase v∗_1. Therefore, ∀i, j ∈ I4 ∪ I5, (e_i − e_j)^T ∇_α v1 |_{v∗_1} = 0 ⇒ a_i β∗_i = a_j β∗_j. Moreover, since β∗_i = 1 ∀i ∈ I4, S4 can contain at most one element, else it contradicts the assumption that distinct targets have different a_i.
Without loss of generality, we assume that a_i > a_j for i > j (i.e., targets are labeled such that a is sorted in ascending order; consequently, b follows the ordering of a). Let r and s denote the cardinalities of I1 and I5, respectively. From Lemma 22 (c), we conclude that I1 is the set of indices with the minimum a_i's, i.e., I1 = {1, . . . , r}. Since ∑_{l=1}^{m} α_l = ka and α_{i∈I1} = 0, we have 0 ≤ r ≤ m − ka. Let î and ī denote the indices in I3 and I4 (when these sets are non-empty), respectively. From Lemma 22 (d), ī = r + 1, since a_ī < a_{i∈I2∪I3∪I5}. From Lemma 22 (e), I5 is the set obtained by selecting the targets with the s minimum b_i's in the set I \ (I1 ∪ I4). Let I5 = {j1, . . . , js} when I3 is empty. From Lemma 22 (e), when I3 is non-empty, we can set î = j_s, since b_{i∈I5} < b_î < b_{i∈I2}, and I5 = {j1, . . . , j_{s−1}}. Consequently, I2 = I \ (I1 ∪ I3 ∪ I4 ∪ I5). This completes the description of the sets S1 to S5 in terms of the variables r and s. In the next section, we provide an expression for the optimal strategy of the players in addition to their optimal payoffs.
Depending on whether I3 and I4 are empty, we further classify Type I equilibria into four categories.
Type Ia: I3 = ∅, I4 = ∅
In this case, from Lemma 22 (a), a_i β*_i = c_1 ∀i ∈ I5. Since Σ_{l=1}^m β*_l = m − k_d, we obtain the following for indices in I5:

Σ_{l=j_1}^{j_s} β*_l = s − k_d ⇒ c_1 = (s − k_d) / (Σ_{j∈I5} 1/a_j)    (6.10)

⇒ β*_i = (s − k_d) / (a_i Σ_{j∈I5} 1/a_j)    (6.11)

Additionally, b_i α*_i = c_2 ∀i ∈ I5. Since Σ_{l=1}^m α_l = k_a, we obtain the following:

Σ_{l=j_1}^{j_s} α*_l = k_a − m + s + r ⇒ c_2 = (k_a − m + s + r) / (Σ_{j∈I5} 1/b_j)    (6.12)

⇒ α*_i = (k_a − m + s + r) / (b_i Σ_{j∈I5} 1/b_j)  ∀i ∈ I5    (6.13)

The optimal solution should satisfy the following feasibility conditions:

0 ≤ α*_i ≤ 1,  0 ≤ β*_i ≤ 1    (6.14)

Furthermore, the optimal solutions should satisfy the following conditions that arise from Definition 6.2.2:

c_1 ≥ a_r,  c_1 ≤ min(a_{i∈I2}),  c_2 ≤ min(b_{i∈I2}).    (6.15)

By substituting α*, β* in (6.6) and (6.8), we obtain the following:

v*_1 = Σ_{l∈I2} a_l + (k_a − m + s + r)(s − k_d) / (Σ_{j∈I5} 1/a_j),
v*_2 = Σ_{l∈I2} b_l + (k_a − m + s + r)(s − k_d) / (Σ_{j∈I5} 1/b_j)    (6.16)
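The Type Ia closed form lends itself to a small candidate-checking routine. The Python sketch below is our own transcription (the helper name `type_ia_candidate` and the 0-based index bookkeeping are assumptions): it evaluates (6.11), (6.13) and (6.16) for one pair (r, s) and rejects the candidate when the box constraints (6.14) fail; the optimality conditions (6.15) are not checked here.

```python
def type_ia_candidate(a, b, ka, kd, r, s):
    """Evaluate the Type Ia closed form (6.11), (6.13), (6.16) for one (r, s).

    a, b: lists of target utilities, with a sorted ascending to match the
    labeling assumed in the text. Indices are 0-based. Returns the interior
    strategies on I5 plus the payoffs, or None if (6.14) is violated.
    Optimality (6.15) is intentionally not checked in this sketch.
    """
    m = len(a)
    rest = list(range(r, m))                   # I \ I1, with I1 = {1, ..., r}
    I5 = sorted(rest, key=lambda i: b[i])[:s]  # s-minimum b_i's in I \ I1
    I2 = [i for i in rest if i not in I5]
    inv_a = sum(1.0 / a[j] for j in I5)
    inv_b = sum(1.0 / b[j] for j in I5)
    beta5 = {i: (s - kd) / (a[i] * inv_a) for i in I5}           # (6.11)
    alpha5 = {i: (ka - m + s + r) / (b[i] * inv_b) for i in I5}  # (6.13)
    if not all(0.0 <= x <= 1.0 for x in list(alpha5.values()) + list(beta5.values())):
        return None                            # violates (6.14)
    v1 = sum(a[l] for l in I2) + (ka - m + s + r) * (s - kd) / inv_a  # (6.16)
    v2 = sum(b[l] for l in I2) + (ka - m + s + r) * (s - kd) / inv_b
    return alpha5, beta5, v1, v2
```

For instance, with the toy data a = b = [1, 2, 3, 4], k_a = 2, k_d = 2, r = 0, s = 3, the candidate is feasible, Σ_{i∈I5} α*_i = k_a − m + s + r = 1, and both payoffs evaluate to 4 + 6/11.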
Type Ib: I3 ≠ ∅, I4 = ∅

In this case, we consider ī = j_s. Therefore, I5 = {j_1, . . . , j_{s−1}}, where b_{j_1} < · · · < b_{j_{s−1}}, and I3 = {j_s}. We have b_i α*_i = c_2 for all i ∈ I3 ∪ I5. Therefore,

α*_i = b_{j_s} / b_i  ∀i ∈ I5,    (6.17)

and α*_{j_s} = 1 (since c_2 = b_{j_s}). Note that Σ_{l=1}^m α_l = k_a in this case. Substituting α*_i = b_{j_s}/b_i leads to the following condition:

b_{j_s} Σ_{l∈I5} 1/b_l = k_a − m + r + s − 1.    (6.18)

Also, a_i β*_i = c_1 for all i ∈ I5. Substituting β*_i into Σ_{l=1}^m β_l = m − k_d leads to the following condition:

c_1 Σ_{l∈I5} 1/a_l + β*_{j_s} = s − k_d.    (6.19)

Since 0 < β*_i ≤ 1, the right-hand side of the above expression should be positive. Moreover, if there is no feasible solution for β*_{j_s} = 1 (i.e., β*_{j_{s−1}} > 1), we conclude that there is no feasible solution for this case (because decreasing β*_{j_s} increases c_1, and consequently increases β*_{j_{s−1}}). From β*_{j_s} = 1, we obtain the following expressions for c_1 and β*_i:

c_1 = (s − k_d − 1) / (Σ_{j∈I5} 1/a_j),   β*_i = (s − k_d − 1) / (a_i Σ_{j∈I5} 1/a_j)  ∀i ∈ I5.    (6.20)

In this case, the feasibility conditions are (6.14) and (6.18). Substituting α* and β* in (6.6) and (6.8) leads to the following payoffs for the players:

v*_1 = Σ_{l∈I2} a_l + a_{j_s} + (k_a − m + r + s − 1)(s − k_d − 1) / (Σ_{j∈I5} 1/a_j),
v*_2 = Σ_{l∈I2} b_l + b_{j_s}(s − k_d).
Type Ic: I3 = ∅, I4 ≠ ∅

In this case, î = r + 1, I5 = {j_1, . . . , j_s} and I4 = {r + 1}. Since β*_î = 1, we have c_1 = a_{r+1} and

β*_i = a_{r+1} / a_i  ∀i ∈ I5.    (6.21)

Substituting the β*_i's in Σ_{l=1}^m β_l = m − k_d leads to the following condition:

a_{r+1} Σ_{l∈I5} 1/a_l = s − k_d.    (6.22)

Also, b_i α*_i = c_2 for all i ∈ I5. Substituting α*_i into Σ_{l=1}^m α_l = k_a leads to the following condition:

c_2 Σ_{l∈I5} 1/b_l + α*_{r+1} = k_a − m + r + s + 1.    (6.23)

Since 0 < α*_i ≤ 1, the right-hand side of the above expression should be positive. Moreover, b_{r+1} α*_{r+1} ≥ c_2. Therefore, if there is no feasible solution for α*_{r+1} = c_2/b_{r+1}, we conclude that there is no feasible solution for this case (because decreasing α*_{r+1} increases c_2, and consequently increases α*_{j_s}). Therefore, we pick α*_{r+1} = c_2/b_{r+1}, which leads to the following expressions for c_2 and α*_i:

c_2 = (k_a − m + r + s + 1) / (1/b_{r+1} + Σ_{j∈I5} 1/b_j),
α*_i = (k_a − m + r + s + 1) / (b_i (1/b_{r+1} + Σ_{j∈I5} 1/b_j))  ∀i ∈ I5.    (6.24)

In this case, the feasibility conditions are 0 ≤ α*_i ≤ 1, 0 ≤ β*_i ≤ 1 and (6.22). By substituting the expressions for α* and β* in (6.6) and (6.8), the payoffs of the players are given by the following expressions:

v*_1 = Σ_{l∈I2} a_l + (k_a − m + r + s + 1) a_{r+1} / (1 + b_{r+1} Σ_{l∈I5} 1/b_l) + (k_a − m + r + s + 1) a_{r+1} (Σ_{l∈I5} 1/b_l) / (1/b_{r+1} + Σ_{j∈I5} 1/b_j),    (6.25)

v*_2 = Σ_{l∈I2} b_l + (k_a − m + r + s + 1)(s − k_d + 1) / (1/b_{r+1} + Σ_{j∈I5} 1/b_j)    (6.26)
Type Id: I3 ≠ ∅, I4 ≠ ∅

In this case, ī = j_s and î = r + 1. Therefore, c_1 = a_{r+1}, c_2 = b_{j_s}, and β*_{r+1} = α*_{j_s} = 1. Consequently, we obtain the following:

β*_i = a_{r+1} / a_i,   α*_i = b_{j_s} / b_i   ∀i ∈ I5.    (6.27)

Substituting α*_i and β*_i in Σ_{l=1}^m α_l = k_a and Σ_{l=1}^m β_l = m − k_d, respectively, we obtain the following expressions for α*_{r+1} and β*_{j_s}:

α*_{r+1} = k_a − m + r + s − b_{j_s} Σ_{l∈I5} 1/b_l,    (6.28)

β*_{j_s} = s − k_d − a_{r+1} Σ_{l∈I5} 1/a_l    (6.29)

The optimal solution should satisfy the feasibility conditions 0 ≤ α*_i ≤ 1, 0 ≤ β*_i ≤ 1. Furthermore, the optimal solutions should satisfy the conditions in Definition 6.2.2. This leads to the following condition:

c_1 ≤ β*_{j_s} a_{j_s},   c_2 ≤ α*_{r+1} b_{r+1}.    (6.30)

Substituting the expressions for α* and β* in (6.6) and (6.8) leads to the following expressions for the payoffs of the players:

v*_1 = Σ_{l∈I2} a_l + a_{j_s} [s − k_d − a_{r+1} Σ_{j∈I5} 1/a_j] + a_{r+1} [k_a − m + r + s − b_{j_s} Σ_{j∈I5} 1/b_j] + b_{j_s} a_{r+1} Σ_{l∈I5} 1/b_l,    (6.31)

v*_2 = Σ_{l∈I2} b_l + b_{j_s} [s − k_d − a_{r+1} Σ_{j∈I5} 1/a_j] + b_{r+1} [k_a − m + r + s − b_{j_s} Σ_{j∈I5} 1/b_j] + b_{j_s} a_{r+1} Σ_{l∈I5} 1/a_l.    (6.32)
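The Type Id formulas admit a direct numerical check. The sketch below is our own Python transcription (the function name and the 0-based index bookkeeping are assumptions, and the demo uses r = 0 so the value of β on I1, which this excerpt does not fix, never arises): it evaluates (6.27)-(6.29) and tests the conditions (6.14) and (6.30).

```python
def type_id_candidate(a, b, ka, kd, r, I5, js):
    """Evaluate the Type Id closed form (6.27)-(6.29); indices are 0-based.

    I1 = {0, ..., r-1}, I4 = {r} (target r+1 in the text's 1-based labels),
    I3 = {js}, and I5 is the given list of interior indices.
    Returns (alpha, beta, feasible). Beta on I1 is left at 0 here, which is
    only harmless for r = 0, since the excerpt does not pin it down.
    """
    m, s = len(a), len(I5) + 1
    alpha, beta = [0.0] * m, [0.0] * m
    I2 = [i for i in range(m) if i > r and i != js and i not in I5]
    for i in I2:
        alpha[i] = beta[i] = 1.0
    beta[r] = 1.0        # beta* = 1 on I4
    alpha[js] = 1.0      # alpha* = 1 on I3
    for i in I5:
        beta[i] = a[r] / a[i]                                         # (6.27)
        alpha[i] = b[js] / b[i]
    alpha[r] = ka - m + r + s - b[js] * sum(1.0 / b[l] for l in I5)   # (6.28)
    beta[js] = s - kd - a[r] * sum(1.0 / a[l] for l in I5)            # (6.29)
    feasible = (all(0.0 <= x <= 1.0 for x in alpha + beta)            # (6.14)
                and a[r] <= beta[js] * a[js]                          # c1 bound, (6.30)
                and b[js] <= alpha[r] * b[r])                         # c2 bound, (6.30)
    return alpha, beta, feasible
```

For the zero-sum-style data a = [1, 2, 3, 4, 5], b = −a, k_a = 4, k_d = 2, r = 0, I5 = {4, 5} and j_s = 3 (1-based), this yields α*_{r+1} = 0.65 and β*_{j_s} = 0.55, and the resource constraints Σα = k_a and Σβ = m − k_d hold exactly.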
6.5 Type II Equilibria: Analysis
In this subsection, we consider (α*_i, β*_i) ∈ S2 ∪ S6 ∪ S7 for all i ∈ I. In other words, Player 2 has a pure strategy in a Type II equilibrium. Intuitively, a pure strategy leads to β*_i = 1 for the m − k_d highest b_i's. This can be explained by the optimality condition in Definition 6.2.2 for Player 2. Assume i ∈ I2 and j ∈ I6 ∪ I7. Then (e_j − e_i)^T ∇_β v_2|_{v*_2} ≤ 0, that is, b_{i∈I2} ≥ b_j α*_j for j ∈ I6 ∪ I7, which implies that b_{i∈I2} ≥ b_{i∈I6∪I7}. Note that from Assumption 1, b_i ≠ b_j for all i ≠ j. Without loss of generality, we assume b_i > b_j for all i > j. This implies that I2 = {k_d + 1, . . . , m}, and we set α*_{i∈I2} = 1 and β*_{i∈I2} = 1.
Next, we compute α*_{i∈I6} and α*_{i∈I7}. We pick I7 as the indices of the s-minimum b_i's in I \ I2. Since b_1 < · · · < b_s, I7 = {1, . . . , s}. Note that k_d ≥ s ≥ 2. Consequently, I6 = {s + 1, . . . , k_d}. In order to find a feasible α, we set the entries to the following values:

α*_{i∈I7\{s}} = b_{k_d+1} / b_i,   α*_s = k_a − m + s − b_{k_d+1} Σ_{l=1}^{s−1} 1/b_l    (6.33)

From the feasibility conditions, 0 ≤ α*_i ≤ 1 and 0 ≤ β*_i ≤ 1. Therefore, α*_s satisfies the following condition:

0 < α*_s ≤ 1.    (6.34)
Substituting α* and β* in (6.6) and (6.8) leads to the following outcome for the players:

v*_1 = Σ_{l∈I2} a_l,   v*_2 = Σ_{l∈I2} b_l.    (6.35)
6.6 Computation of the Equilibria
In this section, we present a result on the multiplicity of the equilibria for different types.
Lemma 23. If σ = (α, β) and σ′ = (α′, β′) are two NE of a security game, then

(a) ∀i ∈ I, α_i = α′_i or β_i = β′_i,

(b) c_1 = c′_1 and c_2 = c′_2, or c_1 = c′_1 and v_1 = v′_1, or c_2 = c′_2 and v_2 = v′_2.
Proof. (a) Please refer to Korzhyk et al. (2011a).
(b) Assume there is an i such that α_i > α′_i. From part (a), β_i = β′_i. Therefore, α_i > 0 and α′_i < 1, which leads to a_i β_i ≥ c_1 and a_i β′_i ≤ c′_1, and consequently c′_1 ≥ c_1. Since Σ_{l=1}^m α_l = k_a, there is an index j such that α_j < α′_j and β_j = β′_j. Similarly, we can show that c_1 ≥ c′_1. Combining the two conclusions, c_1 = c′_1. In the same way, we can show that if there is a j such that β_j > β′_j, then c_2 = c′_2.

Note that if β = β′ (respectively, α = α′), then v_1 = v′_1 (respectively, v_2 = v′_2), because of the optimality conditions.
Lemma 24. An equilibrium of Type I is unique.
Proof. The proof is based on Lemma 23, which states that for two distinct equilibria (α, β) and (α′, β′), ∀i ∈ I either α_i = α′_i or β_i = β′_i; additionally, c_1 = c′_1 and c_2 = c′_2.

First, we show that an equilibrium of Type Ia is unique. Assume (α, β) and (α′, β′) are two equilibria of Type Ia. From Lemma 23 (a), i ∈ I5 ⇒ i ∉ I′_1 ∪ I′_2. Similarly, i ∈ I′_5 ⇒ i ∉ I1 ∪ I2; therefore I5 = I′_5. Moreover, c_1 = c′_1 and c_2 = c′_2 imply α_{i∈I5} = α′_{i∈I5} and β_{i∈I5} = β′_{i∈I5}. This leads to I1 = I′_1 and I2 = I′_2 (because Σ_{l=1}^m α_l = k_a and |I2| = |I′_2|). This argument rules out multiple solutions within Type Ia and within Type Ib.

Next, let (α, β) and (α′, β′) be equilibria of Type Ia and Type Ib, respectively. I5 and I′_5 can differ in at most one element ī, i.e., ī ∈ I′_4 (⇒ a_ī = c′_1). Let us first consider the case ī ∈ I5, so a_ī β_ī = c_1. Since c_1 = c′_1, we get β_ī = 1. This contradicts the assumption that ī ∈ I5 (⇒ 0 < β_ī < 1). Next, assume that ī ∉ I5. This implies I5 = I′_5, and hence α_{i∈I5} = α_{i∈I′_5}. Since Σ_{l=1}^m α_l = k_a, we have Σ_{i∈I′_2} α_i + α_ī + Σ_{i∈I′_5} α_i = k_a. Note that Σ_{i∈I′_5} α_i is an integer (because I5 = I′_5 and, from (6.12), this sum equals k_a − m + s + r). This implies that α_ī is an integer, which is a contradiction. This completes the proof that (α, β) and (α′, β′) cannot be equilibria of Type Ia and Type Ib, respectively. Thus, there are no multiple solutions across Type Ia and Type Ib. The proofs for the other cases are based on similar arguments.
Lemma 25. If (α∗, β∗) is of Type II, the following hold:
1. There cannot be an equilibrium of Type I.
2. There is a continuum of equilibria of Type II.
Proof. 1. Assume (α, β) and (α′, β′) are two equilibria of Type I and Type II, respectively. We have I7 ≠ ∅, else Σ_{l∈I2∪I6} α_l = k_a = m, which contradicts the assumption that k_a < m. Furthermore, |I7| > 1 because 0 < α_{i∈I7} < 1 and Σ_{l=1}^m α_l = k_a, which is an integer. Since any deviation of α_{i_1∈I7}, α_{i_2∈I7} in the direction e_{i_1} − e_{i_2} should not lead to a better outcome for Player 1, (e_{i_1} − e_{i_2})^T ∇_α v_1|_{v*_1} = 0. This implies that c′_1 = 0. On the other hand, I5 ≠ ∅ (because Σ_{l=1}^m α_l = k_a is an integer), which gives c_1 > 0 and contradicts the fact that c_1 = c′_1 (from Lemma 23).

2. Since 0 < α_{i∈I7} < 1 and I7 = {1, . . . , s}, e = e_s − e_{s−1} − · · · − e_1 is a feasible direction, and consequently any small perturbation of α in the direction of e satisfies the optimality and feasibility conditions of solutions of Type II. That is, there exists a neighborhood of (α, β) in which every perturbation of α in the direction e is a solution of Type II.
Algorithms 6 and 7 present a quadratic-time procedure for explicitly computing the equilibrium. It is based on checking for the existence of an equilibrium of each type that satisfies the necessary conditions derived in the previous sections, for all possible values of r and s.
Algorithm 6 Computation of Equilibrium of Type I
1: Input: a, b, k_a and k_d
2: Output: (α*, β*), (v*_1, v*_2), Type
3: sort a in ascending order
4: for r = 0 : m − k_a do
5:   for s = 1 : m − r do
6:     Case Ia: I1 = {1, . . . , r}, I3 = I4 = ∅,
7:       I5 = {j_1, . . . , j_s}, I2 = I \ (I1 ∪ I3 ∪ I4 ∪ I5)
8:     compute α, β from (6.13), (6.11)
9:     check feasibility (6.14) and optimality (6.15)
10:    if solution is feasible then
11:      (α*, β*) ← (α, β), (v*_1, v*_2) ← (v_1, v_2) (6.16)
12:      Type ← Type Ia
13:      return
14:    end if
15:    Case Ib: I1 = {1, . . . , r}, I3 = ∅, I4 = {r + 1},
16:      I5 = {j_1, . . . , j_s}, I2 = I \ (I1 ∪ I3 ∪ I4 ∪ I5)
17:    compute α, β from (6.17), (6.20)
18:    check feasibility conditions (6.14) and (6.18)
19:    if solution is feasible then
20:      (α*, β*) ← (α, β), (v*_1, v*_2) ← (v_1, v_2) (6.21)
21:      Type ← Type Ib
22:      return
23:    end if
24:    Case Ic: I1 = {1, . . . , r}, I3 = {j_s}, I4 = ∅,
25:      I5 = {j_1, . . . , j_{s−1}}, I2 = I \ (I1 ∪ I3 ∪ I4 ∪ I5)
26:    compute α, β from (6.24), (6.21)
27:    check feasibility (6.14), (6.22)
28:    if solution is feasible then
29:      (α*, β*) ← (α, β), (v*_1, v*_2) ← (v_1, v_2) (6.26)
30:      Type ← Type Ic
31:      return
32:    end if
33:    Case Id: I1 = {1, . . . , r}, I3 = {j_s}, I4 = {r + 1},
34:      I5 = {j_1, . . . , j_{s−1}}, I2 = I \ (I1 ∪ I3 ∪ I4 ∪ I5)
35:    compute α, β from (6.27), (6.28)
36:    check feasibility (6.14), (6.30)
37:    if solution is feasible then
38:      (α*, β*) ← (α, β), (v*_1, v*_2) ← (v_1, v_2) (6.33)
39:      Type ← Type Id
40:      return
41:    end if
42:  end for
43: end for
Algorithm 7 Computation of Equilibrium of Type II
1: Input: a, b, k_a and k_d
2: Output: (α*, β*), (v*_1, v*_2), Type
3: sort b in ascending order
4: for s = 2 : k_d do
5:   Type II: I2 = {k_d + 1, . . . , m}, I7 = {1, . . . , s},
6:     I6 = {s + 1, . . . , k_d}
7:   compute α, β from (6.33)
8:   check feasibility (6.14), (6.34)
9:   if solution is feasible then
10:    (α*, β*) ← (α, β), (v*_1, v*_2) ← (v_1, v_2) (6.35)
11:    Type ← Type II
12:    return
13:  end if
14: end for
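The per-s check inside Algorithm 7 follows directly from (6.33)-(6.35). In the Python sketch below (the function name and 0-based indexing are our own; setting α = 1 on I6 is an assumption we make so that Σα = k_a, which is what (6.33) implies), a candidate is returned only when (6.34) holds.

```python
def type_ii_candidate(a, b, ka, kd, s):
    """Check one value of s per Algorithm 7; b is assumed sorted ascending.

    I2 = {kd+1, ..., m}, I7 = {1, ..., s}, I6 = {s+1, ..., kd} in the text's
    1-based labels; everything below is 0-based. Returns (alpha, beta, v1, v2)
    or None when (6.34) fails.
    """
    m = len(b)
    alpha, beta = [0.0] * m, [0.0] * m
    for i in range(kd, m):                # I2: defended and attacked, alpha = beta = 1
        alpha[i] = beta[i] = 1.0
    for i in range(s, kd):                # I6: alpha = 1 so that sum(alpha) = ka
        alpha[i] = 1.0
    for i in range(s - 1):                # I7 \ {s}
        alpha[i] = b[kd] / b[i]           # b_{kd+1} in the text's 1-based notation
    alpha[s - 1] = ka - m + s - b[kd] * sum(1.0 / b[l] for l in range(s - 1))  # (6.33)
    if not 0.0 < alpha[s - 1] <= 1.0:     # (6.34)
        return None
    v1 = sum(a[kd:])                      # (6.35)
    v2 = sum(b[kd:])
    return alpha, beta, v1, v2
```

For example, with b = [−10, −8, −6, −4, −2], a = [1, 2, 3, 4, 5], k_a = 4, k_d = 3 and s = 2, the candidate is feasible, Σα = 4 = k_a, the defender plays a pure strategy on the two highest b_i's, and (v*_1, v*_2) = (9, −6).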
6.7 Special Case: Zero-Sum Security Game

In this section, we consider a zero-sum security game, i.e., a = −b. In the following, we show that the NE of a zero-sum security game can be computed in linear time. Note that in a zero-sum game v*_1 = −v*_2. Without loss of generality, suppose that a is sorted in ascending order, and let

d_s = Σ_{i=s}^m 1/a_i.
Lemma 26. Consider case Id. For a given s,

1. if k_a − m + r + s > d_{m−s+1} a_{m−s+1}, then there is no solution for (s, r′) such that r′ > r, and no solution for (s′, r) such that s′ > s.

2. if k_a − m + r + s < d_{m−s+1} a_{m−s+1} − 1, then there is no solution for any r′ < r.

Proof. 1) From feasibility condition (6.14), 0 < α*_{r+1} < 1. That is,

0 < k_a − m + r + s − a_{m−s+1} d_{m−s+2} < 1.    (6.36)

The above inequality can be written in the following form:

a_{m−s+1} d_{m−s+1} − 1 < k_a − m + r + s < a_{m−s+1} d_{m−s+1}.    (6.37)

For all r′ > r, if k_a − m + r + s > d_{m−s+1} a_{m−s+1}, then k_a − m + r′ + s > d_{m−s+1} a_{m−s+1}. This implies that inequality (6.37) is violated, and consequently there is no solution for (s, r′) such that r′ > r.

Next, suppose k_a − m + r + s > d_{m−s+1} a_{m−s+1}. Since d_{m−s+1} a_{m−s} < d_{m−s+1} a_{m−s+1}, we obtain k_a − m + r + s + 1 > d_{m−s+1} a_{m−s} + 1. This can be written as

k_a − m + r + s + 1 > d_{m−s} a_{m−s}.    (6.38)

The above inequality implies that there is no solution at (s + 1, r). This argument can be extended to all s′ > s and r.
From Lemma 26, we conclude that it is not required to examine all values of (s, r). First, we set I5 = {m − s + 2, . . . , m} and r = m − s − 1. Next, based on the conditions from Lemma 26, we can decrease r or increase s.
Lemma 27. Consider case Ia. For a given s,

1. if k_a − m + r + s > d_{m−s+1} a_{m−s+1}, then there is no solution for (s, r′) such that r′ > r, and no solution for (s′, r) such that s′ > s.

2. if k_a − m + r + s < d_{m−s+1} a_{m−s}, then there is no solution for any r′ < r.

Proof. Arguments similar to the proof of Lemma 26 hold in this case.
Note that in cases Ib and Ic, for a given s there is at most one specific r which can satisfy (6.18) and (6.22), respectively. Hence, the cost of examining each case is linear in m. Algorithm 8 shows the linear-time procedure for analyzing case Id.
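The window (6.37) is easy to evaluate programmatically. The sketch below is our own Python rendering (the function name is hypothetical): it computes d_{m−s+1} and classifies a pair (r, s) according to Lemma 26, which is how Algorithm 8 decides whether to shrink r or advance s.

```python
def lemma26_classify(a, ka, r, s):
    """Classify (r, s) for case Id in the zero-sum game per Lemma 26.

    a is sorted ascending; d_t = sum_{i=t}^{m} 1/a_i in the text's 1-based
    notation, so d_{m-s+1} sums over the s largest entries. 0-based below.
    """
    m = len(a)
    d = sum(1.0 / a[i] for i in range(m - s, m))   # d_{m-s+1}
    t = ka - m + r + s
    bound = d * a[m - s]                           # d_{m-s+1} * a_{m-s+1}
    if t > bound:
        return "prune: no solution for larger r or larger s"
    if t < bound - 1:
        return "prune: no solution for smaller r"
    return "inside the window (6.37)"
```

For instance, with a = [1, 2, 3, 4], k_a = 3 and s = 2, the bound is (1/3 + 1/4) · 3 = 1.75: the pair (r, s) = (1, 2) gives k_a − m + r + s = 2 > 1.75 and is pruned, while (0, 2) gives 1 and lies inside the window.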
6.8 Simulation
Example 6.8.1. In the following, we provide three numerical examples, for which the NE is of Type Ia, Type Id, and Type II, respectively.
Algorithm 8 Equilibrium Computation: Zero-Sum Game
1: Input: a, k_a and k_d
2: Output: (α*, β*), (v*_1, v*_2), Type
3: sort a in ascending order
4: r_new = m − 3
5: for s = 1 : m do
6:   for r = r_new : 0 do
7:     Case Id: I1 = {1, . . . , r}, I3 = {s}, I4 = {r + 1},
8:       I5 = {m − s + 2, . . . , m}, I2 = I \ (I1 ∪ I3 ∪ I4 ∪ I5)
9:     compute α, β from (6.27), (6.28)
10:    check feasibility (6.14), (6.30)
11:    if solution is feasible then
12:      (α*, β*) ← (α, β), (v*_1, v*_2) ← (v_1, v_2) (6.33)
13:      Type ← Type Id
14:      return
15:    else if k_a − m + r + s > d_{m−s+1} a_{m−s+1} then
16:      r_new ← r
17:    else if k_a − m + r + s < d_{m−s+1} a_{m−s+1} − 1 then
18:      return r
19:    end if
20:  end for
21: end for
1. Suppose m = 10, ka = 4, kd = 5 and
a = [9, 10, 14, 16, 27, 31, 33, 35, 38, 40]T ,
b = [−22,−4,−11,−45,−7,−41,−26,−49,−3,−23]T
The solution to the security game with the above parameters is of Type Ia:
α∗ = [0.21, 1, 0.42, 0.1, 0.67, 0.11, 0.18, 0.09, 1, 0.2]T ,
β∗ = [0.84, 1, 0.54, 0.47, 0.28, 0.24, 0.23, 0.22, 1, 0.19]T ,
(v∗1, v∗2) = (52.74,−28.24).
2. Suppose m = 10, ka = 4, kd = 5 and
a = [5, 8, 19, 20, 22, 34, 39, 43, 49, 50]T ,
b = [−7,−17,−8,−38,−37,−10,−44,−22,−21,−34]T
84
The solution to the security game with the above parameters is of Type Id:

α* = [0, 0.01, 1, 0.26, 0.27, 1, 0.23, 0.45, 0.48, 0.29]^T,

β* = [1, 1, 1, 0.4, 0.36, 0.52, 0.2, 0.19, 0.16, 0.16]^T,

(v*_1, v*_2) = (52.74, −28.24).
3. Suppose m = 10, ka = 8, kd = 7 and
a = [9, 10, 14, 16, 27, 31, 33, 35, 38, 40]T ,
b = [−22,−4,−11,−45,−7,−41,−26,−49,−3,−23]T
The solution to the security game with the above parameters is of Type II:

α* = [1, 1, 1, 0.15, 1, 0.7, 1, 0.14, 1, 1]^T,

β* = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]^T,

(v*_1, v*_2) = (75, −14).
Note that the defender’s optimal action is a pure strategy.
6.9 Conclusion
In this work, we consider a non-zero-sum security game between an attacker and a defender with homogeneous resources and additive utility functions. We characterize the Nash equilibrium (NE) strategies, provide structural properties of the NE, and show that the optimal solutions are of two types. Using a variational approach, we propose a polynomial-time algorithm to compute the NE. Moreover, we provide closed-form expressions along with feasibility conditions for each type of solution. In addition to the interchangeability property, which is proved in Korzhyk et al. (2011a), we show that the optimal solution is unique and that, in some special cases, a continuum of solutions with the same outcome exists. Uniqueness and interchangeability are important notions in this context because, in non-cooperative games, there is no guarantee that the players select the same equilibrium, which can lead to a non-optimal outcome.
CHAPTER 7. FUTURE WORKS
In this work, we have explored some of the challenges facing power grids and introduced new and unconventional approaches to address them. We anticipate that the ideas, concepts, and algorithms introduced in this thesis can be used to improve the resilience of other kinds of infrastructure networks. Exploring these applications and extending some of the results in this thesis can be of interest to researchers in power engineering, computer networks, or vehicular networks. In this chapter, we provide a few directions for future work.
7.1 Rank deficiency and approximate security games

In Theorem 29 of Chapter 2, we proved that security games are rank deficient; specifically, we showed that the rank of the cost matrix is m. Lipton et al. (2003) propose the concept of an approximate ε-Nash equilibrium. They showed that for any two-person game there exists an approximate equilibrium with only logarithmic support (in the number of available pure strategies). Moreover, they provide a quasi-polynomial algorithm for computing such an approximate equilibrium. Lipton et al. (2003) also show that a special class of games has small support: if the payoff matrices are low rank, then the exact optimal solution has small support. This leads to efficient algorithms for computing a Nash equilibrium in low-rank games. In Theorem 29, we showed that zero-sum security games with additive utility are rank-deficient games. This result can be extended to general-sum security games, and consequently the approximation methods can be deployed on such security games.
7.2 Security Games with non-additive utility function
In general, existing security game models assume additive utility functions, or consider only one resource for the attacker. In this work, we exploit the additive property of the utility function to alleviate the computational complexity of computing the Nash equilibrium. Although these assumptions lead to tractable solutions, they cannot be applied in some real scenarios in which the targets have inherent dependencies. Dependencies among the targets in some complex networks lead to models with non-additive utilities.

The authors in Wang et al. (2017) show that, in general, solving a security game with non-additive utility is an NP-hard problem. An interesting direction of future research is to explore approximation techniques that provide practical solutions for security games with general utility functions.
7.3 Node attack model in a power network
In this work, we assumed that the targets are the edges of the network, i.e., the transmission links in a power grid. However, considering each node as a target leads to a more realistic cyber-attack scenario for the following reason. When an attacker breaches the security of a network, he is able to control the network from that specific access point, and consequently he can trip every relay that is connected to it. In this formulation, we can define γ_i = Σ_{j∈D_i} φ_j as the impact of node i when it is attacked but not protected, where D_i is the set of all edges connected to node i, and φ_j denotes the disturbance value, i.e., the impact of losing edge j. Although this formulation is more realistic and does not change the security game formulation, it gives rise to connectivity issues. An added layer of complexity arises due to islanding that can result from the disconnection of a node; the additivity assumption is relaxed in such scenarios.
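As a toy illustration of the node-attack impact γ_i = Σ_{j∈D_i} φ_j (the network, edge set, and φ values below are made up purely for illustration):

```python
# Hypothetical 4-bus network: each edge j carries a disturbance value phi_j
phi = {("A", "B"): 3.0, ("B", "C"): 1.5, ("B", "D"): 2.0}

def node_impact(node):
    """gamma_i: total impact of tripping every edge in D_i, the set of
    edges incident to the given node."""
    return sum(v for edge, v in phi.items() if node in edge)

# Node B touches all three edges, so losing it is costlier than losing A.
assert node_impact("B") == 6.5
assert node_impact("A") == 3.0
```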
7.4 Incomplete-information security games
In this work, we assumed that both players have complete information about the network and each other's capabilities. In reality, players only have partial information about the game. In general, attackers do not have complete information about the network, and on the other hand, the defender does not have information about the attacker's incentives, knowledge, or resources. Accounting for this leads to the concept of Bayesian games, in which players have incomplete information; this is another direction of future research.
BIBLIOGRAPHY
Ashok, A., Govindarasu, M., and Wang, J. (2017). Cyber-physical attack-resilient wide-area monitoring, protection, and control for the power grid. Proceedings of the IEEE, 105(7):1389–1407.

Basar, T. (1973). On the relative leadership property of stackelberg strategies. Journal of Optimization Theory and Applications, 11(6):655–661.

Basar, T. and Olsder, G. J. (1999). Dynamic noncooperative game theory, volume 23. SIAM.

Bagheri, A. and Zhao, C. (2019). Distributionally robust reliability assessment for transmission system hardening plan under n − k security criterion. IEEE Transactions on Reliability, 68(2):653–662.

Balcan, M.-F., Blum, A., Haghtalab, N., and Procaccia, A. D. (2015). Commitment without regrets: Online learning in stackelberg security games. In Proceedings of the sixteenth ACM conference on economics and computation, pages 61–78.

Bhattacharya, S., Conitzer, V., and Munagala, K. (2011). Approximation algorithm for security games with costly resources. In International Workshop on Internet and Network Economics, pages 13–24. Springer.

Bienstock, D. and Verma, A. (2010). The n-k problem in power grids: New models, formulations, and numerical experiments. SIAM Journal on Optimization, 20(5):2352–2380.

Boudko, S. and Abie, H. (2018). An evolutionary game for integrity attacks and defences for advanced metering infrastructure. In Proceedings of the 12th European Conference on Software Architecture: Companion Proceedings, pages 1–7.

Boudko, S. and Abie, H. (2019). Adaptive cybersecurity framework for healthcare internet of things. In 2019 13th International Symposium on Medical Information and Communication Technology (ISMICT), pages 1–6. IEEE.

Brown, M., An, B., Kiekintveld, C., Ordonez, F., and Tambe, M. (2012). Multi-objective optimization for security games. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pages 863–870. International Foundation for Autonomous Agents and Multiagent Systems.

Che, L., Liu, X., Wen, Y., and Li, Z. (2017). A mixed integer programming model for evaluating the hidden probabilities of n-k line contingencies in smart grids. IEEE Transactions on Smart Grid, 10(1):1036–1045.
Chen, J., Thorp, J. S., and Dobson, I. (2005). Cascading dynamics and mitigation assessment in power system disturbances via a hidden failure model. International Journal of Electrical Power & Energy Systems, 27(4):318–326.

Chen, Q. and McCalley, J. D. (2005). Identifying high risk n-k contingencies for online security assessment. IEEE Transactions on Power Systems, 20(2):823–834.

Chen, R. L.-Y., Cohn, A., Fan, N., and Pinar, A. (2014). Contingency-risk informed power system design. IEEE Transactions on Power Systems, 29(5):2087–2096.

Chen, X., Deng, X., and Teng, S.-H. (2009). Settling the complexity of computing two-player nash equilibria. Journal of the ACM (JACM), 56(3):1–57.

Chertkov, M., Pan, F., and Stepanov, M. G. (2010). Predicting failures in power grids: The case of static overloads. IEEE Transactions on Smart Grid, 2(1):162–172.

Coelho, E. P. R., Paiva, M. H. M., Segatto, M. E. V., and Caporossi, G. (2018). A new approach for contingency analysis based on centrality measures. IEEE Systems Journal, 13(2):1915–1923.

Conitzer, V. and Sandholm, T. (2006). Computing the optimal strategy to commit to. In Proceedings of the 7th ACM conference on Electronic commerce, pages 82–90. ACM.

Daskalakis, C., Goldberg, P. W., and Papadimitriou, C. H. (2009). The complexity of computing a nash equilibrium. SIAM Journal on Computing, 39(1):195–259.

Davis, C. and Overbye, T. (2009). Linear analysis of multiple outage interaction. In 2009 42nd Hawaii International Conference on System Sciences, pages 1–8. IEEE.

Davis, C. M. and Overbye, T. J. (2010). Multiple element contingency screening. IEEE Transactions on Power Systems, 26(3):1294–1301.

Du, Y., Li, F., Li, J., and Zheng, T. (2019). Achieving 100x acceleration for n-1 contingency screening with uncertain scenarios using deep convolutional neural network. IEEE Transactions on Power Systems, 34(4):3303–3305.

Emadi, H. and Bhattacharya, S. (2019). On security games with additive utility. IFAC-PapersOnLine, 52(20):351–356.

Emadi, H. and Bhattacharya, S. (2020). On the characterization of saddle point equilibrium for security games with additive utility. In International Conference on Decision and Game Theory for Security, pages 21–32. Springer.

Emadi, H., Clanin, J., Hyder, B., Khanna, K., Govindarasu, M., and Bhattacharya, S. (2021). An efficient computational strategy for cyber-physical contingency analysis in smart grids. IEEE Power and Energy Society General Meeting (PESGM).
Enns, M. K., Quada, J. J., and Sackett, B. (1982). Fast linear contingency analysis. IEEE Transactions on Power Apparatus and Systems, (4):783–791.

Eppstein, M. J. and Hines, P. D. (2012). A "random chemistry" algorithm for identifying collections of multiple contingencies that initiate cascading failure. IEEE Transactions on Power Systems, 27(3):1698–1705.

Fei, F., Jiang, A., and Tambe, M. (2018). Optimal patrol strategy for protecting moving targets with multiple mobile resources. US Patent 9,931,573.

Fujishige, S. (2005). Submodular functions and optimization. Elsevier.

Grossklags, J., Christin, N., and Chuang, J. (2008). Secure or insure? A game-theoretic analysis of information security games. In Proceedings of the 17th international conference on World Wide Web, pages 209–218.

Guo, Q., Gan, J., Fang, F., Tran-Thanh, L., Tambe, M., and An, B. (2018). On the inducibility of stackelberg equilibrium for security games. arXiv preprint arXiv:1811.03823.

Hamdi, M. and Abie, H. (2014). Game-based adaptive security in the internet of things for eHealth. In 2014 IEEE International Conference on Communications (ICC), pages 920–925. IEEE.

Hasan, S., Ghafouri, A., Dubey, A., Karsai, G., and Koutsoukos, X. (2017). Heuristics-based approach for identifying critical n-k contingencies in power systems. In 2017 Resilience Week (RWS), pages 191–197. IEEE.

Hines, P. D., Dobson, I., Cotilla-Sanchez, E., and Eppstein, M. (2013). "Dual graph" and "random chemistry" methods for cascading failure analysis. In 2013 46th Hawaii International Conference on System Sciences, pages 2141–2150. IEEE.

Hyder, B. and Govindarasu, M. (2020). Optimization of cybersecurity investment strategies in the smart grid using game-theory. In 2020 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), pages 1–5. IEEE.

Jain, M., Tsai, J., Pita, J., Kiekintveld, C., Rathi, S., Tambe, M., and Ordonez, F. (2010). Software assistants for randomized patrol planning for the LAX airport police and the federal air marshal service. Interfaces, 40(4):267–290.

Jiang, S., Song, Z., Weinstein, O., and Zhang, H. (2020). Faster dynamic matrix inverse for faster LPs. arXiv preprint arXiv:2004.07470.

Johnson, B., Grossklags, J., Christin, N., and Chuang, J. (2010). Uncertainty in interdependent security games. In International Conference on Decision and Game Theory for Security, pages 234–244. Springer.
Kaplunovich, P. and Turitsyn, K. (2016). Fast and reliable screening of n-2 contingencies. IEEE Transactions on Power Systems, 31(6):4243–4252.

Kaplunovich, P. and Turitsyn, K. S. (2014). Statistical properties and classification of n-2 contingencies in large scale power grids. In 2014 47th Hawaii International Conference on System Sciences, pages 2517–2526. IEEE.

Karlin, S. (2003). Mathematical methods and theory in games, programming, and economics, volume 2. Courier Corporation.

Khouzani, M., Mardziel, P., Cid, C., and Srivatsa, M. (2015). Picking vs. guessing secrets: A game-theoretic analysis. In 2015 IEEE 28th Computer Security Foundations Symposium, pages 243–257. IEEE.

Kiekintveld, C., Jain, M., Tsai, J., Pita, J., Ordonez, F., and Tambe, M. (2009). Computing optimal randomized resource allocations for massive security games. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1, pages 689–696. International Foundation for Autonomous Agents and Multiagent Systems.

Kleinberg, J. and Tardos, E. (2006). Algorithm design. Pearson Education India.

Korzhyk, D., Conitzer, V., and Parr, R. (2010). Complexity of computing optimal stackelberg strategies in security resource allocation games. In Twenty-Fourth AAAI Conference on Artificial Intelligence.

Korzhyk, D., Conitzer, V., and Parr, R. (2011a). Security games with multiple attacker resources. In Twenty-Second International Joint Conference on Artificial Intelligence.

Korzhyk, D., Conitzer, V., and Parr, R. (2011b). Solving stackelberg games with uncertain observability. In The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 3, pages 1013–1020. International Foundation for Autonomous Agents and Multiagent Systems.

Korzhyk, D., Yin, Z., Kiekintveld, C., Conitzer, V., and Tambe, M. (2011c). Stackelberg vs. nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness. Journal of Artificial Intelligence Research, 41:297–327.

Laszka, A., Vorobeychik, Y., and Koutsoukos, X. (2018). A game-theoretic approach for integrity assurance in resource-bounded systems. International Journal of Information Security, 17(2):221–242.

Law, Y. W., Alpcan, T., and Palaniswami, M. (2014). Security games for risk minimization in automatic generation control. IEEE Transactions on Power Systems, 30(1):223–232.
Li, X., Balasubramanian, P., Sahraei-Ardakani, M., Abdi-Khorsand, M., Hedman, K. W., and Podmore, R. (2016). Real-time contingency analysis with corrective transmission switching. IEEE Transactions on Power Systems, 32(4):2604–2617.

Lim, H.-S., Ghinita, G., Bertino, E., and Kantarcioglu, M. (2012). A game-theoretic approach for high-assurance of data trustworthiness in sensor networks. In 2012 IEEE 28th International Conference on Data Engineering, pages 1192–1203. IEEE.

Lipton, R. J., Markakis, E., and Mehta, A. (2003). Playing large games using simple strategies. In Proceedings of the 4th ACM Conference on Electronic Commerce, EC '03, pages 36–41, New York, NY, USA. ACM.

Manshaei, M. H., Zhu, Q., Alpcan, T., Bacsar, T., and Hubaux, J.-P. (2013). Game theory meets network security and privacy. ACM Computing Surveys (CSUR), 45(3):25.

Mehta, R. (2014). Constant rank bimatrix games are ppad-hard. In Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing, STOC '14, pages 545–554, New York, NY, USA. ACM.

Nash, J. (1951). Non-cooperative games. Annals of Mathematics, pages 286–295.

Ogryczak, W. and Tamir, A. (2003). Minimizing the sum of the k largest functions in linear time. Information Processing Letters, 85(3):117–122.

Paruchuri, P., Kraus, S., Pearce, J. P., Marecki, J., Tambe, M., and Ordonez, F. (2008). Playing games for security: An efficient exact algorithm for solving bayesian stackelberg games.

Pita, J., Jain, M., Marecki, J., Ordonez, F., Portway, C., Tambe, M., Western, C., Paruchuri, P., and Kraus, S. (2008). Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles international airport. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems: industrial track, pages 125–132. International Foundation for Autonomous Agents and Multiagent Systems.

Poudel, S., Ni, Z., and Sun, W. (2016). Electrical distance approach for searching vulnerable branches during contingencies. IEEE Transactions on Smart Grid, 9(4):3373–3382.

Shan, X. G. and Zhuang, J. (2020). A game-theoretic approach to modeling attacks and defenses of smart grids at three levels. Reliability Engineering & System Safety, 195:106683.

Soltan, S., Loh, A., and Zussman, G. (2017). Analyzing and quantifying the effect of k-line failures in power grids. IEEE Transactions on Control of Network Systems.

Soltan, S., Loh, A., and Zussman, G. (2018). Analyzing and quantifying the effect of k-line failures in power grids. IEEE Transactions on Control of Network Systems, 5(3):1424–1433.
Stein, N. D., Ozdaglar, A., and Parrilo, P. A. (2008). Separable and low-rank continuous games. International Journal of Game Theory, 37(4):475–504.

Stein, N. D., Parrilo, P. A., and Ozdaglar, A. (2011). Correlated equilibria in continuous games: Characterization and computation. Games and Economic Behavior, 71(2):436–455.

Sundar, K., Coffrin, C., Nagarajan, H., and Bent, R. (2018). Probabilistic n-k failure-identification for power systems. Networks, 71(3):302–321.

Tambe, M. (2011). Security and game theory: algorithms, deployed systems, lessons learned. Cambridge University Press.

Trajanovski, S., Kuipers, F. A., Hayel, Y., Altman, E., and Van Mieghem, P. (2015). Designing virus-resistant networks: a game-formation approach. In 2015 54th IEEE Conference on Decision and Control (CDC), pages 294–299. IEEE.

Tsai, J., Rathi, S., Kiekintveld, C., Ordonez, F., and Tambe, M. (2009). IRIS - a tool for strategic security allocation in transportation networks. AAMAS (Industry Track), pages 37–44.

Turitsyn, K. S. and Kaplunovich, P. (2013). Fast algorithm for n-2 contingency problem. In 2013 46th Hawaii International Conference on System Sciences, pages 2161–2166. IEEE.

Wang, S., Liu, F., and Shroff, N. (2017). Non-additive security games. In Thirty-First AAAI Conference on Artificial Intelligence.

Wang, S. and Shroff, N. (2017). Security game with non-additive utilities and multiple attacker resources. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 1(1):13.

Wood, A. J., Wollenberg, B. F., and Sheble, G. B. (2013). Power generation, operation, and control. John Wiley & Sons.

Xu, H. (2016). The mysteries of security games: Equilibrium computation becomes combinatorial algorithm design. In Proceedings of the 2016 ACM Conference on Economics and Computation, pages 497–514. ACM.

Xu, H., Fang, F., Jiang, A. X., Conitzer, V., Dughmi, S., and Tambe, M. (2014). Solving zero-sum security games in discretized spatio-temporal domains. In AAAI, pages 1500–1506.

Yin, Z., Korzhyk, D., Kiekintveld, C., Conitzer, V., and Tambe, M. (2010). Stackelberg vs. nash in security games: Interchangeability, equivalence, and uniqueness. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1 - Volume 1, pages 1139–1146. International Foundation for Autonomous Agents and Multiagent Systems.
94
Zhao, Y., Yuan, C., Liu, G., and Grinberg, I. (2018). Graph-based preconditioning conjugategradient algorithm for n-1 contingency analysis. In 2018 IEEE Power & Energy SocietyGeneral Meeting (PESGM), pages 1–5. IEEE.
95
APPENDIX. RANK DEFICIENCY OF SECURITY GAMES
In the following theorem, we show that for ka = kd = k, the payoff matrix A is low rank, with rank(A) = m. First, we establish a linear dependence among the columns of A.
Lemma 28. If Di = C ∪ D′i, Dj = C ∪ D′j, Dl = C ∪ D′l, and Dk = C ∪ D′k such that D′i, D′j are disjoint and D′l, D′k are disjoint (i.e., D′i ∩ D′j = ∅ and D′l ∩ D′k = ∅), and Di ∪ Dj = Dl ∪ Dk, then

ci + cj = cl + ck,   (.1)

where ci is the ith column of A.
Proof. The rth entry of ci is cri = ∑_{s ∈ Ar ∩ (C ∪ D′i)^c} φs. Since

Ar ∩ (C ∪ D′i)^c = (Ar ∩ (D′i)^c) \ C,   (.2)

we have

cri + crj = ∑_{s ∈ Ar \ (C ∪ D′i ∪ D′j)} 2φs + ∑_{s ∈ Ar ∩ (D′i ∪ D′j)} φs.

In a similar way,

crl + crk = ∑_{s ∈ Ar \ (C ∪ D′l ∪ D′k)} 2φs + ∑_{s ∈ Ar ∩ (D′l ∪ D′k)} φs.

Since D′i ∪ D′j = D′l ∪ D′k, the two expressions are equal. Therefore ci + cj = cl + ck.
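The identity (.1) can be checked numerically. The following sketch uses a hypothetical small instance (m = 6 components, k = 2, C = ∅ so that Di = D′i, and arbitrary positive weights φs chosen for illustration), builds the columns of A, and verifies the lemma for two disjoint splits of the same union:

```python
import itertools
import numpy as np

# Hypothetical instance: m = 6 components, k = 2, random positive weights.
m, k = 6, 2
rng = np.random.default_rng(0)
phi = rng.uniform(1, 2, size=m)
subsets = list(itertools.combinations(range(m), k))  # attacker sets A_r

def column(D):
    # r-th entry of the column for defender set D: sum of phi_s
    # over components s attacked by A_r but not defended by D.
    return np.array([sum(phi[s] for s in Ar if s not in D) for Ar in subsets])

# Two disjoint splits of the same union {0,1,2,3}, with C = {}:
ci, cj = column({0, 1}), column({2, 3})   # D'_i, D'_j disjoint
cl, ck = column({0, 2}), column({1, 3})   # D'_l, D'_k disjoint, same union
assert np.allclose(ci + cj, cl + ck)      # Lemma 28: c_i + c_j = c_l + c_k
```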
Theorem 29. In the security game (A, D, A), rank(A) = m.
Proof. First, we show that rank(A) ≥ m. To do so, we sort the Ai's and Di's so that the first m × m block of A has full rank. Let I = {1, . . . , m}, so each Ai and Di is a k-tuple subset of I. We sort both the Ai's and the Di's in the following way:

A1 = {1, . . . , k},   (.3)
A2 = {1, . . . , k − 1, k + 1},
...
Am−k+1 = {1, . . . , k − 1, m},
Am−k+2 = {k + 1, 2, . . . , k},
Am−k+3 = {1, k + 1, 3, . . . , k},
...
Am = {1, . . . , k − 2, k + 1, k}.
Each entry of A can be written explicitly as follows. The (i, j)th entry of A is ∑_{s ∈ Ai ∩ Dj^c} φs, so it suffices to identify the set Ai ∩ Dj^c in each case.

If i ≤ m − k + 1 and j > m − k + 1, then

Ai ∩ Dj^c = {j − m + k − 1, k + i − 1} if i ≥ 3,
Ai ∩ Dj^c = {j − m + k − 1} if i = 1, 2.   (.4)

If i > m − k + 1 and j ≤ m − k + 1, then

Ai ∩ Dj^c = {k, k + 1} if j ≥ 3,
Ai ∩ Dj^c = {k + 2 − j} if j = 1, 2.   (.5)

If i > m − k + 1 and j > m − k + 1, then

Ai ∩ Dj^c = {j − m + k − 1} if i ≠ j,
Ai ∩ Dj^c = ∅ if i = j.   (.6)

If i ≤ m − k + 1 and j ≤ m − k + 1, then

Ai ∩ Dj^c = {i + k − 1} if i ≠ j,
Ai ∩ Dj^c = ∅ if i = j.   (.7)
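The case formulas (.4)–(.7) can be verified by direct enumeration. The sketch below (a hypothetical instance with m = 7, k = 3) constructs the sorted subsets of (.3), using 1-based labels, and checks every (i, j) pair against the case that applies:

```python
import itertools

# Hypothetical check of (.4)-(.7) for m = 7, k = 3: build the sorted
# subsets of (.3) and compare A_i \ D_j against each case formula.
m, k = 7, 3
first = [set(range(1, k)) | {t} for t in range(k, m + 1)]        # A_1..A_{m-k+1}
second = [(set(range(1, k + 1)) - {d}) | {k + 1} for d in range(1, k)]
S = first + second                                               # A_i = D_i, sorted as in (.3)
for i, j in itertools.product(range(1, m + 1), repeat=2):
    Ai, Dj = S[i - 1], S[j - 1]
    if i <= m - k + 1 and j > m - k + 1:                         # case (.4)
        want = {j - m + k - 1, k + i - 1} if i >= 3 else {j - m + k - 1}
    elif i > m - k + 1 and j <= m - k + 1:                       # case (.5)
        want = {k, k + 1} if j >= 3 else {k + 2 - j}
    elif i > m - k + 1 and j > m - k + 1:                        # case (.6)
        want = {j - m + k - 1} if i != j else set()
    else:                                                        # case (.7)
        want = {i + k - 1} if i != j else set()
    assert Ai - Dj == want, (i, j)
```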
Consequently, the first m × m block of A, which, with slight abuse of notation, we also denote by A, can be written in the form

A = [ A11  A12
      A21  A22 ],   (.8)
where

A11 = [ 0      φk     φk     · · ·  φk
        φk+1   0      φk+1   · · ·  φk+1
        ...           . . .         ...
        φm−1   · · ·  φm−1   0      φm−1
        φm     · · ·         φm     0    ],   (.9)

A12 = [ φ1          φ2          · · ·  φk−1
        φ1          φ2          · · ·  φk−1
        φ1 + φk+2   φ2 + φk+2   · · ·  φk−1 + φk+2
        ...         ...                ...
        φ1 + φm     φ2 + φm     · · ·  φk−1 + φm   ],

A21 = [ φk+1   φk   φk + φk+1   · · ·  φk + φk+1
        ...    ...  ...                ...
        φk+1   φk   φk + φk+1   · · ·  φk + φk+1 ],

A22 = [ 0    φ2   · · ·        φk−1
        φ1   0    · · ·        φk−1
        φ1   φ2   0    · · ·   φk−1
        ...            . . .   ...
        φ1   φ2   · · ·        0    ].
Next, we show that the m × m block A is nonsingular, i.e., rank(A) = m. The columns of A are linearly independent if Aα = 0 holds only for α = 0. Subtract the first and second rows of A from each of the last k − 1 rows. Then A21 turns into a zero block, and A22 turns into

A22 = − [ 2φ1   φ2    · · ·   φk−1
          φ1    2φ2   · · ·   φk−1
          ...           . . .  ...
          φ1    φ2    · · ·   2φk−1 ].

This matrix is nonsingular, since it can be written as −(Φ(I + 11ᵀ))ᵀ = −(I + 11ᵀ)Φ, where Φ = diag(φ1, . . . , φk−1) and 1 is the all-ones vector. Hence the last k − 1 entries of α are zero. Moreover, A11 is nonsingular, and consequently the remaining entries of α are zero as well. Therefore α = 0 and rank(A) = m.
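Both rank claims can be illustrated numerically. The sketch below (a hypothetical instance with m = 6, k = 2, and randomly chosen positive weights φs, 1-based labels) builds the full N × N payoff matrix in the sorted order of (.3) and confirms that the leading m × m block is nonsingular and that rank(A) = m:

```python
import itertools
import numpy as np

# Hypothetical instance: m = 6, k = 2, random positive weights phi_s.
m, k = 6, 2
rng = np.random.default_rng(1)
phi = rng.uniform(1, 2, size=m)

# Sorted ordering of (.3): first {1,...,k-1} plus one of {k,...,m},
# then {1,...,k} with one element swapped for k+1.
first = [tuple(range(1, k)) + (t,) for t in range(k, m + 1)]
second = [tuple(sorted((set(range(1, k + 1)) - {d}) | {k + 1}))
          for d in range(1, k)]
head = first + second                          # the m leading subsets
rest = [S for S in itertools.combinations(range(1, m + 1), k)
        if S not in set(head)]
order = head + rest                            # all N = C(m, k) subsets

def payoff(Ai, Dj):
    # a_{ij}: sum of phi_s over attacked-but-undefended components s
    return sum(phi[s - 1] for s in Ai if s not in Dj)

A = np.array([[payoff(Ai, Dj) for Dj in order] for Ai in order])
assert abs(np.linalg.det(A[:m, :m])) > 1e-9    # leading m x m block nonsingular
assert np.linalg.matrix_rank(A) == m           # rank(A) = m
```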
Finally, we show that each of the last N − m columns of A can be written as a linear combination of the first m columns of A.

First, assume i > m and Di ∩ {1, . . . , k} has k − 1 elements. Without loss of generality, assume Di = {d1, 2, . . . , k} with d1 ∉ {1, . . . , k}. Now let Dj = {1, . . . , k − 1, k + 1}, Dk = {1, . . . , k − 1, d1}, and Dl = {k + 1, 2, . . . , k}. Note that, by the sorting of the Di's above, j, k, l ≤ m. From Lemma 28, we conclude that ci = ck + cl − cj, a linear combination of cj, ck, cl. By the same approach, if the cardinality of Di ∩ {1, . . . , k} is less than k − 1, then by choosing Dj = {1, . . . , k} and appropriate Dk, Dl, the column ci can again be written as a linear combination of the first m columns of A. Therefore rank(A) = m.
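The linear-combination step above admits a direct numerical check. The sketch below (a hypothetical instance with m = 6, k = 2, so that C = {2, . . . , k − 1} = ∅, and an arbitrary choice d1 = 4) verifies ci = ck + cl − cj for the construction in the proof:

```python
import itertools
import numpy as np

# Hypothetical instance: m = 6, k = 2, random positive weights, 1-based labels.
m, k = 6, 2
rng = np.random.default_rng(2)
phi = {s: rng.uniform(1, 2) for s in range(1, m + 1)}
subsets = list(itertools.combinations(range(1, m + 1), k))

def column(D):
    # Column of A for defender set D, indexed by attacker sets A_r.
    return np.array([sum(phi[s] for s in Ar if s not in D) for Ar in subsets])

d1 = 4                                    # any d1 outside {1, ..., k}
Di = {d1, *range(2, k + 1)}               # D_i = {d1, 2, ..., k}
Dj = {*range(1, k), k + 1}                # D_j = {1, ..., k-1, k+1}
Dk = {*range(1, k), d1}                   # D_k = {1, ..., k-1, d1}
Dl = {k + 1, *range(2, k + 1)}            # D_l = {k+1, 2, ..., k}
# Proof's identity: c_i = c_k + c_l - c_j
assert np.allclose(column(Di), column(Dk) + column(Dl) - column(Dj))
```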