Perfect Secrecy and Information Theory


Topics

Probability Theory

Perfect Secrecy

Information Theory


Some Terms

A cryptosystem is a five-tuple (P, C, K, E, D): the plaintext space, ciphertext space, key space, encryption rules, and decryption rules.

Computational security: measured by the computational effort required to break the cryptosystem.

Provable security: security is argued relative to another, presumably difficult, problem.

Unconditional security: secure even if Oscar (the adversary) can do whatever he wants, for as long as he wants.

Probability Review, pg. 1

A random variable X assigns a real number to each outcome of an experiment.

Probability: we denote pX(x) = Pr(X = x). For a subset A, p(A) = Σ_{x∈A} pX(x).

Joint probability: sometimes we want to consider more than one event at a time, in which case we lump them together into a joint random variable, e.g. Z = (X, Y), with pX,Y(x, y) = Pr(X = x, Y = y).

Independence: we say that X and Y are independent if pX,Y(x, y) = pX(x) pY(y) for all x and y.

Probability Review, pg. 2

Conditional probability: we will often ask about the probability of Y given that we have observed X = x. In particular, we define the conditional probability of Y = y given X = x by

pY(y | x) = pX,Y(x, y) / pX(x)

Independence: we immediately get pY(y | x) = pY(y).

Bayes's theorem: if pX(x) > 0 and pY(y) > 0, then

pX(x | y) = pX(x) pY(y | x) / pY(y)

Example

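For instance, here is a small Python sketch of the conditional-probability and Bayes's formulas above; the joint distribution is made up purely for illustration.

# Sketch: check the conditional-probability and Bayes formulas on a
# small, made-up joint distribution (the numbers are illustrative only).

# Joint distribution pX,Y(x, y) for X in {0, 1} and Y in {'a', 'b'}
p_XY = {
    (0, 'a'): 0.1, (0, 'b'): 0.3,
    (1, 'a'): 0.2, (1, 'b'): 0.4,
}

# Marginals pX(x) and pY(y)
p_X = {x: sum(p for (xx, _), p in p_XY.items() if xx == x) for x in (0, 1)}
p_Y = {y: sum(p for (_, yy), p in p_XY.items() if yy == y) for y in ('a', 'b')}

# Conditional probability pY(y | x) = pX,Y(x, y) / pX(x)
def p_y_given_x(y, x):
    return p_XY[(x, y)] / p_X[x]

# Bayes's theorem: pX(x | y) = pX(x) pY(y | x) / pY(y)
def p_x_given_y(x, y):
    return p_X[x] * p_y_given_x(y, x) / p_Y[y]

# The Bayes form agrees with the direct computation pX,Y(x, y) / pY(y)
for x in (0, 1):
    for y in ('a', 'b'):
        assert abs(p_x_given_y(x, y) - p_XY[(x, y)] / p_Y[y]) < 1e-12
print("Bayes check passed")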

Perfect Secrecy Defined

A cryptosystem (P, C, K, E, D) has perfect secrecy if, informally, "the ciphertext yields no information about the plaintext."

Perfect Secrecy

p(M): (a priori) probability that plaintext M is sent.

p(C): probability that ciphertext C was received.

p(M|C): probability that plaintext M was sent, given that ciphertext C was received.

p(C|M): probability that ciphertext C was received, given that plaintext M was sent.

p(K): probability that key K was chosen.

A cryptosystem provides perfect secrecy if p(M|C) = p(M) for all M and C with p(C) > 0.
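As a sanity check of the definition, here is a minimal Python sketch for a toy one-bit XOR cipher with a uniformly random key; the plaintext prior is made up, and the enumeration confirms p(M|C) = p(M) for every M and C.

# Sketch: verify p(M | C) = p(M) by enumeration for a one-bit XOR cipher.
from itertools import product

p_M = {0: 0.7, 1: 0.3}   # a priori plaintext distribution (illustrative)
p_K = {0: 0.5, 1: 0.5}   # key chosen uniformly at random

# Joint probability of (M, C), where C = M XOR K
p_MC = {}
for m, k in product(p_M, p_K):
    c = m ^ k
    p_MC[(m, c)] = p_MC.get((m, c), 0.0) + p_M[m] * p_K[k]

# Marginal distribution of the ciphertext
p_C = {c: sum(p for (_, cc), p in p_MC.items() if cc == c) for c in (0, 1)}

for m in p_M:
    for c in p_C:
        posterior = p_MC[(m, c)] / p_C[c]   # p(M = m | C = c)
        assert abs(posterior - p_M[m]) < 1e-12
print("perfect secrecy holds for the one-bit XOR cipher")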

Implications

Perfect secrecy means exactly that the random variables M and C are independent: the ciphertext C gives us no information about M.

A cryptosystem provides perfect secrecy if and only if

p(C|M) = p(C)

for all M, C with p(M) > 0 and p(C) > 0.

Proof: use Bayes's theorem.
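Concretely, for any M and C with p(M) > 0 and p(C) > 0, Bayes's theorem gives p(M|C) = p(M) p(C|M) / p(C), so p(M|C) = p(M) holds exactly when p(C|M) = p(C).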

Theorem

If a cryptosystem has perfect secrecy, then |𝐾| ≥ |𝑀|.

Proof idea: suppose |K| < |M| and fix a ciphertext C with p(C) > 0. Decrypting C under every key yields at most |K| plaintexts, so there is some message M that no key encrypts to C. For that M, p(C|M) = 0 while p(C) > 0, contradicting perfect secrecy.

One-Time Pad

The one-time pad, which is a provably secure cryptosystem, was developed by Gilbert Vernam in 1918.

The message is represented as a binary string (a sequence of 0’s and 1’s, using a coding mechanism such as ASCII).

The key is a truly random sequence of 0’s and 1’s of the same length as the message.

The encryption is done by adding the key to the message modulo 2, bit by bit. This operation is called exclusive or (XOR) and is denoted by the symbol ⊕.

Exclusive OR (⊕) Operator

a  b  c = a ⊕ b
0  0  0
0  1  1
1  0  1
1  1  0

Example

message = ‘IF’, so its ASCII code = (1001001 1000110)

key = (1010110 0110001)

Encryption (plaintext ⊕ key = ciphertext):

  1001001 1000110   plaintext
⊕ 1010110 0110001   key
  0011111 1110111   ciphertext

Decryption (ciphertext ⊕ key = plaintext):

  0011111 1110111   ciphertext
⊕ 1010110 0110001   key
  1001001 1000110   plaintext
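The same computation can be written as a short Python sketch; the 7-bit ASCII packing and the helper names are assumptions made only for this illustration, and the key below is the one from the example (a real one-time pad key must be freshly generated at random and never reused).

# Sketch: one-time pad over bit strings, reproducing the 'IF' example.
import secrets

def to_bits(text):
    """Encode text as a string of 7-bit ASCII codes."""
    return ''.join(format(ord(ch), '07b') for ch in text)

def from_bits(bits):
    """Decode a string of 7-bit ASCII codes back to text."""
    return ''.join(chr(int(bits[i:i + 7], 2)) for i in range(0, len(bits), 7))

def xor_bits(a, b):
    """Bitwise XOR of two equal-length bit strings."""
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

def random_key(nbits):
    """A fresh, truly random key of the required length."""
    return ''.join(str(secrets.randbits(1)) for _ in range(nbits))

plaintext  = to_bits('IF')                # 1001001 1000110
key        = '1010110' + '0110001'        # key from the example above
ciphertext = xor_bits(plaintext, key)     # encryption: plaintext XOR key
recovered  = xor_bits(ciphertext, key)    # decryption: ciphertext XOR key
assert from_bits(recovered) == 'IF'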

OTP Security

The security depends on the randomness of the key, but randomness is hard to define.

In a cryptographic context, we seek two fundamental properties in a binary random key sequence: unpredictability and balance (equal distribution).

OTP Security

Unpredictability: no matter how many bits of the sequence have been observed, the probability of guessing the next bit is no better than ½. In particular, the probability of any given bit being 1 or 0 is exactly ½.

Balanced (equal distribution): the numbers of 1’s and 0’s should be equal.
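As a quick empirical illustration, a key drawn with Python's secrets module (a cryptographically strong randomness source) should come out roughly balanced:

# Sketch: fraction of 1's in a cryptographically random 100,000-bit key.
import secrets

nbits = 100_000
ones = sum(secrets.randbits(1) for _ in range(nbits))
print(f"fraction of 1s: {ones / nbits:.4f}")   # expected to be close to 0.5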


Entropy

We want to be able to measure the "uncertainty" or "information" of some random variable X.

Entropy

Information theory captures the amount of information in a piece of text: "How much information or uncertainty is in a cryptosystem?"

Entropy and Source Coding Theory

There is a close relationship between entropy and representing information.

Entropy captures the notion of how many "yes-no" questions are needed to accurately identify a piece of information... that is, how many bits are needed!

One of the main focus areas in the field of information theory is source coding: how to represent ("compress") information efficiently, in as few bits as possible.

One such technique is Huffman coding, sketched below.
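The following is a minimal sketch of Huffman coding in Python (using the standard heapq module); the probability table is the two-coin distribution used later in this section, and the function name is just for illustration.

# Sketch: build a Huffman prefix code for a {symbol: probability} table.
import heapq
from itertools import count

def huffman_codes(probs):
    tiebreak = count()   # keeps heap comparisons well-defined for equal probabilities
    heap = [(p, next(tiebreak), {sym: ''}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)    # two least probable subtrees
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in codes1.items()}
        merged.update({s: '1' + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

probs = {0: 0.25, 1: 0.5, 2: 0.25}   # number of heads in two coin flips
codes = huffman_codes(probs)
avg_len = sum(probs[s] * len(codes[s]) for s in probs)
print(codes)     # e.g. {1: '0', 0: '10', 2: '11'}
print(avg_len)   # 1.5 bits per symbol, matching the entropy computed later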

Entropy and Uncertainty

We are concerned with how much uncertainty a random event has, but how do we define or measure uncertainty?

We want our measure H to have the following properties:

1. To each set of nonnegative numbers p = (p1, p2, ..., pn) with p1 + p2 + ... + pn = 1, we assign an uncertainty H(p).

2. H should be a continuous function: a slight change in p should not drastically change H(p).

3. H(1/n, ..., 1/n) ≤ H(1/(n+1), ..., 1/(n+1)) for all n > 0: uncertainty increases when there are more equally likely outcomes.

Entropy, pg. 2

We define the entropy of a random variable X by

H(X) = -Σx p(x) log2 p(x)

Example: consider a fair coin toss. There are two outcomes, with probability ½ each. The entropy is

H(X) = -(½ log2 ½ + ½ log2 ½) = 1 bit

Example: consider a non-fair coin toss X with probability p of getting heads and 1 - p of getting tails. The entropy is

H(X) = -p log2 p - (1 - p) log2 (1 - p)
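This formula is easy to evaluate directly; here is a small Python sketch that reproduces the coin examples above and the two-coin example that follows.

# Sketch: Shannon entropy (in bits) of a probability distribution.
from math import log2

def entropy(probs):
    """H = -sum_x p(x) log2 p(x); terms with p(x) = 0 contribute nothing."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))           # fair coin: 1.0 bit
print(entropy([0.9, 0.1]))           # biased coin: about 0.469 bits
print(entropy([0.25, 0.5, 0.25]))    # number of heads in two flips: 1.5 bits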

Entropy, pg. 3

Entropy may be thought of as the number of yes-no questions needed to accurately determine the outcome of a random event.

Example: flip two coins, and let X be the number of heads. The possibilities are {0, 1, 2} and the probabilities are {1/4, 1/2, 1/4}. The entropy is

H(X) = -(1/4 log2 1/4 + 1/2 log2 1/2 + 1/4 log2 1/4) = 3/2 bits

So how can we relate this to questions? First ask "Was there exactly one head?"; half the time the answer is yes and one question suffices. Otherwise a second question (say, "Were there two heads?") settles it. So half the time you need one question and half the time two, for an average of 1/2 · 1 + 1/2 · 2 = 3/2 questions, matching the entropy.