
Security and Privacy in Machine Learning
Nicolas Papernot
Pennsylvania State University & Google Brain

Lecture for Prof. Trent Jaeger’s CSE 543 Computer Security Class

November 2017 - Penn State

Patrick McDaniel (Penn State)

Ian Goodfellow (Google Brain)

Martín Abadi (Google Brain), Pieter Abbeel (Berkeley), Michael Backes (CISPA), Dan Boneh (Stanford), Z. Berkay Celik (Penn State), Yan Duan (OpenAI), Úlfar Erlingsson (Google Brain), Matt Fredrikson (CMU), Kathrin Grosse (CISPA), Sandy Huang (Berkeley), Somesh Jha (U of Wisconsin)

Thank you to my collaborators

2

Alexey Kurakin (Google Brain), Praveen Manoharan (CISPA), Ilya Mironov (Google Brain), Ananth Raghunathan (Google Brain), Arunesh Sinha (U of Michigan), Shuang Song (UCSD), Ananthram Swami (US ARL), Kunal Talwar (Google Brain), Florian Tramèr (Stanford), Michael Wellman (U of Michigan), Xi Wu (Google)

3

Machine Learning Classifier

[0.01, 0.84, 0.02, 0.01, 0.01, 0.01, 0.05, 0.01, 0.03, 0.01]

x → f(x,θ) → [p(0|x,θ), p(1|x,θ), p(2|x,θ), …, p(7|x,θ), p(8|x,θ), p(9|x,θ)]

Classifier: map inputs to one class among a predefined set

4
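To make the notation concrete, here is a minimal Python/NumPy sketch of what f(x,θ) returns. The single linear layer followed by a softmax is an illustrative architecture only, not the model on the slide.

    import numpy as np

    def f(x, theta):
        """Toy 10-class classifier: a single linear layer followed by a softmax.

        x:     input vector of shape (d,)
        theta: pair (W, b) with W of shape (10, d) and b of shape (10,)
        Returns [p(0|x,theta), ..., p(9|x,theta)], a vector that sums to 1.
        """
        W, b = theta
        logits = W @ x + b
        exp = np.exp(logits - logits.max())   # subtract the max for numerical stability
        return exp / exp.sum()

    # The predicted class is the index of the largest probability: np.argmax(f(x, theta)).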

Machine Learning Classifier

[0 1 0 0 0 0 0 0 0 0][0 1 0 0 0 0 0 0 0 0]

[1 0 0 0 0 0 0 0 0 0][0 0 0 0 0 0 0 1 0 0]

[0 0 0 0 0 0 0 0 0 1][0 0 0 1 0 0 0 0 0 0]

[0 0 0 0 0 0 0 0 1 0][0 0 0 0 0 0 1 0 0 0]

[0 1 0 0 0 0 0 0 0 0][0 0 0 0 1 0 0 0 0 0]

Learning: find internal classifier parameters θ that minimize a cost/loss function (~model error)
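A minimal sketch of what "minimize a cost/loss function" looks like in practice, reusing the toy linear-softmax classifier f from the sketch above; the closed-form gradient used here is specific to that toy model, and the learning rate is illustrative.

    import numpy as np

    def cross_entropy(probs, one_hot_label):
        # ~model error: negative log-probability assigned to the correct class
        return -np.log(probs[np.argmax(one_hot_label)])

    def sgd_step(x, one_hot_label, theta, lr=0.1):
        """One stochastic gradient descent step on the cross-entropy loss.

        For the linear-softmax toy model, the gradient of the loss with respect
        to the logits is (probs - one_hot_label), which we propagate to W and b.
        """
        W, b = theta
        probs = f(x, theta)                   # toy classifier from the earlier sketch
        dlogits = probs - one_hot_label       # gradient of the loss w.r.t. the logits
        return (W - lr * np.outer(dlogits, x), b - lr * dlogits)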

Outline of this lecture

5

1. Security in ML

2. Privacy in ML

Part I

Security in machine learning

6

Attacker may see the model: bad even if an attacker needs to know the details of the machine learning model to mount an attack --- aka a white-box attacker.

Attacker may not need the model: worse if an attacker who knows very little (e.g., only gets to ask a few questions) can mount an attack --- aka a black-box attacker.

Attack Models

7


Papernot et al. Towards the Science of Security and Privacy in Machine Learning


Adversarial examples (white-box attacks)

9

Jacobian-based Saliency Map Approach (JSMA)

10

Papernot et al. The Limitations of Deep Learning in Adversarial Settings

11

Jacobian-Based Iterative Approach: source-target misclassification

Papernot et al. The Limitations of Deep Learning in Adversarial Settings
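A simplified sketch of the saliency-map computation behind JSMA, assuming a helper that already returns the Jacobian of the model outputs with respect to the input (in practice obtained from the framework's automatic differentiation). The published attack searches over pairs of features, whereas this sketch picks a single feature per step.

    import numpy as np

    def jsma_step(jac, target, mask):
        """One step of a JSMA-style attack (single-feature variant, simplified).

        jac:    Jacobian of the model outputs w.r.t. the input, shape (n_classes, n_features)
        target: class the adversary wants the input to be classified as
        mask:   boolean array of features the adversary is still allowed to modify
        Returns the index of the most salient feature to increase, or None.
        """
        target_grad = jac[target]                       # effect on the target class
        other_grad = jac.sum(axis=0) - target_grad      # combined effect on all other classes
        # A feature is salient if increasing it raises the target class probability
        # while lowering the probability of the other classes in aggregate.
        saliency = np.where((target_grad > 0) & (other_grad < 0) & mask,
                            target_grad * np.abs(other_grad), 0.0)
        if saliency.max() <= 0:
            return None                                 # no admissible feature left
        return int(np.argmax(saliency))

An attack loop would repeatedly call jsma_step, increase the selected feature by a fixed amount, and stop once the model outputs the target class or a distortion budget is exhausted.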

Evading a Neural Network Malware Classifier

DREBIN dataset of Android applications

Add constraints to the JSMA approach:
- only add features: keep malware behavior
- only features from the manifest: easy to modify

"Most accurate" neural network: 98% accuracy, with 9.7% FP and 1.3% FN; evaded with a 63.08% success rate.

12

Grosse et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification

P[X=Malware] = 0.90, P[X=Benign] = 0.10

P[X*=Malware] = 0.10, P[X*=Benign] = 0.90
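The constraints above can be expressed as the mask handed to a JSMA-style attack such as the sketch earlier. This is only an illustration, assuming binary features and a hypothetical manifest_features index list.

    import numpy as np

    def malware_mask(x, manifest_features, n_features):
        """Only allow adding (0 -> 1) manifest features, so the malware behavior is preserved."""
        mask = np.zeros(n_features, dtype=bool)
        mask[manifest_features] = True        # restrict changes to manifest-derived features
        mask &= (x == 0)                      # only features that are absent can be added
        return mask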

Supervised vs. reinforcement learning

13

Model inputs:  Supervised learning: observation (e.g., traffic sign, music, email). Reinforcement learning: environment & reward function.

Model outputs:  Supervised learning: class (e.g., stop/yield, jazz/classical, spam/legitimate). Reinforcement learning: action.

Training "goal" (i.e., cost/loss):  Supervised learning: minimize class prediction error over pairs of (inputs, outputs). Reinforcement learning: maximize reward by exploring the environment and taking actions.

Adversarial attacks on neural network policies

14

Huang et al. Adversarial Attacks on Neural Network Policies

Adversarial examples (black-box attacks)

15

Threat model of a black-box attack

Adversarial capabilities: (limited) oracle access to labels only; no training data, model architecture, model parameters, or model scores.

Adversarial goal: force an ML model remotely accessible through an API to misclassify.

16

Example

Our approach to black-box attacks

17

Alleviate lack of knowledge about model

Alleviate lack of training data

Adversarial example transferability

Adversarial examples have a transferability property:

samples crafted to mislead a model A are likely to also mislead a model B.

This property comes in several variants:

● Intra-technique transferability:
  ○ cross-model transferability
  ○ cross-training-set transferability
● Cross-technique transferability

18


Szegedy et al. Intriguing properties of neural networks
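A small sketch of how transferability is typically measured: craft adversarial examples against model A and check how often they also fool model B. The names model_a, model_b, predict, and craft_adversarial are hypothetical placeholders, not an API from the cited work.

    def transfer_rate(model_a, model_b, inputs, labels, craft_adversarial):
        """Fraction of adversarial examples crafted against model A that also fool model B."""
        fooled_b, total = 0, 0
        for x, y in zip(inputs, labels):
            x_adv = craft_adversarial(model_a, x, y)   # e.g., an FGSM- or JSMA-style attack on A
            if model_a.predict(x_adv) != y:            # keep only examples that actually fool A
                total += 1
                if model_b.predict(x_adv) != y:
                    fooled_b += 1
        return fooled_b / max(total, 1)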


Adversarial example transferability

Cross-technique transferability

21

Papernot et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples


Our approach to black-box attacks

23

Adversarial example transferability from a substitute model to the target model

Alleviate lack of knowledge about model

Alleviate lack of training data

Attacking remotely hosted black-box models

24

Remote ML sys

“no truck sign”, “STOP sign”

“STOP sign”

(1) The adversary queries the remote ML system for labels on inputs of its choice.

25

Remote ML sys

Local substitute

“no truck sign”, “STOP sign”

“STOP sign”

(2) The adversary uses this labeled data to train a local substitute for the remote system.

Attacking remotely hosted black-box models

26

Remote ML sys

Local substitute

“no truck sign”, “STOP sign”

(3) The adversary selects new synthetic inputs for queries to the remote ML system based on the local substitute’s output surface sensitivity to input variations.

Attacking remotely hosted black-box models

27

Remote ML sys

Local substitute

“yield sign”

(4) The adversary then uses the local substitute to craft adversarial examples, which are misclassified by the remote ML system because of transferability.

Attacking remotely hosted black-box models
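The four steps above can be summarized in a short sketch of substitute training with synthetic data generation. The helpers query_remote_labels, train, and jacobian are hypothetical placeholders, and the parameter values are illustrative only.

    import numpy as np

    def train_substitute(seed_inputs, query_remote_labels, train, jacobian,
                         n_rounds=6, lam=0.1):
        """Substitute training with Jacobian-based synthetic data generation (sketch).

        seed_inputs:         small initial set of inputs representative of the input domain
        query_remote_labels: queries the remote ML system and returns its labels (the oracle)
        train:               fits a local model on (X, y) and returns it
        jacobian:            gradient of the substitute's output for a given label w.r.t. the input
        """
        X = np.array(seed_inputs)
        for _ in range(n_rounds):
            y = query_remote_labels(X)             # step (1): query the oracle for labels
            substitute = train(X, y)               # step (2): train the local substitute
            # Step (3): new synthetic inputs in the directions the substitute is most
            # sensitive to, so the next round of queries refines its decision boundary.
            X_new = np.array([x + lam * np.sign(jacobian(substitute, x, label))
                              for x, label in zip(X, y)])
            X = np.concatenate([X, X_new])
        # Step (4): adversarial examples crafted against `substitute` then transfer
        # to the remote system with some probability.
        return substitute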

Our approach to black-box attacks

28

Adversarial example transferability from a substitute model to the target model

Synthetic data generation

+

Alleviate lack of knowledge about model

Alleviate lack of training data

Results on real-world remote systems

29

All remote classifiers are trained on the MNIST dataset (10 classes, 60,000 training samples)

ML technique        | Number of queries | Adversarial examples misclassified (after querying)
Deep Learning       | 6,400             | 84.24%
Logistic Regression | 800               | 96.19%
Unknown             | 2,000             | 97.72%

[PMG16a] Papernot et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Benchmarking progress in the adversarial ML community

30

31

32

Growing community

1.3K+ stars, 340+ forks

40+ contributors

33

Adversarial examples represent worst-case distribution drifts

[DDS04] Dalvi et al. Adversarial Classification (KDD)

34

Adversarial examples are a tangible instance of hypothetical AI safety problems

Image source: http://www.nerdist.com/wp-content/uploads/2013/07/Space-Odyssey-4.jpg

Part II

Privacy in machine learning

35

Types of adversaries and our threat model

36

In our work, the threat model assumes:

- Adversary can make a potentially unbounded number of queries
- Adversary has access to model internals

Model inspection (white-box adversary): Zhang et al. (2017) Understanding DL requires rethinking generalization

Model querying (black-box adversary): Shokri et al. (2016) Membership Inference Attacks against ML Models; Fredrikson et al. (2015) Model Inversion Attacks


A definition of privacy

37

[Figure: the same randomized algorithm is run on two datasets, producing two sequences of answers (Answer 1, Answer 2, ..., Answer n); the question is whether the two sequences can be told apart.]

Our design goals

38

Problem: preserve privacy of training data when learning classifiers.

Goals:
- Differential privacy protection guarantees
- Intuitive privacy protection guarantees
- Generic* (independent of learning algorithm)

*This is a key distinction from previous work, such as: Pathak et al. (2011) Privacy preserving probabilistic inference with hidden markov models; Jagannathan et al. (2013) A semi-supervised learning approach to differential privacy; Shokri et al. (2015) Privacy-preserving Deep Learning; Abadi et al. (2016) Deep Learning with Differential Privacy; Hamm et al. (2016) Learning privately from multiparty data.

The PATE approach

39

Teacher ensemble

40

[Figure: the sensitive data is divided into n partitions (Partition 1, ..., Partition n); each partition is used to train a separate teacher model (Teacher 1, ..., Teacher n).]

Aggregation

41

Count votes → take maximum

Intuitive privacy analysis

42

If most teachers agree on the label, it does not depend on specific partitions, so the privacy cost is small.

If two classes have close vote counts, the disagreement may reveal private information.

Noisy aggregation

43

Count votes → add Laplacian noise → take maximum
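A minimal sketch of this noisy aggregation mechanism for a single query, assuming the teachers output integer class labels; the noise scale shown is illustrative, not a recommended privacy parameter.

    import numpy as np

    def noisy_aggregation(teacher_predictions, n_classes, gamma=0.05, rng=None):
        """Noisy aggregation of the teachers' votes for a single query.

        teacher_predictions: array of class labels, one per teacher
        gamma:               inverse scale of the Laplacian noise (larger means less noise
                             and weaker privacy); the default here is purely illustrative
        """
        rng = rng or np.random.default_rng()
        votes = np.bincount(teacher_predictions, minlength=n_classes)         # count votes
        noisy_votes = votes + rng.laplace(scale=1.0 / gamma, size=n_classes)  # add Laplacian noise
        return int(np.argmax(noisy_votes))                                    # take maximum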

Teacher ensemble

44

[Figure: as before, each teacher is trained on its own partition of the sensitive data; the teachers' predictions are combined by the noisy aggregation mechanism into an aggregated teacher.]

Student training

45

[Figure: the aggregated teacher answers queries on public data; the student is trained on this labeled public data. The sensitive data, the teachers, and the aggregated teacher are not available to the adversary; only the student is.]
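A hedged sketch of the student-training step, reusing noisy_aggregation from the earlier sketch. The names teachers, predict, train, and the query budget are hypothetical placeholders; the experiments later in the deck use GAN-based students rather than this plain supervised loop.

    def train_student(public_inputs, teachers, train, n_classes, n_queries=100):
        """Train the student on public data labeled by the noisy aggregation mechanism.

        The student's parameters depend on the sensitive data only through the
        noisy answers to its queries, which is what the privacy analysis bounds.
        """
        queries = public_inputs[:n_queries]        # the privacy cost grows with each answered query
        labels = [noisy_aggregation([t.predict(x) for t in teachers], n_classes)
                  for x in queries]
        return train(queries, labels)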

Why train an additional “student” model?

46

The aggregated teacher violates our threat model:

1. Each prediction increases total privacy loss. Privacy budgets create a tension between the accuracy and number of predictions.

2. Inspection of internals may reveal private data. Privacy guarantees should hold in the face of white-box adversaries.


Deployment

48

[Figure: at deployment, the student alone answers inference queries; only the student is available to the adversary.]

Differential privacy: a randomized algorithm M satisfies (ε, δ) differential privacy if for all pairs of neighbouring datasets (d, d') and for all subsets S of outputs: Pr[M(d) ∈ S] ≤ exp(ε) · Pr[M(d') ∈ S] + δ

Application of the Moments Accountant technique (Abadi et al., 2016)

Strong quorum ⟹ Small privacy cost

Bound is data-dependent: computed using the empirical quorum

Differential privacy analysis

49

Experimental results

50

Experimental setup

51

Dataset      | Teacher model                | Student model
MNIST        | Convolutional neural network | Generative adversarial networks
SVHN         | Convolutional neural network | Generative adversarial networks
UCI Adult    | Random forest                | Random forest
UCI Diabetes | Random forest                | Random forest

/ /models/tree/master/differential_privacy/multiple_teachers

Aggregated teacher accuracy

52

Trade-off between student accuracy and privacy

53

Trade-off between student accuracy and privacy

54

UCI Diabetes: (ε, δ) = (1.44, 10^-5); student accuracy 93.94% vs. a non-private baseline of 93.81%

Synergy between privacy and generalization

55

www.papernot.fr
@NicolasPapernot

56

Some online resources:

Blog on S&P in ML (joint work w/ Ian Goodfellow): www.cleverhans.io
ML course: https://coursera.org/learn/machine-learning
DL course: https://coursera.org/learn/neural-networks

Assigned reading and more in-depth technical survey paper:

Machine Learning in Adversarial Settings
Patrick McDaniel, Nicolas Papernot, Z. Berkay Celik

Towards the Science of Security and Privacy in Machine Learning
Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman

57

Gradient masking

58

Tramèr et al. Ensemble Adversarial Training: Attacks and Defenses

Gradient masking

59

Tramèr et al. Ensemble Adversarial Training: Attacks and Defenses
