
Andrej Karpathy, Bay Area Deep Learning School, 2016

So far...

Some input vector (very few assumptions made).

In many real-world applications input vectors have structure.

Spectrograms

Images, Text

Convolutional Neural Networks: A pinch of history

Hubel & Wiesel, 1959: RECEPTIVE FIELDS OF SINGLE NEURONES IN THE CAT'S STRIATE CORTEX

1962: RECEPTIVE FIELDS, BINOCULAR INTERACTION AND FUNCTIONAL ARCHITECTURE IN THE CAT'S VISUAL CORTEX

1968...

A bit of history:

Neocognitron [Fukushima 1980]

“sandwich” architecture (SCSCSC…)
simple cells: modifiable parameters
complex cells: perform pooling

Gradient-based learning applied to document recognition [LeCun, Bottou, Bengio, Haffner 1998]

LeNet-5

car 99%

Computer Vision, 2011 (Page 1, Page 2, Page 3...) + code complexity :(

ImageNet Classification with Deep Convolutional Neural Networks [Krizhevsky, Sutskever, Hinton, 2012]

“AlexNet” (Deng et al., Russakovsky et al.)

NVIDIA et al.

(slide from Kaiming He’s recent presentation)

“What I learned from competing against a ConvNet on ImageNet” (karpathy.github.io)

TLDR: Human top-5 error is somewhere in the 2-5% range. (depending on how much training or how little life you have)

[224x224x3] → f → 1000 numbers, indicating class scores

Feature Extraction

vector describing various image statistics

[224x224x3] → f → 1000 numbers, indicating class scores

training

“Run the image through 20 layers of 3x3 convolutions and train the filters with SGD.”*

* to the first order

Transfer Learning

1. Train on ImageNet

2. Small dataset: use the ConvNet as a feature extractor (freeze the early layers, train only the final classifier)

3. Medium dataset: finetuning (more data = retrain more of the network, or all of it)

Transfer Learning

CNN Features off-the-shelf: an Astounding Baseline for Recognition [Razavian et al., 2014]

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition [Donahue*, Jia*, et al., 2013]

e.g. with keras.io

The power is easily accessible.
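For instance, a minimal sketch (assuming Keras with its bundled ImageNet-pretrained VGG16 weights; the image filename is hypothetical) of using a pretrained ConvNet as an off-the-shelf feature extractor:

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# Pretrained "block of compute"; include_top=False drops the FC classifier.
model = VGG16(weights='imagenet', include_top=False)

img = image.load_img('example.jpg', target_size=(224, 224))  # hypothetical file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

features = model.predict(x)   # a [1, 7, 7, 512] activation volume
print(features.shape)
```

These features can then be fed to any simple classifier (a linear model or a small FC layer) for a new task.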

ConvNets are everywhere…

e.g. Google Photos search

Face Verification, Taigman et al. 2014 (FAIR)

Self-driving cars [Goodfellow et al. 2014]

Ciresan et al. 2013

Turaga et al 2010

ConvNets are everywhere…

Whale recognition (Kaggle Challenge); Satellite image analysis, Mnih and Hinton, 2010

Galaxy Challenge, Dieleman et al. 2015

WaveNet, van den Oord et al. 2016; Image captioning, Vinyals et al. 2015

ATARI game playing, Mnih 2013

ConvNets are everywhere…

AlphaGo, Silver et al 2016

VizDoom StarCraft

….

ConvNets are everywhere…

DeepDream reddit.com/r/deepdream

NeuralStyle, Gatys et al. 2015; deepart.io, Prisma, etc.

Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition [Cadieu et al., 2014]

ConvNets ←→ Visual Cortex

Convolutional Neural Networks


[224x224x3] → f → 1000 numbers, indicating class scores

training

Only two basic operations are involved throughout:
1. Dot products w^T x
2. Max operations max(.)

parameters (~10M of them)

preview:

e.g. 200K numbers → e.g. 10 numbers

Convolution Layer

32x32x3 image: width 32, height 32, depth 3

Convolution Layer

32x32x3 image, 5x5x3 filter

Convolve the filter with the image, i.e. “slide over the image spatially, computing dot products”

Convolution Layer

32x32x3 image, 5x5x3 filter

Convolve the filter with the image, i.e. “slide over the image spatially, computing dot products”

Filters always extend the full depth of the input volume

Convolution Layer: 32x32x3 image, 5x5x3 filter

1 number: the result of taking a dot product between the filter and a small 5x5x3 chunk of the image (i.e. a 5*5*3 = 75-dimensional dot product + bias)
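As a quick sketch of that single number (NumPy, with made-up values):

```python
import numpy as np

img = np.random.randn(32, 32, 3)   # toy 32x32x3 input volume
w   = np.random.randn(5, 5, 3)     # one 5x5x3 filter
b   = 0.1                          # its bias

patch  = img[0:5, 0:5, :]          # a 5x5x3 chunk of the image
result = np.sum(patch * w) + b     # the 75-dimensional dot product plus bias
```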

Convolution Layer: 32x32x3 image, 5x5x3 filter

convolve (slide) over all spatial locations → activation map (28x28x1)

Convolution Layer: 32x32x3 image, 5x5x3 filter

convolve (slide) over all spatial locations

consider a second, green filter → a second 28x28x1 activation map

Convolution Layer

For example, if we had 6 5x5 filters, we’ll get 6 separate activation maps:

We stack these up to get a “new image” of size 28x28x6!
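A minimal sketch of that layer as a naive NumPy loop (stride 1, no padding, random values), just to make the shapes concrete:

```python
import numpy as np

img     = np.random.randn(32, 32, 3)      # input volume
filters = np.random.randn(6, 5, 5, 3)     # 6 filters of size 5x5x3
biases  = np.zeros(6)

out = np.zeros((28, 28, 6))               # (32 - 5) + 1 = 28
for k in range(6):                        # one activation map per filter
    for i in range(28):
        for j in range(28):
            patch = img[i:i+5, j:j+5, :]
            out[i, j, k] = np.sum(patch * filters[k]) + biases[k]

print(out.shape)   # (28, 28, 6) -- the stacked "new image"
```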


We processed the [32x32x3] volume into a [28x28x6] volume.
Q: how many parameters would this be if we used a fully connected layer instead?

A: (32*32*3)*(28*28*6) = 14.5M parameters, ~14.5M multiplies

Q: how many parameters are used instead? And how many multiplies?

A: (5*5*3)*6 = 450 parameters, and (5*5*3)*(28*28*6) ≈ 350K multiplies

example 5x5 filters (32 total)

We call the layer convolutional because it is related to convolution of two signals:

elementwise multiplication and sum of a filter and the signal (image)

one filter => one activation map

Preview: ConvNet is a sequence of Convolution Layers, interspersed with activation functions

32x32x3 → CONV, ReLU (e.g. 6 5x5x3 filters) → 28x28x6 → CONV, ReLU (e.g. 10 5x5x6 filters) → 24x24x10 → CONV, ReLU → …

two more layers to go: POOL/FC

Pooling layer
- makes the representations smaller and more manageable
- operates over each activation map independently

MAX POOLING, on a single depth slice (x, y), with 2x2 filters and stride 2:

1 1 2 4
5 6 7 8
3 2 1 0
1 2 3 4

→

6 8
3 4
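A small sketch of that operation (NumPy, using the same numbers as above):

```python
import numpy as np

x = np.array([[1, 1, 2, 4],
              [5, 6, 7, 8],
              [3, 2, 1, 0],
              [1, 2, 3, 4]])

out = np.zeros((2, 2))
for i in range(2):                 # 2x2 filters, stride 2
    for j in range(2):
        out[i, j] = x[2*i:2*i+2, 2*j:2*j+2].max()

print(out)   # [[6. 8.]
             #  [3. 4.]]
```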

Fully Connected Layer (FC layer)
- Contains neurons that connect to the entire input volume, as in ordinary Neural Networks

http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html

[ConvNetJS demo: training on CIFAR-10]

Visualizing Activations: http://yosinski.com/deepvis

YouTube video: https://www.youtube.com/watch?v=AgkfIQ4IGaM (4 min)

Convolutional Neural Networks: Case Study

Case Study: AlexNet [Krizhevsky et al. 2012]

Input: 227x227x3 images

First layer (CONV1): 96 11x11 filters applied at stride 4
=> Q: what is the output volume size? Hint: (227-11)/4+1 = 55

Case Study: AlexNet [Krizhevsky et al. 2012]

Input: 227x227x3 images

First layer (CONV1): 96 11x11 filters applied at stride 4
=> Output volume [55x55x96]

Q: What is the total number of parameters in this layer?

Case Study: AlexNet [Krizhevsky et al. 2012]

Input: 227x227x3 images

First layer (CONV1): 96 11x11 filters applied at stride 4
=> Output volume [55x55x96]
Parameters: (11*11*3)*96 = 35K
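The same bookkeeping as a tiny sketch (hypothetical helper functions, nothing AlexNet-specific):

```python
def conv_output_size(W, F, S, P=0):
    """Spatial output size: input width W, filter size F, stride S, padding P."""
    return (W - F + 2 * P) // S + 1

def conv_params(F, depth_in, num_filters):
    """Weights in a CONV layer (biases not counted)."""
    return F * F * depth_in * num_filters

print(conv_output_size(227, 11, 4))   # 55  -> CONV1 output is 55x55x96
print(conv_params(11, 3, 96))         # 34848, ~35K parameters
print(conv_output_size(55, 3, 2))     # 27  -> POOL1 output is 27x27x96, 0 parameters
```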

Case Study: AlexNet [Krizhevsky et al. 2012]

Input: 227x227x3 images
After CONV1: 55x55x96

Second layer (POOL1): 3x3 filters applied at stride 2

Q: what is the output volume size? Hint: (55-3)/2+1 = 27

Case Study: AlexNet [Krizhevsky et al. 2012]

Input: 227x227x3 images
After CONV1: 55x55x96

Second layer (POOL1): 3x3 filters applied at stride 2
Output volume: 27x27x96

Q: what is the number of parameters in this layer?

Case Study: AlexNet [Krizhevsky et al. 2012]

Input: 227x227x3 images
After CONV1: 55x55x96

Second layer (POOL1): 3x3 filters applied at stride 2
Output volume: 27x27x96
Parameters: 0!

Case Study: AlexNet [Krizhevsky et al. 2012]

Input: 227x227x3 images
After CONV1: 55x55x96
After POOL1: 27x27x96
...

Case Study: AlexNet [Krizhevsky et al. 2012]

Full (simplified) AlexNet architecture:
[227x227x3] INPUT
[55x55x96] CONV1: 96 11x11 filters at stride 4, pad 0
[27x27x96] MAX POOL1: 3x3 filters at stride 2
[27x27x96] NORM1: Normalization layer
[27x27x256] CONV2: 256 5x5 filters at stride 1, pad 2
[13x13x256] MAX POOL2: 3x3 filters at stride 2
[13x13x256] NORM2: Normalization layer
[13x13x384] CONV3: 384 3x3 filters at stride 1, pad 1
[13x13x384] CONV4: 384 3x3 filters at stride 1, pad 1
[13x13x256] CONV5: 256 3x3 filters at stride 1, pad 1
[6x6x256] MAX POOL3: 3x3 filters at stride 2
[4096] FC6: 4096 neurons
[4096] FC7: 4096 neurons
[1000] FC8: 1000 neurons (class scores)

Case Study: AlexNet [Krizhevsky et al. 2012]

Compared to LeCun 1998:

1. DATA: more data (10^6 vs. 10^3 images)
2. COMPUTE: GPU (~20x speedup)
3. ALGORITHM: deeper (8 weight layers), fancy regularization (dropout), fancy non-linearity (ReLU)
4. INFRASTRUCTURE: CUDA

Case Study: AlexNet [Krizhevsky et al. 2012]

Details/Retrospectives:
- first use of ReLU
- used Norm layers (not common anymore)
- heavy data augmentation
- dropout 0.5
- batch size 128
- SGD Momentum 0.9
- Learning rate 1e-2, reduced by 10 manually when val accuracy plateaus
- L2 weight decay 5e-4
- 7 CNN ensemble: 18.2% -> 15.4%

Case Study: ZFNet [Zeiler and Fergus, 2013]

AlexNet but:
CONV1: change from (11x11 stride 4) to (7x7 stride 2)
CONV3,4,5: instead of 384, 384, 256 filters use 512, 1024, 512

ImageNet top 5 error: 15.4% -> 14.8%

Case Study: VGGNet [Simonyan and Zisserman, 2014]

best model

Only 3x3 CONV stride 1, pad 1 and 2x2 MAX POOL stride 2

11.2% top 5 error in ILSVRC 2013 -> 7.3% top 5 error

INPUT: [224x224x3] memory: 224*224*3=150K params: 0
CONV3-64: [224x224x64] memory: 224*224*64=3.2M params: (3*3*3)*64 = 1,728
CONV3-64: [224x224x64] memory: 224*224*64=3.2M params: (3*3*64)*64 = 36,864
POOL2: [112x112x64] memory: 112*112*64=800K params: 0
CONV3-128: [112x112x128] memory: 112*112*128=1.6M params: (3*3*64)*128 = 73,728
CONV3-128: [112x112x128] memory: 112*112*128=1.6M params: (3*3*128)*128 = 147,456
POOL2: [56x56x128] memory: 56*56*128=400K params: 0
CONV3-256: [56x56x256] memory: 56*56*256=800K params: (3*3*128)*256 = 294,912
CONV3-256: [56x56x256] memory: 56*56*256=800K params: (3*3*256)*256 = 589,824
CONV3-256: [56x56x256] memory: 56*56*256=800K params: (3*3*256)*256 = 589,824
POOL2: [28x28x256] memory: 28*28*256=200K params: 0
CONV3-512: [28x28x512] memory: 28*28*512=400K params: (3*3*256)*512 = 1,179,648
CONV3-512: [28x28x512] memory: 28*28*512=400K params: (3*3*512)*512 = 2,359,296
CONV3-512: [28x28x512] memory: 28*28*512=400K params: (3*3*512)*512 = 2,359,296
POOL2: [14x14x512] memory: 14*14*512=100K params: 0
CONV3-512: [14x14x512] memory: 14*14*512=100K params: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K params: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K params: (3*3*512)*512 = 2,359,296
POOL2: [7x7x512] memory: 7*7*512=25K params: 0
FC: [1x1x4096] memory: 4096 params: 7*7*512*4096 = 102,760,448
FC: [1x1x4096] memory: 4096 params: 4096*4096 = 16,777,216
FC: [1x1x1000] memory: 1000 params: 4096*1000 = 4,096,000

(not counting biases)


TOTAL memory: 24M * 4 bytes ~= 93MB / image (only forward! ~*2 for bwd)
TOTAL params: 138M parameters


Note:

Most memory is in early CONV

Most params are in late FC
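The ~138M figure can be re-derived from the layer list above with a short tally (a sketch; biases ignored, as on the slide):

```python
# (input depth, output depth) for each 3x3 CONV layer of VGG-16, in order
conv_plan = [(3, 64), (64, 64),
             (64, 128), (128, 128),
             (128, 256), (256, 256), (256, 256),
             (256, 512), (512, 512), (512, 512),
             (512, 512), (512, 512), (512, 512)]

params = sum(3 * 3 * d_in * d_out for d_in, d_out in conv_plan)
params += 7 * 7 * 512 * 4096   # FC6
params += 4096 * 4096          # FC7
params += 4096 * 1000          # FC8
print(params)                  # 138,344,128 ~ 138M, most of it in the late FC layers
```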

Case Study: GoogLeNet [Szegedy et al., 2014]

Inception module

ILSVRC 2014 winner (6.7% top 5 error)

Case Study: GoogLeNet

Fun features:

- Only 5 million params! (Removes FC layers completely)

Compared to AlexNet:
- 12x fewer params
- 2x more compute
- 6.67% top 5 error (vs. 16.4%)

Slide from Kaiming He’s recent presentation https://www.youtube.com/watch?v=1PGLj-uKT1w

Case Study: ResNet [He et al., 2015]

ILSVRC 2015 winner (3.6% top 5 error)

(slide from Kaiming He’s recent presentation)

Case Study: ResNet [He et al., 2015]

224x224x3

spatial dimension only 56x56!

Identity Mappings in Deep Residual Networks, He et al. 2016

Deep Networks with Stochastic Depth, Huang et al., 2016

“We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function.”

Think of layers more like vector fields, nudging the input x toward the label y
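A minimal sketch of the stochastic-depth idea (a plain NumPy stand-in, not the authors' code; the survival probability is an assumed hyperparameter):

```python
import numpy as np

def residual_block(x, block_fn, survival_prob=0.8, training=True):
    """Apply a residual block, randomly bypassing it with the identity while training."""
    if training and np.random.rand() > survival_prob:
        return x                         # drop this block for the current mini-batch
    out = block_fn(x)                    # the block's residual path F(x)
    if not training:
        out = out * survival_prob        # rescale at test time, when nothing is dropped
    return x + out                       # standard residual connection
```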

Wide Residual Networks, Zagoruyko and Komodakis, 2016

- wide networks with only 16 layers can significantly outperform 1000-layer deep networks
- the main power of residual networks is in residual blocks, not in extreme depth
- wide residual networks are several times faster to train

Swapout: Learning an ensemble of deep architectures, Singh et al., 2016

- a 32-layer wider model performs similarly to a 1001-layer ResNet model

FractalNet: Ultra-Deep Neural Networks without Residuals, Larsson et al. 2016

Still an active area of research...
Densely Connected Convolutional Networks, Huang et al.
ResNet in ResNet, Targ et al.
Deeply-Fused Nets, Wang et al.
Weighted Residuals for Very Deep Networks, Shen et al.
Residual Networks of Residual Networks: Multilevel Residual Networks, Zhang et al.
...

In large part likely due to open source code being available, e.g.:

ASIDE: arxiv-sanity.com plug

Addressing other tasks...

image (224x224x3) → CNN (a block of compute with a few million parameters) → features (7x7x512) → predicted thing, compared against the desired thing

this part (features → prediction) changes from task to task

Image Classification: thing = a vector of probabilities for different classes

image (224x224x3) → CNN → features (7x7x512) → fully connected layer → e.g. a vector of 1000 numbers giving probabilities for different classes
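A minimal sketch of that task-specific head (Keras; the 7x7x512 feature shape is taken from the diagram, everything else is an assumption):

```python
from keras.models import Sequential
from keras.layers import Flatten, Dense

head = Sequential([
    Flatten(input_shape=(7, 7, 512)),    # CNN features from the shared block of compute
    Dense(1000, activation='softmax'),   # probabilities over 1000 classes
])
```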

Image Captioning

image (224x224x3) → CNN → features (7x7x512) → RNN → a sequence of 10,000-dimensional vectors giving probabilities of different words in the caption

Localization

image (224x224x3) → CNN → features (7x7x512) → fully connected layer →
- Class probabilities (as before)
- 4 numbers: X coord, Y coord, Width, Height
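A sketch of such a two-headed model (Keras functional API; sizes and losses are assumptions):

```python
from keras.models import Model
from keras.layers import Input, Flatten, Dense

features = Input(shape=(7, 7, 512))            # CNN features
flat = Flatten()(features)
classes = Dense(1000, activation='softmax', name='classes')(flat)
box = Dense(4, name='box')(flat)               # x, y, width, height

model = Model(inputs=features, outputs=[classes, box])
model.compile(optimizer='sgd',
              loss={'classes': 'categorical_crossentropy', 'box': 'mse'})
```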

Reinforcement Learning

image (160x210x3) → CNN → features → fully connected → e.g. a vector of 8 numbers giving the probability of wanting to take each of the 8 possible ATARI actions

Mnih et al. 2015

Segmentation

image (224x224x3) → CNN → features (7x7x512) → deconv layers → 224x224x20: an array of class probabilities at each pixel (an image-sized class “map”)

Autoencoders

image (224x224x3) → CNN → features (7x7x512) → deconv layers → 224x224x3: the original image

Variational Autoencoders

image (224x224x3) → CNN → features (7x7x512) → reparameterization layer → deconv layers → 224x224x3: the original image

[Kingma et al.], [Rezende et al.], [Salimans et al.]

Detection

image (224x224x3) → CNN → features (7x7x512) → 1x1 CONV → 7x7x(5*B+C)

E.g. YOLO: You Only Look Once (Demo: http://pjreddie.com/darknet/yolo/)

For each of the 7x7 locations:
- [x, y, width, height, confidence] * B
- class
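A sketch of unpacking that output grid (NumPy; B, C, and the channel ordering are assumptions):

```python
import numpy as np

B, C = 2, 20
out = np.random.rand(7, 7, 5 * B + C)       # stand-in for the 1x1 CONV output

for i in range(7):
    for j in range(7):
        cell = out[i, j]
        boxes = cell[:5 * B].reshape(B, 5)  # each row: x, y, width, height, confidence
        class_scores = cell[5 * B:]         # C class scores for this grid cell
```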

Dense Image Captioning

image (224x224x3) → CNN → features (7x7x512) → 1x1 CONV → 7x7x(5*B+[C,..])

For each of the 7x7 locations:
- x, y, width, height, confidence
- a sequence of words

DenseCap: Fully Convolutional Localization Networks for Dense Captioning, Johnson et al. 2016

Practical considerations when applying ConvNets

What hardware do I use?

Buy your own machine:
- NVIDIA DIGITS DevBox (TITAN X GPUs)
- NVIDIA DGX-1 (P100 GPUs)

Build your own machine:https://graphific.github.io/posts/building-a-deep-learning-dream-machine/

GPUs in the cloud:
- Amazon AWS (GRID K520 :( )
- Microsoft Azure (soon); 4x K80 GPUs
- Cirrascale (“rent-a-box”)

What framework do I use?

Caffe, Torch, Theano, Lasagne, Keras, TensorFlow, MXNet, Chainer, Nervana’s Neon, Microsoft’s CNTK, Deeplearning4j, ...

Q: How do I know what architecture to use?

A: don’t be a hero.
1. Take whatever works best on ILSVRC (latest ResNet)
2. Download a pretrained model
3. Potentially add/delete some parts of it
4. Finetune it on your application
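A minimal sketch of steps 2-4 (Keras; the dataset, the 10-class head, and how many layers to freeze are all assumptions):

```python
from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dense

base = ResNet50(weights='imagenet', include_top=False)    # 2. pretrained model
x = GlobalAveragePooling2D()(base.output)
preds = Dense(10, activation='softmax')(x)                # 3. new head for your classes
model = Model(inputs=base.input, outputs=preds)

for layer in base.layers[:-20]:                           # freeze most, finetune the rest
    layer.trainable = False

model.compile(optimizer='sgd', loss='categorical_crossentropy')
# model.fit(X_train, y_train, ...)                        # 4. finetune on your data
```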

Q: How do I know what hyperparameters to use?

A: don’t be a hero.

- Use whatever is reported to work best on ILSVRC
- Play with the regularization strength (dropout rates)

ConvNets in practice: Distributed training

VGG: ~2-3 weeks training with 4 GPUs
ResNet 101: 2-3 weeks with 4 GPUs

~$1K each

ConvNets in practice: Distributed training

Model parallelism / Data parallelism

[Large Scale Distributed Deep Networks, Jeff Dean et al., 2012]

ConvNets in practice: pre-fetching threads

CPU-disk bottleneck
Hard disk is slow to read from => pre-processed images stored contiguously in files, read as a raw byte stream from SSD

CPU-GPU bottleneck
A CPU data prefetch + augment thread runs while the GPU performs the forward/backward pass
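A bare-bones sketch of such a prefetch thread (Python threading; the loader is a hypothetical stand-in):

```python
import threading
from queue import Queue
import numpy as np

def load_and_augment_batch():
    # hypothetical stand-in: real code reads preprocessed images from disk
    # and augments them here (the slow, CPU-bound part)
    return np.random.randn(128, 224, 224, 3)

batch_queue = Queue(maxsize=4)

def prefetch_worker():
    while True:
        batch_queue.put(load_and_augment_batch())   # runs on a CPU thread

threading.Thread(target=prefetch_worker, daemon=True).start()

# GPU side of the training loop:
# while training:
#     batch = batch_queue.get()     # already prepared while the previous step ran
#     run_forward_backward(batch)   # hypothetical GPU step
```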

Moving parts lol

Learn more! CS231n
- lecture videos on YouTube
- slides
- notes
- assignments

cs231n.stanford.edu

Thank you!
