machine learning and art hack day


SOME EXPERIMENTS I HAVE DONE WITH ART + DEEP LEARNING

by Jason Toy


THE PRESENTATION

show some of my experiments, both testing the limits of other people’s models and training my own models

overview of how the models work

for both artists and machine learners in the audience, I tried to make it so you will all learn a little

hopefully inspire some of you to go out and build something awesome

MY BACKGROUND - JASON TOY

my main passion is general artificial intelligence

studied math and computer science

generalist: I program a little of everything, master of nothing

founded a couple of companies: rubynow, socmetrics - using ML for mining social media

CEO of Filepicker, sold at the beginning of 2016

exploring the intersection of machine learning, art, and entrepreneurship

DEEP LEARNING VS TRADITIONAL MACHINE LEARNING

mostly automated feature extraction

much better at learning nonlinear relationships

WHAT IS GENERATIVE MODELING

generative vs discriminative

architect of models

generative modeling has been around for a long time - used in architecture, design, games, etc.

miniature systems that mimic something in real life, “artist in a box”

more fun; I'm not as interested in increasing ad clickthrough rates

DISCRIMINATIVE VS GENERATIVE

generative:

naive bayes

LDA

deep learning

discriminative:

SVM

random forest

linear/logistic regression
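
a rough sketch of the difference in scikit-learn (illustrative toy data, not from the talk): the generative classifier models how each class produces its features, the discriminative one only models the boundary between labels

# generative (naive Bayes, models p(x|y)p(y)) vs discriminative (logistic regression, models p(y|x))
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

generative = GaussianNB().fit(X_tr, y_tr)              # learns p(x | y) and p(y) per class
discriminative = LogisticRegression().fit(X_tr, y_tr)  # learns p(y | x) directly

print("naive bayes:", generative.score(X_te, y_te))
print("logistic regression:", discriminative.score(X_te, y_te))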

GENERATIVE DEEP MODELS

tweak-able output w/ vectors
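
what "tweak-able output w/ vectors" looks like in practice - a minimal numpy sketch, assuming some trained generator decode(z) (hypothetical, not defined here) that turns a latent vector into an image:

import numpy as np

def interpolate(z_a, z_b, steps=8):
    # walk in a straight line between two latent vectors
    return [z_a + t * (z_b - z_a) for t in np.linspace(0.0, 1.0, steps)]

z_a = np.random.randn(128)  # latent vector behind output A
z_b = np.random.randn(128)  # latent vector behind output B

# each intermediate vector decodes to something "between" A and B;
# adding an attribute direction vector nudges a single property of the output
# frames = [decode(z) for z in interpolate(z_a, z_b)]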

EXPERIMENTS

CHAR-RNN

“I'm not going anywhere. I will bring the poorly educated back bigger and better. It's an incredible movement. ”

“We're losing companies, the economy. We are going to save it. We're going to bring the party. Let's Make America Great Again”

“I want to thank the volunteers. They've been unbelievable, they work like endlessly, you know, they don't want to die. My leadership is good”

CHAR-RNN

1. vanilla (image classification)
2. sequence output (image -> text)
3. sequence input (sentiment analysis)
4. seq2seq (machine translation)
5. synced seq2seq (video classification)

CHAR-RNN

RNN - recurrent because they perform the same task for every element of a sequence; typically 2-3 layers

LSTM - long short term memory

similar to a vanilla RNN, but the hidden state is calculated differently (gates help it remember longer-range context)
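
a minimal character-level LSTM sketch in Keras, just to show the shape of the idea (the models in this talk were trained with the char-rnn-tensorflow repo linked in the appendix, not this code; the corpus filename is made up):

import numpy as np
import tensorflow as tf

text = open("trump_speeches.txt").read().lower()   # hypothetical corpus file
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
X = np.array([[char_to_idx[c] for c in text[i:i + seq_len]]
              for i in range(0, len(text) - seq_len, 3)])
y = np.array([char_to_idx[text[i + seq_len]]
              for i in range(0, len(text) - seq_len, 3)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 64),
    tf.keras.layers.LSTM(256),                      # the "memory" over the sequence
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=10)
# sampling: feed a seed string, repeatedly sample the next character from the
# softmax output, append it to the seed, and repeat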

MY CHAR-RNN EXPERIMENTS

what does Helen Keller think?

seeing is like or inspirents of a kiss licks, in child, for the last decting of accomplish with me for the mistakes in silence is to keep the moments filled whiter, the chaps of the house language was sends a humanise.

i wish i could presepred its repepenting and the days like the poor discuss of language of the poem in the letters, dotiment in the endless good and eager and over the charicality of the hall of rubbings that I hapmende the comprehend, the birds like your mind to perhaps the not wind I should do?

MY CHAR-RNN EXPERIMENTS

“i love you. Now her before it just numberse idevening with the press over. I was probably ever need to ever admit? Right” - Trump

“life is an economy. I was in the LGBT communities can to the worst of the gun not only the fight are of us safe and I start up these are not grow…” - Hillary

FUTURE CHAR-RNN EXPERIMENTS

train a model to talk like a person with little data? transfer learning?

could we train a model off of a standard “human” model?

could we train a model to talk in different emotions/styles?

DEEP DREAMING / INCEPTION

A MACHINE LEARNING IMAGE CLASSIFIER

LAYERS = LEARNED FEATURES

architectures:

imagenet

googlenet

alexnet

GOOGLENET

LAYER AMPLIFICATION

objective function: maximize the activation of as many neurons as possible in a chosen layer

key trick: push the gradient back into the image (gradient ascent on the input)

feedback loop: repeat on the modified image

choose different layers for different effects: conv2/3x3, inception_3a, etc.
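
a minimal sketch of that amplification loop in TensorFlow/Keras, assuming a pretrained InceptionV3 (a GoogLeNet descendant); the layer name "mixed3" is just an example:

import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
layer = base.get_layer("mixed3").output
dream_model = tf.keras.Model(inputs=base.input, outputs=layer)

@tf.function
def dream_step(img, step_size=0.01):
    # img: a (1, h, w, 3) float tensor preprocessed for InceptionV3
    with tf.GradientTape() as tape:
        tape.watch(img)
        loss = tf.reduce_mean(dream_model(img))  # objective: amplify the layer's activations
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8
    return img + step_size * grad                # push the gradient back into the image

# feedback loop: repeat many times, then deprocess and save the image
# for _ in range(100): img = dream_step(img)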

TEST IMAGE


TRAINING MY OWN DREAMS

INCEPTION FUTURE EXPERIMENTS

train with different image sets - sea life, reptiles?

different objective function - activate only 1 group of neurons?

hallucinate only in selected regions of the image?

testing different network architectures

NEURALSTYLE

Paint images in the style of any painting

A NEURAL ALGORITHM OF ARTISTIC STYLE

paper: http://arxiv.org/abs/1508.06576

The key finding of this paper is that the representations of content and style in the CNNs are separable.

CNN - convolutional neural network

higher layers in the network capture the content of the image

style computed from multiple layers’ filter responses
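
a minimal sketch of the two losses, as I read the paper (not the exact code behind these images); feat stands for feature maps pulled from a pretrained CNN such as VGG:

import tensorflow as tf

def content_loss(content_feat, generated_feat):
    # content: match the raw activations of one high layer
    return tf.reduce_mean(tf.square(generated_feat - content_feat))

def gram_matrix(feat):
    # feat: (height, width, channels) -> (channels, channels) filter correlations
    flat = tf.reshape(feat, (-1, feat.shape[-1]))
    return tf.matmul(flat, flat, transpose_a=True) / tf.cast(tf.shape(flat)[0], tf.float32)

def style_loss(style_feats, generated_feats):
    # style: match Gram matrices across several layers
    return tf.add_n([
        tf.reduce_mean(tf.square(gram_matrix(g) - gram_matrix(s)))
        for s, g in zip(style_feats, generated_feats)
    ])

# total loss = alpha * content_loss + beta * style_loss, minimized w.r.t. the image itself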


NEURALSTYLE FUTURE EXPERIMENTS

can we automatically find the “good” images from a combination?

can we know beforehand if a combo style/content will look good?

currently built on a pretrained VGG net; what happens if we train it on a different data set - will the art look different?

will a different architecture make better art?

MULTIMODAL! STORYTELLING

I ACCIDENTALLY GAVE THE ANIMAL BACK OF MY HEAD , BREATHING DEEPLY . THERE WAS NO DOUBT IN HER EYES , AND I COULD TELL BY THE LOOK ON HIS FACE THAT HE DID N'T APPROVE OF WHAT WAS HAPPENING TO ME . IN FACT , IT MUST HAVE BEEN ONE OF THOSE RARE OCCASIONS , AS WELL AS A PET ANIMAL . HER SCENT FILLED THE AIR . THAT 'S WHAT SHE WAS LOOKING FOR , AND NOW SHE HAD TO STAY AWAKE LONG ENOUGH TO DIG UP THE LEASH

SKIP-THOUGHT VECTORS

sentence -> vectors
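
how I understand the storyteller glue works (an assumption, not code from the talk; encode stands in for a pretrained skip-thought encoder that is not defined here): caption vectors get shifted toward "story space" with simple vector arithmetic before decoding:

import numpy as np

def shift_style(caption_vec, caption_mean, story_mean):
    # move the caption's vector from "caption space" toward "story space"
    return caption_vec - caption_mean + story_mean

# caption_vecs = encode(image_captions)       # skip-thought vectors of image captions
# story_vecs   = encode(romance_sentences)    # skip-thought vectors of story text
# shifted = shift_style(encode(["a dog on a leash"])[0],
#                       caption_vecs.mean(axis=0), story_vecs.mean(axis=0))
# a decoder trained on the story corpus then generates story-style sentences from `shifted`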

TRUMP STORYTELLER

FUTURE NEURAL STORY EXPERIMENTS

train with different text

a “seeing” Helen Keller version

train on different visual features

AND MANY OTHER EXPERIMENTS... HOPEFULLY INSPIRING

DATA IS ESSENTIAL

many of these models are built on public datasets

data has always been a problem; it's an even bigger problem for deep learning and general models

very hard to get data; how can this be solved?

constantly on my mind; let's connect if you're interested

DL IS NOT ALL FUN AND UNICORNS

data issue

specialized software/hardware pipelines; GPUs

be prepared to wait; think weeks, not hours

model tuning

architecture tuning

techniques and architectures are changing every day

WHY?

I dream of building larger models

AGI and multi modal models

larger experiments

want to collaborate with cool artists and coders

fun? let's talk!

LINK APPENDIX

STUDY LINKS

what is deep learning: http://www.jtoy.net/2016/02/14/opening-up-deep-learning-for-everyone.html

generative models: https://en.wikipedia.org/wiki/Generative_model

discriminative models: https://en.wikipedia.org/wiki/Discriminative_model

TEST LIVE MODEL LINKS

trump char-rnn model: http://somatic.io/models/WZmmBjZ9

neural style model: http://www.somatic.io/models/5BkaqkMR

neural talk model: http://somatic.io/models/qoEGanRe

romance story telling: http://somatic.io/models/2n6g7RZQ

LINKS

VGG net data used: http://www.robots.ox.ac.uk/~vgg/research/very_deep/

tensorflow version: https://github.com/anishathalye/neural-style

neural style paper: http://arxiv.org/abs/1508.06576

char-rnn code: https://github.com/somaticio/char-rnn-tensorflow

mscoco: http://mscoco.org

imagenet: http://image-net.org/

LINKS

char-rnn: https://github.com/somaticio/char-rnn-tensorflow

tensorflow char-rnn tutorial: https://www.tensorflow.org/versions/r0.9/tutorials/seq2seq/index.html#recurrent-neural-networks

neuralstyle: https://github.com/anishathalye/neural-style

“Every great advance in science has issued from a new audacity of imagination.” – John Dewey

Jason [email protected]

I write here: http://jtoy.net and http://somatic.io/blog. My models are here: http://somatic.io

@jtoy

QUESTIONS?