Artificial Intelligence Is Back: Deep Learning Networks and Quantum Possibilities


History of AI and future possibilities

AI is hot again
John Mathon

MV Speaker Series 2015-03-03

80s - Genesis

When I started my career at MIT, AI was thought to be on the verge of discovering the secret of human intelligence. My MIT programming contest victory: minimax rethought

The initial successes were things like Blocks World, some initial neural network work, and Mathematica

The idea was that if you combined enough smart things like Mathematica and Blocks World, you would eventually get intelligence

Neural networks didn’t go very far. Lots of problems getting the networks to converge.

I saw through this and abandoned the study quickly; I called it the chicken problem. The problem of deep learning had not really been attempted

Marvin Minsky called the bluff publicly, and the industry collapsed quickly after that

90s – Rule-based systems

In the late 80s a revitalization of AI occurred, and a new heyday began based on rule-based systems that allowed you to operate on knowledge systems. Primary among these was KEE (Knowledge Engineering Environment) from IntelliCorp

This ultimately failed again and seemed to put AI into a deep funk

00s - Statistics

Machine Learning - Brute force statistical approach

This is a statistical approach to learning

Find an “algorithm,” “model,” or “formula” that mimics the data closely enough to get paid. More than 10 different statistical techniques are in use, including neural nets

Pass lots of “labeled” data through the system
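To make the statistical approach concrete, here is a minimal sketch in NumPy of fitting a “model” to labeled data with ordinary least squares; the dataset and model are invented for illustration, not from the talk:

```python
import numpy as np

# Toy "labeled" dataset: inputs X and targets y (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))            # 200 examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.standard_normal(200)  # noisy labels

# "Find a model which mimics the data": least-squares fit of weights w.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned model then predicts on new, unseen inputs.
x_new = np.array([0.3, -0.2, 0.9])
print("learned weights:", w)
print("prediction:", x_new @ w)
```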

Watson

Like Mathematica, a hard-core machine-learning approach

2010 – Convolutional Neural Nets

A neural net called the Convolutional Neural Network (CNN); behind-the-scenes advancements; invariance built in (convolution)

Backpropagation and various gradient-descent algorithms

Two layers at a time “learn”: an abstraction layer and a filter layer – the filter layer reduces computation

The brain is composed of layers, and the vision system seems to be similar, learning abstractions in layers

The first 2 layers learn lip, ear, nose, brow, tire, window, … The next 2 layers learn combinations that make face, truck

The next 2 layers – female, male, Mustang, …

Large data sets of “labeled” data: the same problem as machine learning
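A small NumPy sketch of the built-in translation invariance mentioned above: a convolutional filter responds identically to a feature wherever it appears. The 1-D signal and “edge detector” filter are illustrative assumptions:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: slide the kernel over the signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

edge = np.array([-1.0, 1.0])     # a tiny "edge detector" filter
a = np.zeros(10); a[2] = 1.0     # a feature at position 2
b = np.zeros(10); b[7] = 1.0     # the same feature shifted to position 7

# The filter's peak response is identical in both cases, just translated:
print(conv1d(a, edge))
print(conv1d(b, edge))
print(conv1d(a, edge).max() == conv1d(b, edge).max())  # True
```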

Deep Belief Networks (CNN +)

The holy grail of learning is not having to use “labeled” datasets. DBNs get around this by using a Markov probabilistic approach to neuron evolution. Needing no “labeled” data initially makes them much easier to use.

After initial training we pass “labeled” data through to reinforce learned pathways and do a better selection of the best abstractions.

Training individual layers is much faster and results in better abstractions (see the sketch after this list)

Recurrent (feed the lowest layer back into the top); unlimited layers

Addition of memory (a new neuron type) – the secret sauce; made cursive recognition much better
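A compact sketch of the label-free idea behind DBNs: greedy, one-layer-at-a-time training of restricted Boltzmann machines with one-step contrastive divergence (CD-1). Sizes, learning rate, and data are illustrative assumptions, and biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=5):
    """One-step contrastive divergence (CD-1); uses no labels at all."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W)        # upward pass: hidden activations
        v1 = sigmoid(h0 @ W.T)      # downward pass: reconstruction
        h1 = sigmoid(v1 @ W)
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)  # CD-1 update
    return W

# Greedy stacking: each layer learns abstractions of the layer below.
X = (rng.random((500, 64)) > 0.5).astype(float)  # toy unlabeled data
W1 = train_rbm(X, 32)
H1 = sigmoid(X @ W1)                             # first layer of abstractions
W2 = train_rbm(H1, 16)                           # trained one layer at a time
```

After this unsupervised stage, labeled data would be passed through to fine-tune the stack, as the slide describes.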

Achievements of Deep Learning

The best at cursive recognition (DBNs have beaten all others)

The best at text recognition

The best at object recognition (accuracy up from 54% to 64%; used in Google+)

The best at facial recognition (Facebook is 97% accurate, better than the FBI, with a 9-layer DBN)

The best at voice recognition (DBNs now have essentially 100% penetration)

Used in Watson now

Possibly other purposes at Google

The DeepMind acquisition gives Google 2 of the 4 geniuses in the field; Facebook has 1, and only 1 is left in academia

Why is the AI problem hard?

The brain is remarkable at invariance: translation, scale, distortion, occlusion, color, quality, background. It is very hard to replicate anything close to that flexibility; the brain uses many clues

The brain applies abstractions across different disciplines. In artificial networks there is almost never an attempt to learn more than one “thing,” because different inputs tend to create instability and less precise recognition

The brain creates abstractions upon abstractions in a stable way across dozens or hundreds of layers or levels, with few problems from backpropagation or unlearning

Have not been able to go beyond a few layers

Fears of Deep Learning

Elon Musk, Stephen Hawking, Bill Gates

All fear the deep learning coming out of DeepMind

Elon Musk says that what he knows and has seen of deep learning gives him fear

Google did establish an “ethics panel” to make sure it uses AI safely; this was key to DeepMind agreeing to be bought by Google

It is a love/hate relationship. We fear AI being smarter and malevolent, taking our jobs away or killing us, the way we instinctively fear aliens. We fear losing our “special” status and place. We also want to like, and be fair to, any sentient creatures. And there is a natural curiosity that, it seems, will inevitably get us there.

There is also the fear of accidentally creating intelligence: some simple thing replicates like a virus in a computer and takes over. In Hyperion (Dan Simmons), an 88-byte code fragment that replicates and evolves eventually becomes an AI.

Isaac Asimov established the Three Laws of Robotics… a robot cannot harm a human… How would you put such a rule into a CNN-based AI?

Why is AI finding interest again?

Social is very much about pictures, voice, video, etc., which are greatly enhanced by image and facial recognition

Making things intelligent, or at least a little intelligent – for instance, recognizing voice better (Siri, Skyvi) – makes things easier to use and provides much greater value.

Self-driving cars recognizing signs, people, etc…

Smarter is better, even a little bit smarter … see Google

BIG DATA: SkyNetworks, H2O.ai, a16z, Azure Machine Learning Studio, Google…

Can deep learning become “really intelligent”?

Required

1) Plasticity

2) Many levels of abstraction

3) Planning

4) Memory in general

5) Self-perception

6) Creativity

Unsure but associated

7) Common sense

8) Emotions

9) Self-preservation

10) Qualia

11) Experiential Tagged Memory

12) Consciousness

CNNs lack critical features of “intelligence,” “sentience,” or “consciousness”

Quantum Computers could be a pathway to AI

Performance could be logarithmic compared with traditional computers, i.e., instead of a million computers, 6 might be enough; instead of a trillion computers, 12. For some problems the improvement is simply x^(n/2) (the exponent halves), so instead of 10^60 operations, 10^30 operations.
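A back-of-the-envelope check of the x^(n/2) claim, which is the square-root speedup of Grover-style search:

```python
import math

N = 10 ** 60                    # classical brute-force search: ~N operations
grover_queries = math.isqrt(N)  # quantum search: ~sqrt(N) = 10^30 queries
print(grover_queries)           # 10**30

# Grover's algorithm needs roughly (pi/4) * sqrt(N) iterations in total:
print((math.pi / 4) * 10 ** 30)
```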

Many of the things we think a brain does seem to be the kinds of things quantum computers would be good at: pattern recognition, searching databases, optimization (e.g., neural-weight optimization)

The applications of quantum computers would be in optimization, recognition problems, and security applications.

Quantum Mechanics

Richard Feynman: If you think you understand quantum mechanics, you don't understand quantum mechanics.

Wave/Particle Duality and the Measurement Problem:

A particle acts like a self-interfering wave which, when you look at it, collapses from being anywhere in space to one location, seemingly breaking the speed of light.

There are 12 current theories of “why” collapse seems to happen, including the newest, quantum Darwinism, in which space itself has memory and an evolution mimicking genetic evolution. Others include Many Worlds, Copenhagen, and quantum gravity

Quantum foam: virtual particles, the infinite possibilities, the Higgs particle, a non-zero vacuum state

Quantum superposition: particles, when not being observed, seem to occupy ALL possible states and take all possible paths simultaneously. Yet when “measured” they choose, with probability varying by the energy consumed in the whole process. All paths are possible and appear, but the least-energy path is predominant. This seems simple, but how does NATURE figure this out? It’s nontrivial.

Calculating the solution to a 3-body problem in quantum mechanics is nearly impossible: there are a nearly infinite number of possible loops, a task that takes a year of supercomputer time… yet nature does this 10^40 times a second for 10^75 particles. Many Worlds is popular because in it nature doesn’t compute anything; it just splits for every possible choice, and the highest-probability branch is the universe we happen to statistically find ourselves in. It also eliminates the collapse problem, but introduces the problem of an infinite number of universes created every second.

Quantum computers are very different from regular computers

Basic Quantum Operations:

Hadamard Operation: Put a qubit into multiple states with equal probability

The result: a true random number generator

Doing the Hadamard operation twice returns the qubit to its original state (H·H = identity; see the sketch below)

Shor’s algorithm – factorization

A particle traverses a puzzle we set up

Grover’s algorithm – unstructured database search

QCL – Quantum Computing Language

In a quantum computer we set up the experiment, then run it, the answer is whatever nature does.
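A minimal NumPy simulation of the Hadamard points above: one Hadamard on |0⟩ gives a fair random bit when measured, and two Hadamards return the original state. This is a plain state-vector simulator sketch, not QCL or D-Wave code:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])                   # the |0> state

psi = H @ ket0
print(np.abs(psi) ** 2)                       # [0.5, 0.5]: equal superposition

# "Measuring" samples an outcome with those probabilities: a random bit.
rng = np.random.default_rng()
bit = rng.choice([0, 1], p=np.abs(psi) ** 2)
print(bit)

# Two Hadamards give back |0>, not NOT: H @ H is the identity.
print(np.allclose(H @ H, np.eye(2)))          # True
```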

How practical are Quantum Computers?

The D-Wave 2, a 512-qubit machine, was used by Google to demonstrate performance superior to any existing computer (up to 5x or more)… however, it is not suitable for all problems

The D-Wave 3, an 1152-qubit computer, releases in March; D-Wave is doubling capacity every year and improving entanglement.

2 ^ 1152 patterns of 1152 bits in superposition simultaneously.

That is NOT doubling computing capability, it is SQUARING it (see the arithmetic sketch below)!

A D-Wave 6 might be only an 8000-qubit computer, but more powerful than all computers existent on Earth today
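The doubling-vs-squaring point as plain arithmetic: adding one qubit doubles the number of superposed basis patterns, so doubling the qubit count squares it:

```python
# An n-qubit register superposes 2**n basis patterns.
states_512 = 2 ** 512
states_1024 = 2 ** 1024
print(states_1024 == states_512 ** 2)      # True: doubling qubits squares states

# D-Wave 2 (512 qubits) vs. the 1152-qubit machine cited above:
print(2 ** 1152 // 2 ** 512 == 2 ** 640)   # the state space grows by 2**640
```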

Brain is a quantum computer?

Roger Penrose believes this. If he believes it, it is worth looking at.

Evidence that nature “Uses” quantum mechanics to solve difficult problems

Plants using single photons from the sun in photosynthesis

Birds sensing direction (magnetoreception)

In the eyes, ears, nose creating ultra-sensitive senses

If nature uses it for these functions it could also use it for many other functions

We see that quantum mechanics is good at solving problems a brain has to do: pattern recognition, efficient operation, searching for information, neuron weighting calculations

Brain science has not discovered how the brain stores all the experiential information we consume, nor the process of pattern recognition, reasoning or incredibly complex weighting optimization characteristic of learning

Recent evidence of quantum effects in the microtubules of the dendrites of nerves. Recent evidence of a molecular process of phosphorylating microtubules to encode memories.

If the brain uses QM, we are far away from “intelligence”

The human brain could be composed of up to 1 trillion quantum computers, or one quantum computer with a trillion trillion qubits, or some combination

Compared to an 1152-qubit D-Wave at $10 million, quite a deal

We know that elephants, which have the largest neural structure (the human brain is about 1/2 the size), are not as intelligent as humans.

The largest CNN we have built has 650,000 virtual neurons, which is between 1/10,000 and 1/1,000,000 of the brain
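A rough check of that ratio, assuming the commonly cited estimate of about 86 billion neurons in the human brain (an assumption, not a figure from the talk):

```python
brain_neurons = 86e9     # commonly cited estimate (assumption)
cnn_neurons = 650_000    # largest CNN cited in the talk

print(cnn_neurons / brain_neurons)  # ~7.6e-6, i.e. about 1/130,000
# Consistent with the slide's 1/10,000 to 1/1,000,000 range.
```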

AI still probably a distance away but extremely useful

AI is extremely useful and getting better. AI systems can, and will, be able to do some things better than humans

They will be able to make our lives easier

They will in some cases remove people’s jobs

They make programs more intuitive, helpful, efficient

Fears of AI overtaking humans are overblown. These systems make stupid errors and have no common sense

They will not be gaining consciousness soon

They show no creativity

They show no planning ability

They show no ability to learn multiple disciplines

How can WSO2 benefit?

Machine Learning Adapter and Connectivity

Provide easy interfacing to machine learning systems

Wizard-like simplicity for setting up big-data systems

More and more big-data usage

D-Wave adapters to funnel data and programs to and from D-Wave machines to solve problems

WSO2 Deep Learning Server

Configurable Layers and parameters

Autoscaling
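A purely hypothetical sketch of what “configurable layers and parameters” might look like for such a server; every key and value here is invented for illustration and is not a real WSO2 API:

```python
# Hypothetical configuration for an imagined deep learning server.
# None of these keys are a real WSO2 API; they only illustrate the idea.
config = {
    "layers": [
        {"type": "conv", "filters": 32, "kernel": 5},
        {"type": "pool", "size": 2},
        {"type": "dense", "units": 128},
    ],
    "training": {"learning_rate": 0.01, "epochs": 20},
    "autoscaling": {"min_workers": 1, "max_workers": 16},
}
```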

Roger Penrose and Consciousness

Roger Penrose pioneered the use of spinors in relativity and invented twistor space.

He was a slow learner; his high school teacher had to give him two class periods to finish tests. He worked everything out from basics. He was extremely visual.

He did what physicists told him he shouldn’t

Twistor space is discrete, NOT continuous. When you calculate from one vertex to another, you get a result with different space-time coordinates; intermediate space-time coordinates don’t exist. What seems like “fuzzy, spooky, foam-like probabilistic action at a distance” in space-time appears as simply non-existent points in a twistor lattice. Collapse (the measurement problem) is not a problem, because the motion of particles is simply movement from vertex to vertex with different probabilities.

All Physicists say: “You can’t possibly imagine the quantum world so give up. Just study the math, forget trying to visualize it.”

Penrose used spinors to visualize spinning particles, which involved “making real” the imaginary numbers.

Imaginary numbers are central to quantum physics; they appear everywhere. So Penrose created twistor space: 5 dimensions, 2 of them complex

Quantum Consciousness: Orchestrated Objective Reduction

Strong evidence of quantum processes in nature and in the microtubules of dendrites in brains

Penrose calculates a decoherence rate of approximately 40 times per second, which corresponds to brain waves – a correspondence that is otherwise completely unmotivated

Using Turing machines and some Gödel theorems, Penrose argues that the human brain does things no computer can EVER do.

He says the human brain and consciousness are not only a quantum computer, but that this quantum computer has capabilities exceeding even what we know about quantum mechanics, i.e., new physics.

The theory says that human consciousness is in the quantum fuzz, similar to quantum Darwinism, and that the brain is a transducer. There is evidence that something makes decisions before the human brain is even aware of it, which means that our consciousness may transcend our bodies.