TRANSCRIPT
Henry Shevlin, Leverhulme CFI, University of Cambridge
▪ Status of AI: thriving, with many striking successes, especially from deep learning.
▪ Challenges remain to overcome; Moore’s Law is slowing.
▪ Important to manage expectations – two previous “AI winters” (1974–1980, 1987–1993) fuel fears of a new one.
Introduction: where we are, where we’re going
▪ However, still grounds for optimism. Business and academia heavily invested, conditions right for a ‘forcing economy’ in AI.
▪ Datasets growing rapidly; 4.4 trillion GB in 2013, at least 18 trillion GB today.
▪ Awareness among companies of limitations of current paradigms, openness to interdisciplinary contributions.
▪ Key to recognize, though, that existing AI still lacks general intelligence.
General Intelligence
▪ Famous distinction by Isaiah Berlin between foxes and hedgehogs.
▪ The fox knows lots of things, the hedgehog one big thing.
▪ Most AI is currently at the hedgehog stage. In other words, AI has specialized intelligence but not general intelligence.
▪ Compare: chess grand master and home cleaner.
▪ We can distinguish three aspects of general intelligence
▪ Robustness: resistant to interference.
▪ Flexibility: capable of learning/transferring skills.
▪ Autonomy: capable of extended independent operation.
▪ Note that a wide variety of animals across multiple phyla excel according to these benchmarks.
▪ Suggests that the key to general AI may lie with copying nature’s tricks.
General Intelligence (I) – Robustness
▪ Animals can accomplish feeding, mating, navigating, etc. in a wide range of environments/conditions.
▪ The Western honey bee, for example, inhabits every continent except Antarctica, and has adapted foraging strategies for different habitats.
▪ Birds (e.g. the great snipe) manage very long migrations (>5,500 km) and time their stopovers to reflect the remaining distance.
▪ In short: even complex animal behavior is reliable.
▪ By contrast, the performance of current artificial systems is generally extremely brittle.
▪ Rare to see animals lose control of basic motor functions, fall over, get stuck in loops, ‘glitch out’, etc.
▪ These problems ubiquitous in artificial systems and robots (as any owner of a robot vacuum cleaner will know).
▪ Likewise, outside of tightly-regulated training environments, artificial systems are vulnerable to a snowball effect of small errors, and can often be tricked into making spectacular errors, e.g., adversarial examples.
[Figure: adversarial example – an image of a ‘dog’, after a small perturbation filter, is classified as ‘ostrich’.]
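The adversarial-example effect can be sketched in a few lines. The following is a minimal illustration with a toy linear classifier (my own sketch, not from the talk): an FGSM-style perturbation, tiny in every coordinate, flips the predicted class.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w . x > 0, else class 0.
w = np.array([1.0, -2.0, 3.0])
x = np.array([0.5, 0.1, 0.2])   # correctly scored as class 1 (w @ x = 0.9)

# FGSM-style attack: nudge every input dimension by eps in the direction
# that most decreases the class-1 score, i.e. along -sign(w).
eps = 0.5
x_adv = x - eps * np.sign(w)

score, score_adv = w @ x, w @ x_adv
print(score, score_adv)   # ~0.9 vs ~-2.1: a small perturbation flips the class
```

The same mechanism scales up to deep networks, where the gradient of the loss with respect to the input plays the role of `w` here.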
▪ Key goal for future AI: eliminate simple failure modes, make systems reliable and resilient. This may require new leaps in understanding of biological intelligence.
▪ But what would a more robust AI look like? Imagine a system that ensures it doesn’t run out of power; doesn’t fall over going upstairs; can fix its own software glitches.
▪ However, such systems may be harder to control – fewer natural ‘off switches’ or points of intervention. Control mechanisms are vital for containing malware or runaway AI.
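As a toy illustration of that kind of self-maintenance (a hypothetical sketch, not anything described in the talk), even a few lines of control logic can keep an agent from ‘dying’ mid-task by interrupting its work to recharge:

```python
def run(battery=100, task_steps=12, recharge_at=20):
    """Work through a task, recharging whenever the battery runs low."""
    log = []
    done = 0
    while done < task_steps:
        if battery <= recharge_at:   # self-monitoring: recharge before dying
            log.append("recharge")
            battery = 100
        else:
            log.append("work")
            battery -= 10            # each unit of work costs 10% battery
            done += 1
    return log

print(run())   # 12 "work" entries with a "recharge" inserted when needed
```

Real robustness is of course much harder: the agent must also notice failure modes it was never programmed to anticipate.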
General Intelligence (II) – Flexibility
▪ A second key aspect of general intelligence where machines fail and animals excel is flexibility.
▪ We are very flexible thinkers. Even animals can learn to do lots of different jobs (think of dogs with jobs).
▪ Lots of other demonstrations of flexible intelligence in animals – bumblebees engaging in social learning, forward planning in rats, ingenious tool use by birds…
▪ Most AIs limited to learning a single set of tasks due to problems with transfer learning and catastrophic forgetting – e.g., the Frostbite challenge.
▪ Also severe problems with ‘one-shot’ learning.
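Catastrophic forgetting is easy to reproduce even in a linear model. Below is a minimal sketch (my own, not from the talk): a single weight vector trained by SGD on task A, then on task B, loses task A almost completely.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(w, X, y, steps=200, lr=0.1):
    """Plain SGD on squared error, one random example per step."""
    for _ in range(steps):
        i = rng.integers(len(X))
        w = w - lr * (X[i] @ w - y[i]) * X[i]
    return w

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

X = rng.normal(size=(100, 2))
yA = X @ np.array([1.0, 1.0])    # task A: y = x1 + x2
yB = X @ np.array([1.0, -1.0])   # task B: y = x1 - x2

w = sgd(np.zeros(2), X, yA)
loss_A_before = loss(w, X, yA)   # near zero: task A learned
w = sgd(w, X, yB)                # now train only on task B...
loss_A_after = loss(w, X, yA)    # ...and task A is forgotten
print(loss_A_before, loss_A_after)
```

Because both tasks compete for the same weights, fitting B overwrites what was learned for A – the essence of the problem at any scale.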
▪ Fast and flexible learning is currently a target of intense interest in AI research. So what would a more flexible AI look like?
▪ Imagine just one program that can do your taxes, order pizza, respond to emails, diagnose your medical condition, and run your bath just how you like it.
▪ Immediately raises worries about automation.
▪ However, also likely to shift how we think about AIs – no longer tools, but agents in the world like us capable of diverse tasks.
General Intelligence (III) – Autonomy
▪ A final key aspect of general intelligence is autonomy – the ability to manage one’s needs and goals without external support.
▪ Autonomy requires the ability to keep track of priorities, model the environment, and exploit resources, as well as perhaps self-replicate.
▪ Almost all animals manage this – if they didn’t, they wouldn’t be alive. Even hive/swarm creatures are ‘collectively autonomous’.
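One way to make ‘managing one’s needs’ concrete (a hypothetical sketch, not the speaker’s model) is a loop in which every need grows more urgent over time and the agent always services the most urgent one. Integer urgencies are used so the comparisons stay exact:

```python
# Urgency added to each need per time step (integers keep comparisons exact).
GROWTH = {"recharge": 3, "self_repair": 1, "explore": 2}

def step(needs):
    action = max(needs, key=needs.get)   # service the most urgent need
    needs[action] = 0                    # acting on a need satisfies it
    for k in needs:
        needs[k] += GROWTH[k]            # all needs drift upward again
    return action

needs = {"recharge": 0, "self_repair": 0, "explore": 0}
actions = [step(needs) for _ in range(10)]
print(actions)   # the agent cycles through all three needs unprompted
```

Note that no external controller tells the agent what to do next – the schedule emerges from the needs themselves, which is the sense of autonomy at issue.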
▪ By contrast, we’re decades away from robotic systems that can provide their own power, repair damage, or self-replicate.
▪ Even on the more limited challenge of ‘functional autonomy’ (e.g., driverless cars), there have been major obstacles (including fatal crashes).
▪ Autopilots only work reliably in predictable conditions – and the real world environments that humans and animals inhabit are anything but predictable.
▪ Nonetheless, truly autonomous systems would be a gamechanger – from truly effective autonomous vehicles and drones to space probes and rovers capable of repairing damage and using environmental resources.
▪ However, also a threat – autonomous systems are by definition those that can operate without human inputs, raising worries about loss of control.
▪ Consider computer viruses, arguably a simple autonomous system already in existence (but also imagine an autonomous ‘digital immune system’).
Conclusion
▪ AI has surpassed humans in many domains: maths, logic, Go, and Jeopardy!
▪ However, there are many areas where AI systems still fall short of the performance of simple animals: robustness, flexibility, and autonomy.
▪ While we can go a long way with existing systems (e.g., medical diagnosis), the next great leap in AI will require us to meet these challenges.
▪ Luckily we know it can be done, and cognitive science can show us the way – we just need to learn from nature.
Thanks for your attention! For more, see:
Shevlin, H., Vold, K., Crosby, M., & Halina, M. (2019). The limits of machine intelligence. EMBO Reports, e49177. https://doi.org/10.15252/embr.201949177