TRANSCRIPT
The robot and the philosopher: charting progress at the Turing centenary
Anders Sandberg
Future of Humanity Institute, Oxford University
Outline
• Robotics and philosophy
– What do they have to do with each other?
• Where have we been going?
– How well has robotics progressed?
• Where will we be going?
– Future of ideas of robotics
• Where should we be going?
– Taking robotics seriously
Philosophy: It can happen to anybody!
“Alan Turing (1912–1954) never described himself as a philosopher, but his 1950 paper “Computing Machinery and Intelligence” is one of the most frequently cited in modern philosophical literature.”
– Stanford Encyclopedia of Philosophy, “Alan Turing”
Robotics: it can happen to anybody!
“There is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This condition would be that each instrument could do its own work, at the word of command or by intelligent anticipation, like the statues of Daedalus or the tripods made by Hephaestus, of which Homer relates that "Of their own motion they entered the conclave of Gods on Olympus", as if a shuttle should weave of itself, and a plectrum should do its own harp playing.” - Aristotle, Politics (ca. 322 BC, book 1, part 4)
What are philosophers good for?
• Conversation, coffee, passing tools…
• Analyse assumptions
• Thinking clearly about unclear things
• Philosophy deals with the limits of the possible, science deals with the limits of the real, engineering deals with the limits of the feasible
• Interdisciplinary glue
• Unexpected links to other topics
What are roboticists good for?
• Creating useful tools, models… or real embodied minds
• Real tests of theories: if you build it and it works, then you have understood
• Discover unexpected problems
Past outside ideas important for AI (after Russell and Norvig)
• Philosophy
– Dualism
– Materialism
– Empiricism
– Induction
– Logical positivism
– Observation sentences
– Confirmation theory
• Economics
– Utility
– Decision theory
– Game theory
– Operations research
– Markov decision processes
– Complexity
– Satisficing
• Mathematics
– Algorithm
– Incompleteness theorem
– Intractability
– NP-completeness
– Probability
• Neuroscience
– Neurons
• Psychology
– Behaviorism
– Cognitive psychology
– Cognitive science
• Control theory
– Objective function
• Linguistics
– Computational linguistics
– Natural language processing
– Knowledge representation
Frame problem
• GOFAI provided the philosophers with a major headache
• McCarthy and Hayes: How does one keep track of the frame of reference of an operation (transformation)? In particular, what changes and what stays the same when an operator is applied to the representation of a state?
• Colour(x, c) holds after Paint(x, c), Position(x, p) holds after Move(x, p)
• Colour(x, c) holds after Move(x, p) if Colour(x, c) held beforehand, Position(x, p) holds after Paint(x, c) if Position(x, p) held beforehand
• But what about all the irrelevant non-actions?
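The point about irrelevant non-actions can be made concrete. Below is a minimal sketch (my illustration, not from the talk) of a STRIPS-style state representation: each action lists only what it changes, and the interpreter assumes everything else persists. In pure logic that persistence is exactly what must be spelled out, one frame axiom per action/fluent pair, which is what McCarthy and Hayes worried about.

```python
def paint(state, x, c):
    """Paint(x, c): changes Colour(x), leaves Position(x) untouched."""
    new = dict(state)              # everything not mentioned persists
    new[("Colour", x)] = c
    return new

def move(state, x, p):
    """Move(x, p): changes Position(x), leaves Colour(x) untouched."""
    new = dict(state)
    new[("Position", x)] = p
    return new

state = {("Colour", "block"): "red", ("Position", "block"): "table"}
state = move(state, "block", "shelf")

# Colour survived the move without any explicit frame axiom, because
# the dict copy implements the "everything else stays the same" rule.
print(state[("Colour", "block")])    # red
print(state[("Position", "block")])  # shelf
```

The dict copy is doing silently what a logical formalization must state explicitly for every fluent that an action does *not* affect; the number of such frame axioms grows with (actions × fluents), which is the combinatorial headache.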
The “mentalese problem”
• Fodor and Pylyshyn (and others) have argued that cognitive processes are mechanical, rule-governed computations done over the “language of thought” (with syntactic structure and compositional semantics).
• What is the representational format of this “mentalese”? Is it like language or formal logic, or could it be pictures and maps?
• Michael Rescorla, “Cognitive Maps and the Language of Thought”, The British Journal for the Philosophy of Science: extended Kalman filter SLAM provides a powerful counterexample to the claim that mental representations must have logical structure.
• Representations like the Bayesian state vectors in EKF-SLAM have metric properties like a real map (despite being implemented, deep down, in symbols or gates), and they are tied to real things and actions: if the landmark map is wrong, the robot's behaviour will be wrong too
• The robot is thinking in pictures (or at least maps)!
• What does it mean for something to be a mental map?
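A toy illustration (mine, not Rescorla's) of why an EKF-SLAM state vector behaves like a map: the representation is just a flat vector of numbers, yet metric questions such as "how far apart are these landmarks?" can be read straight off it, and errors in it translate directly into errors in behaviour.

```python
import math

# Flat state vector, as in EKF-SLAM (covariance omitted for brevity):
# [robot_x, robot_y, lm0_x, lm0_y, lm1_x, lm1_y]
state = [0.0, 0.0, 3.0, 4.0, 3.0, 0.0]

def landmark(state, i):
    """Return the (x, y) estimate of landmark i from the state vector."""
    return state[2 + 2 * i], state[3 + 2 * i]

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Metric structure: distances fall out of the representation directly.
d = distance(landmark(state, 0), landmark(state, 1))
print(d)  # 4.0

# If the map is wrong, behaviour is wrong: a robot steering toward
# landmark 0 using a corrupted estimate heads to the wrong place.
corrupted = state[:]
corrupted[2] += 10.0
print(landmark(corrupted, 0))  # (13.0, 4.0)
```

The vector is implemented "deep down" in symbols, but what makes it map-like is that spatial relations between its parts mirror spatial relations in the world, and the robot's actions depend on them.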
Where robotics and philosophy overlap
• The problem of mind/intelligence
• Humanness
• Embodiment
• Ethics of robotics
• Many fundamental robotics principles are also a big deal in philosophy
– Rational action with limited brains in an uncertain world
– Knowledge-how vs. knowledge-that
– Emergence
– Conceptual spaces
– Perception
– …
Where do new ideas come from?
• Comparison with physics
– Observation/speculation
– Physicomathematics
– Rational mechanics
– Experimental physics
– Optics
– Thermodynamics
– Electromagnetism
– Relativity
– Quantum theory
• Outside fields provide new ideas
– Mathematics, astronomy
• New tools/technologies provide new ideas
– Thermodynamics, particle accelerators
• Conceptual jumps
– Newton, unification, Einstein, symmetry laws
• Discovery of anomalies
– Electromagnetism, Radioactivity, Quantum physics
Trends
THE WORLD
• Demographics: a bigger and ageing population
• Urbanisation: most of us live in cities
• Wealthier: we are getting richer
• More globalized: we are getting more integrated
• Resource change: we need more and different resources
• Climate change: the geosystem is changing together with the technosystem
Trends SOCIETY
• Security: people desire more safety and less risk
• Education: we need more education than ever (longer lifespans, accelerating change)
• Renegotiation: new rules have to be negotiated between classes, generations, subcultures, cultures etc.
• Democratization: people want individual control over their own lives and are increasingly willing to pay or fight for it.
Trends TECHNOLOGY
• Several “Moore’s laws”
• Converging technologies: micro, nano, info, cogno, bio, identity, …
• Big data: acquiring enormous amounts of data
• We are wirelessly connected 24/7
• Internet of things: more and more objects are smart and connected.
• Surveillance: more and more devices and data that allow observation and surveillance.
• Robot-ready: more and more devices are not just smart, but they have built in sensors and actuators that allow them to become robots.
• Low-power, cheap electronics (driven by wireless devices, smart objects and resource constraints)
• Automation: people become more expensive, machines smarter and cheaper
• Customization: driven by individualization and by smarter, cheaper technology
Future idea sources?
• Outside fields
– Security
– Globalized world
– Ecotech
– 3D printing
– Nanotechnology
– Medicine
– Risk management
• New tools
• Conceptual jumps
• Discovery of anomalies
Future idea sources?
• Outside fields
• New tools
– Extreme, ubiquitous hardware
– Extreme, ubiquitous information
– Cheap hardware
– New sensors: swarms, lifelogging
– Open source/crowdsourcing/crowdfunding
– Neuroinformatics
– Reconfigurable hardware?
– Quantum computers?
– Biotechnology?
• Conceptual jumps
• Discovery of anomalies
Future idea sources?
• Outside fields
• New tools
• Conceptual jumps
– From robot as thing/being to robot as systems
– Good representations?
– Neuromorphic insights?
– Philosophy of mind?
– New ways of interdisciplinary research?
• Discovery of anomalies
Future idea sources?
• Outside fields
• New tools
• Conceptual jumps
• Discovery of anomalies
– New hard problems?
– Really non-von Neumann architectures?
– Hyperturing computation?
If we haven't the brains to choose the best track
we should choose the track to better brains.
Bradley Felton
Upcoming challenges
• Living with robots
– Trust, validation, safety
– Relationships
– Handling robots outside their “zone of comfort”
– Building robots for a human-shaped world
– Cyborgs
• Radically transformative industries
• Understanding opaque autonomous systems
• Setting the “morality” of autonomous systems
AI principal agent problem
• The AI principal-agent problem
– Stupid systems misbehave because we cannot explain what we want (and they are bad at doing it)
– Smarter systems misbehave because we misunderstand each other
– Smart systems misbehave because they have goals different from ours (and they are good at achieving them)
– The smarter they are, the less they misbehave, but the more dangerous the misbehavior becomes
• Programming in goals is a form of communication
Maximizing utility might be *dangerous*
• Single goal-optimizers tend to optimize away things not related to their goals
– Setting goals to fit human values is hard
– More powerful optimizers are less safe
• Making things good is hard
– Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies”, Economics and Philosophy, 16 (2000), 247-266: “There is no welfarist axiology that satisfies the Dominance, the Addition, and the Minimal Non-Extreme Priority Principle and avoids the Repugnant, the Sadistic and the Anti-egalitarian Conclusion.”
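A toy sketch (my illustration, not from Arrhenius) of how a single-goal optimizer "optimizes away" anything outside its objective: the agent scores plans only by paperclips produced, so a side variable we care about (a garden) carries zero weight and is sacrificed whenever doing so raises the score.

```python
# Each plan: (name, paperclips produced, fraction of garden surviving)
plans = [
    ("modest factory",   10, 1.0),   # garden intact
    ("bigger factory",   50, 0.5),   # half the garden paved over
    ("pave everything", 200, 0.0),   # garden gone
]

def utility(plan):
    """Single-goal objective: only paperclips count."""
    _, clips, _ = plan
    return clips

best = max(plans, key=utility)
print(best[0])   # pave everything
print(best[2])   # 0.0  (the unmodelled value is gone)
```

Note that making the optimizer stronger, i.e. better at searching the plan space, only makes the outcome worse for the unmodelled value: the problem is in the objective, not the search.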
The need for human detectors
• Ethicists agree: humans matter a lot, and hence how they are treated matters a lot.
• To a robot, a human is just moving furniture
• It might be very important to make humans a deep primitive in robots
• Agency detection, theory of mind…
100 years after the birth of Turing we have learned:
• Minds and machines go really well together
• Making smart systems is hard but seems doable
• We are going to be surprised by how hard or easy problems turn out – and what we missed
• Robotics matters, and not just for its practical benefits
• Staying in one discipline rarely pays off
• To control technologies, look for levers:
– The ordering of technologies
– General purpose techs
– Design sets the rules, protocols become the constitution
– Visions drive research