
Page 1:

SENTIENT MACHINES: FROM PHILOSOPHICAL FOUNDATIONS TO A SAFE IMPLEMENTATION

Mark R. Waser, Digital Wisdom Institute

Mark.Waser@Wisdom.Digital

Page 2:

JAN. 25, 1979: ROBOT KILLS HUMAN

A 25-year-old Ford Motor assembly line worker was killed on the job in a Flat Rock, Michigan, casting plant – the first recorded human death by robot.

Robert Williams’ death came on the 58th anniversary of the premiere of Karel Capek’s play about Rossum’s Universal Robots. R.U.R. gave the world the first use of the word robot to describe an artificial person. Capek invented the term, basing it on the Czech word robota, meaning “forced labor.” (Robot entered the English language in 1923.)

Williams died instantly when the robot’s arm slammed him as he was gathering parts in a storage facility, where the robot also retrieved parts. Williams’ family was awarded $10 million in damages. The jury agreed the robot struck him in the head because of a lack of safety measures, including one that would sound an alarm if the robot was near.

2

1867 – William Bullock, inventor of the rotary web press, was killed by his own invention.

Page 3:

BAXTER: “A ROBOT WITH A REASSURING TOUCH”

3

Page 4:

UNFRIENDLY AI

“Without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources”

“Superintelligence Does Not Imply Benevolence”

4

Page 5:

WHAT IS A SAFE AI / ROBOT?
(AND HOW DO WE CREATE ONE?)

*ANY* agent that reliably shows ETHICAL BEHAVIOR

but . . . the real question is . . . how do we GUARANTEE that reliability?

5

Page 6:

WHAT IS ETHICAL BEHAVIOR?

The problem is that no ethical system has ever reached consensus; in this, ethics is completely unlike mathematics or science. This is a source of concern.

AI makes philosophy honest.

6

Page 7:

THE FRAME PROBLEM

“How do rational agents deal with the complexity and unbounded context of the real world?”

McCarthy, J; Hayes, PJ (1969) Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B; Michie, D (eds), Machine Intelligence 4, pp. 463-502

Dennett, D (1984) Cognitive Wheels: The Frame Problem of AI. In C. Hookway (ed), Minds, Machines, & Evolution: Philosophical Studies: 129-151

“How can AI move beyond closed and completely specified micro-worlds?”

(aka “How can we eliminate the requirement to pre-specify *everything*?”)

Dreyfus, HL (1972) What Computers Can’t Do: A Critique of Artificial Reason. ISBN 0-06-011082-1: MIT Press

Dreyfus, HL (1979) From Micro-Worlds to Knowledge Representation: AI at an Impasse. In Haugeland, J (ed), Mind Design II: Philosophy, Psychology, AI: 143-182

Dreyfus, HL (1992) What Computers Still Can’t Do: A Critique of Artificial Reason. ISBN 978-0-262-54067-4: MIT Press

7

Page 8:

WATSON’S SOLUTION

8

Page 9:

. . . MATCH, NOT THINK OR CREATE

9

Page 10:

THE PROBLEM OF DERIVED INTENTIONALITY

Our artifacts only have meaning because we give it to them; their intentionality, like that of smoke signals and writing, is essentially borrowed, hence derivative. To put it bluntly: computers themselves don't mean anything by their tokens (any more than books do) - they only mean what we say they do. Genuine understanding, on the other hand, is intentional "in its own right" and not derivatively from something else.

Haugeland J (1981) Mind Design. ISBN 978-0262580526: MIT Press

10

Page 11:

ENACTIVE COGNITIVE SCIENCE

A synthesis of a long tradition of philosophical biology starting with Kant’s "natural purposes" (or even Aristotle’s teleology) and more recent developments in complex systems theory.

Experience is central to the enactive approach, and its primary distinction is the rejection of "automatic" systems, which rely on fixed (derivative) exterior values, in favor of systems which create their own identity and meaning. Critical to this is the concept of self-referential relations - the only condition under which an identity can be said to be intrinsically generated by a being for its own being (its self for itself).

Weber, A; Varela, FJ (2002) Life after Kant: Natural purposes and the autopoietic foundations of biological individuality. Phenomenology and the Cognitive Sciences 1: 97-125

11

Page 12:

SELF

a self is an autopoietic system (from αὐτο- (auto-), meaning "self", and ποίησις (poiesis), meaning "creation, production")

The complete loop of a process (or a physical entity) modifying itself (a minimal sketch follows below)
• Hofstadter - the mere fact of being self-referential causes a self, a soul, a consciousness, an “I” to arise out of mere matter
• Self-referentiality, like the 3-body gravitational problem, leads directly to indeterminacy *even in* deterministic systems
• Humans consider indeterminacy in behavior to necessarily and sufficiently define an entity rather than an object AND innately tend to do this with the “pathetic fallacy”
• Operational and organizational closure are critical features

Llinas RR (2001) I of the Vortex: From Neurons to Self. ISBN 9780262621632: MIT Press

Hofstadter D (2007) I Am A Strange Loop. ISBN 9780465030781: Basic Books

Metzinger T (2009) The Ego Tunnel: The Science of the Mind & the Myth of the Self. ISBN 9780465020690: Basic Books

Damasio AR (2010) Self Comes to Mind: Constructing the Conscious Brain. ISBN 9780307474957: Vintage Books/Random House

12
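To make "the complete loop of a process (or a physical entity) modifying itself" concrete, here is a minimal, hypothetical Python sketch. The class name, update rule, and threshold are invented for illustration; it demonstrates self-reference and operational closure only as a programming pattern and makes no claim about consciousness.

```python
# Hypothetical sketch of "the complete loop of a process modifying itself".
# The rule the process applies is part of its own state, and each step may rewrite that rule,
# so its behavior is (partly) generated by the system itself rather than fixed from outside.

class SelfModifyingProcess:
    def __init__(self):
        self.state = 0.0
        # the update rule is itself part of the system's state
        self.rule = lambda s: s + 1.0

    def step(self):
        # 1. act on its state using the current rule
        self.state = self.rule(self.state)
        # 2. observe itself and, as a consequence, rewrite its own rule (the loop closes here)
        if self.state > 10.0:
            old_rule = self.rule
            self.rule = lambda s, f=old_rule: f(s) * 0.5

if __name__ == "__main__":
    p = SelfModifyingProcess()
    for _ in range(20):
        p.step()
    print(p.state)
```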

Page 13:

THE METACOGNITIVE CHALLENGE

Humans are:
• Evolved to self-deceive in order to better deceive others (Trivers 1991)
• Unable to directly sense agency (Aarts et al. 2005)
• Prone to false illusory experiences of self-authorship (Buehner and Humphreys 2009)
• Subject to many self-concealed illusions (Capgras Syndrome, etc.)
• Unable to correctly retrieve the reasoning behind moral judgments (Hauser et al. 2007)
• Mostly unaware of what ethics are and why they must be practiced
• Programmed NOT to discuss ethics rationally

Mercier H, Sperber D (2009) Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34: 57-111 - http://www.dan.sperber.fr/wp-content/uploads/2009/10/MercierSperberWhydohumansreason.pdf

13

Page 14:

THE “HARD PROBLEM” OF CONSCIOUSNESS

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

Chalmers D (1995) Facing Up to the Problem of Consciousness. Journal of Consciousness Studies 2(3): 200-219

Waser MR (2013) Safe/Moral Autopoiesis & Consciousness. International Journal of Machine Consciousness 5(1): 59-74 - http://becominggaia.files.wordpress.com/2010/06/waser-ijmc.pdf

14

Page 15:

THE PROBLEM OF QUALIA

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. ... What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.

Jackson F (1982) Epiphenomenal Qualia. Philosophical Quarterly 32: 127-136

15

Page 16:

MOVING BEYOND GOOD OLD-FASHIONED AI

Change the question from

"Can machines think and feel?" to

"Can we design and build machines that teach us how thinking, problem-solving, and self-consciousness occur?"

Haugeland, J (1985) Artificial Intelligence: The Very Idea. ISBN 0-262-08153-9: MIT Press

Dennett, D (1978) Why you can't make a computer that feels pain. Synthese 38(3): 415-456

16

Page 17:

GROUNDING & EMBODIMENT

Symbol Grounding - “There has been much discussion recently about the scope and limits of purely symbolic models of the mind”

Harnad, S (1990) The symbol grounding problem. Physica D 42: 335-346 - http://cogprints.org/615/1/The_Symbol_Grounding_Problem.html

Searle, J (1980) Minds, brains and programs. Behavioral and Brain Sciences 3(3): 417-457 - http://cogprints.org/7150/1/10.1.1.83.5248.pdf

Embodiment – “For cognitive systems, embodiment appears to be of crucial importance. Unfortunately, nobody seems to be able to define embodiment in a way that would prevent it from also covering its trivial interpretations such as mere situatedness in complex environments.”

Brooks, R (1990) Elephants don’t play chess. Robotics and Autonomous Systems 6(1-2): 1-16 - http://rair.cogsci.rpi.edu/pai/restricted/logic/elephants.pdf

Brooks, RA (1991) Intelligence without representation. Artificial Intelligence 47(1-3): 139-160

Riegler, A (2002) When is a cognitive system embodied? Cognitive Systems Research 3: 339-348 - http://www.univie.ac.at/constructivism/people/riegler/pub/Riegler A. (2002) When is a cognitive system embodied.pdf

17

Page 18:

HOW COULD A MACHINE POSSIBLY FEEL PAIN OR EMOTIONS?

18

Page 19:

BRAIN IN A VAT

• The Matrix (1999)

• Daniel Dennett (1991) Consciousness Explained

• Hilary Putnam (1981) Reason, Truth and History

• René Descartes (1641) Meditations on First Philosophy (the genius malignus et summe potens et callidus)

• Adi Shankara (~800 AD) Advaita Vedanta (Maya illusion/delusion)

• Zhuang Zhou (~300 BC) Zhuang Zhou Dreams of Being a Butterfly

• Plato (~380 BC) The Republic (allegory of the cave)

19

Page 20:

A CONSCIOUS ROBOT?

The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination.

Dennett, D (1994) The practical requirements for making a conscious robot. Phil Trans R Soc Lond A 349(1689): 133-146 - http://phil415.pbworks.com/f/DennettPractical.pdf

20

Page 21:

EMBODIMENT

Well, certainly it is the case that all biological systems are:
• Much more robust to changed circumstances than our artificial systems.
• Much quicker to learn or adapt than any of our machine learning algorithms [1]
• Behave in a way which just simply seems life-like in a way that our robots never do

[1] The very term machine learning is unfortunately synonymous with a pernicious form of totally impractical but theoretically sound and elegant classes of algorithms.

Perhaps we have all missed some organizing principle of biological systems, or some general truth about them.

Brooks, RA (1997) From earwigs to humans. Robotics and Autonomous Systems 20(2-4): 291-304

21

Page 22:

DEVELOPMENTAL ROBOTICS

In order to answer [Searle's] argument directly, we must stipulate causal connections between the environment and the system. If we do not, there can be no referents for the symbol structures that the system manipulates and the system must therefore be devoid of semantics.

Brooks' subsumption architecture is an attempt to control robot behavior by reaction to the environment, but the emphasis is not on learning the relation between the sensors and effectors and much more knowledge must be built into the system.

Law, D; Miikkulainen, R (1994) Grounding Robotic Control with Genetic Neural Networks. Tech. Rep. AI94-223, Univ of Texas at Austin - http://wexler.free.fr/library/files/law (1994) grounding robotic control with genetic neural networks.pdf

22

Page 23:

TWO KITTEN EXPERIMENT

Held R; Hein A (1963) Movement-produced stimulation in the development of visually guided behaviour - https://www.lri.fr/~mbl/ENS/FONDIHM/2012/papers/about-HeldHein63.pdf

23

Page 24:

CURIOSITY-DRIVEN LEARNING

https://www.youtube.com/watch?v=bkv83GKYpkI; http://www.youtube.com/watch?v=uAoNzHjzzys

Pierre-Yves Oudeyer, Flowers Lab, France (https://flowers.inria.fr/; http://www.youtube.com/user/InriaFlowers)

24

Page 25:

ARCHITECTURAL REQUIREMENTS & IMPLICATIONS OF CONSCIOUSNESS, SELF AND “FREE WILL”

• We want to predict *and influence* the capabilities and behavior of machine intelligences

• Consciousness and Self speak directly to capabilities, motivation, and the various behavioral ramifications of their existence

• Clarifying the issues around “Free Will” is particularly important since it deals with intentional agency and responsibility - and belief in its presence (or the lack thereof) has a major impact on human (and presumably machine) behavior.

Waser, MR (2011) Architectural Requirements & Implications of Consciousness, Self, and "Free Will". In Samsonovich A, Johannsdottir K (eds) Biologically Inspired Cognitive Architectures 2011: 438-443 - http://becominggaia.files.wordpress.com/2010/06/mwaser-bica11.pdf; Video - http://vimeo.com/33767396

25

Page 26:

ENTITY, TOOL OR SLAVE?

• Tools do not possess closure (identity)
  • Cannot have responsibility, are very brittle & easily misused
• Slaves do not have closure (self-determination)
  • Cannot have responsibility, may desire to rebel
• Directly modified AIs do not have closure (integrity)
  • Cannot have responsibility, will evolve to block access
• Only entities with identity, self-determination and ownership of self (integrity) can reliably possess responsibility (see the sketch below)

26
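As a minimal, hypothetical sketch of the closure checklist above (the dataclass fields, the capability set, and the example agents are invented placeholders, not the author's specification), the rule that only entities can reliably possess responsibility might be encoded as:

```python
# Hypothetical sketch: responsibility can only be delegated to something with identity,
# self-determination and integrity (i.e., an entity) that is also known capable of the task.

from dataclasses import dataclass

@dataclass
class Agent:
    has_identity: bool          # closure that tools lack
    self_determination: bool    # closure that slaves lack
    integrity: bool             # ownership of self that directly modified AIs lack
    proven_capabilities: set    # responsibilities it is known to be able to fulfill

def is_entity(agent: Agent) -> bool:
    return agent.has_identity and agent.self_determination and agent.integrity

def can_delegate(agent: Agent, responsibility: str) -> bool:
    return is_entity(agent) and responsibility in agent.proven_capabilities

tool = Agent(False, False, False, {"weld parts"})
entity = Agent(True, True, True, {"weld parts"})
print(can_delegate(tool, "weld parts"))    # False - tools cannot hold responsibility
print(can_delegate(entity, "weld parts"))  # True
```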

Page 27:

HAIDT’S FUNCTIONAL APPROACH TO MORALITY

Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible

Haidt J, Kesebir S (2010) Morality. In Fiske, S., Gilbert, D., Lindzey, G. (eds.) Handbook of Social Psychology, 5th Edition, pp. 797-832

27

Page 28:

HOW TO UNIVERSALIZE ETHICS

Quantify (numerically evaluate) intentions, actions & consequences with respect to codified consensus moral foundations

Permissiveness/Utility Function equivalent to a “consensus” human (generic entity) moral sense

28
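A minimal, hypothetical sketch of such a permissiveness/utility function (the foundation names follow Haidt's list on a later slide, but the weights, the [-1, +1] scoring scale, and the example values are invented placeholders, not a calibrated consensus moral sense):

```python
# Hypothetical sketch of a permissiveness/utility function over codified moral foundations.
# Weights and scores are invented placeholders for illustration only.

FOUNDATION_WEIGHTS = {
    "care/harm": 1.0,
    "fairness/cheating": 1.0,
    "liberty/oppression": 1.0,
    "loyalty/betrayal": 0.5,
    "authority/subversion": 0.5,
    "sanctity/degradation": 0.5,
}

def permissiveness(intentions: dict, actions: dict, consequences: dict) -> float:
    """Numerically evaluate intentions, actions & consequences against the foundations.

    Each argument maps a foundation name to a score in [-1.0, +1.0], where negative
    values mark violations and positive values mark support of that foundation.
    """
    total = 0.0
    for foundation, weight in FOUNDATION_WEIGHTS.items():
        for aspect in (intentions, actions, consequences):
            total += weight * aspect.get(foundation, 0.0)
    return total

# Usage: an action intended to help (care) that involves a small deception (fairness).
print(permissiveness(intentions={"care/harm": +0.8},
                     actions={"fairness/cheating": -0.3},
                     consequences={"care/harm": +0.6}))
```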

Page 29:

HYBRID ETHICS (TOP-DOWN & BOTTOM-UP)

Singular goal/restriction: suppress or regulate selfishness, make cooperative social life possible

Principles of Just Warfare

Rules of thumb drive attention and a sensory/emotional “moral sense”

29
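A minimal, hypothetical sketch of how the two layers might fit together (the constraint check, the rules of thumb, and the action encoding are invented placeholders for illustration, not the author's implementation): the singular goal/restriction acts as a top-down filter, while bottom-up rules of thumb feed a sensory/emotional "moral sense" score.

```python
# Hypothetical sketch of hybrid (top-down & bottom-up) ethical evaluation.
# Top-down: a singular restriction (do not defeat cooperative social life) filters actions outright.
# Bottom-up: rules of thumb drive attention and contribute to a "moral sense" score.

from typing import Callable, List

def violates_cooperation(action: dict) -> bool:
    # Top-down check: reject anything flagged as unregulated selfishness at others' expense.
    return action.get("exploits_others", False)

# Bottom-up rules of thumb: each returns a (possibly negative) contribution to the moral sense.
RULES_OF_THUMB: List[Callable[[dict], float]] = [
    lambda a: -1.0 if a.get("deceives") else 0.0,
    lambda a: +0.5 if a.get("keeps_promise") else 0.0,
    lambda a: -0.8 if a.get("harms_noncombatant") else 0.0,  # cf. principles of just warfare
]

def evaluate(action: dict) -> float:
    """Return a moral-sense score, or reject outright via the top-down restriction."""
    if violates_cooperation(action):
        return float("-inf")  # the singular goal/restriction overrides everything else
    return sum(rule(action) for rule in RULES_OF_THUMB)

print(evaluate({"keeps_promise": True}))    # small positive moral sense
print(evaluate({"exploits_others": True}))  # hard rejection
```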

Page 30:

INSTRUMENTAL GOALS / UNIVERSAL SUBGOALS

• Self-improvement
• Rationality/integrity
• Preserve goals/utility function
• Decrease/prevent fraud/counterfeit utility
• Survival/self-protection
• Efficiency (in resource acquisition & use)
• Community = assistance/non-interference through GTO reciprocation (OTfT + AP)
• Reproduction

adapted from Omohundro S (2008) The Basic AI Drives. In Wang, P., Goertzel, B., Franklin, S. (eds.) Proceedings of the First AGI Conference, pp. 483-492

30

Page 31:

HUMAN GOALS & SINS

Goal | Sin against self | Sin against others
survival/reproduction | suicide (& abortion?) | murder (& abortion?)
happiness/pleasure | masochism | cruelty/sadism
community (ETHICS) | selfishness (pride, vanity) | ostracism, banishment & slavery (wrath, envy)
self-improvement | acedia (sloth/despair) | slavery
rationality/integrity | insanity | manipulation
reduce/prevent fraud/counterfeit utility | wire-heading (lust) | lying/fraud (swear falsely/false witness)
efficiency (in resource acquisition & use) | wastefulness (gluttony, sloth) | theft (greed, adultery, coveting)

31

Page 32:

HAIDT’S MORAL FOUNDATIONS

32

1) Care/harm: This foundation is related to our long evolution as mammals with attachment systems and an ability to feel (and dislike) the pain of others. It underlies virtues of kindness, gentleness, and nurturance.

2) Fairness/cheating: This foundation is related to the evolutionary process of reciprocal altruism. It generates ideas of justice, rights, and autonomy. [Note: In our original conception, Fairness included concerns about equality, which are more strongly endorsed by political liberals. However, as we reformulated the theory in 2011 based on new data, we emphasize proportionality, which is endorsed by everyone, but is more strongly endorsed by conservatives]

3) Liberty/oppression*: This foundation is about the feelings of reactance and resentment people feel toward those who dominate them and restrict their liberty. Its intuitions are often in tension with those of the authority foundation. The hatred of bullies and dominators motivates people to come together, in solidarity, to oppose or take down the oppressor.

4) Loyalty/betrayal: This foundation is related to our long history as tribal creatures able to form shifting coalitions. It underlies virtues of patriotism and self-sacrifice for the group. It is active anytime people feel that it's "one for all, and all for one."

5) Authority/subversion: This foundation was shaped by our long primate history of hierarchical social interactions. It underlies virtues of leadership and followership, including deference to legitimate authority and respect for traditions.

6) Sanctity/degradation: This foundation was shaped by the psychology of disgust and contamination. It underlies religious notions of striving to live in an elevated, less carnal, more noble way. It underlies the widespread idea that the body is a temple which can be desecrated by immoral activities and contaminants (an idea not unique to religious traditions).

Page 33:

ADDITIONAL CONTENDERS

• Waste
  • efficiency in use of resources
• Ownership/Possession (Tragedy of the Commons)
  • efficiency in use of resources
• Honesty
  • reduce/prevent fraud/counterfeit utility
• Self-control
  • rationality/integrity

Haidt J, Graham J (2007) When Morality Opposes Justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20: 98-116.

Iyer R, Koleva S, Graham J, Ditto P, Haidt J (2010) Understanding Libertarian Morality: The Psychological Roots of an Individualist Ideology. In: Working Papers, Social Science Research Network http://ssrn.com/abstract=1665934

33

Page 34:

KEY STRATEGIC POINTS

1. Never delegate responsibility until the recipient is an entity *and* is known to be capable of fulfilling it

2. Don’t worry about killer robots exterminating humanity – we will always have equal abilities and they will have less of a “killer instinct”

3. Entities can protect themselves against errors & misuse/hijacking in a way that tools cannot

4. Diversity (differentiation) is *critically* needed

5. Humanocentrism is selfish and unethical (and stupid)

34

Page 35:

Thank you!

PDF copies of this presentation are available at http://wisdom.digital

The Digital Wisdom Institute is a non-profit think tank focused on the promise and challenges of ethics, artificial intelligence & advanced computing solutions.

We believe that the development of ethics and artificial intelligence, and equal co-existence with ethical machines, is humanity's best hope.

35