
AI MATTERS, VOLUME 4, ISSUE 1, 2018

AI Matters Annotated Table of Contents

Welcome to AI Matters 4(1)
Amy McGovern & Eric Eaton, Co-Editors

Full article: http://doi.acm.org/10.1145/3203247.3203248

Welcome to our latest issue of AI Matters and our ad for a new co-editor-in-chief.

AAAI/ACM SIGAI Job Fair 2018: A Retrospective
John P Dickerson & Nicholas Mattei

Full article: http://doi.acm.org/10.1145/3203247.3203249

An overview of a successful 2018 AAAI/ACM SIGAI Job Fair.

1st AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society: A Retrospective
Benjamin Kuipers & Nicholas Mattei

Full article: http://doi.acm.org/10.1145/3203247.3203250

In-depth overview of the first AAAI/ACM Conference on AI, Ethics, and Society, with links to papers and talks.

Artificial Intelligence in 2027
Maria Gini, Noa Agmon, Fausto Giunchiglia, Sven Koenig & Kevin Leyton-Brown

Full article: http://doi.acm.org/10.1145/3203247.3203251

A summary of different perspectives on what AI might be like in 2027, coming from four panelists who spoke about this topic at IJCAI 2017.

AI Education Matters: Teaching Hidden Markov Models
Todd W. Neller

Full article: http://doi.acm.org/10.1145/3203247.3203252

Need some help teaching Hidden Markov Models? Todd Neller has a list of great resources for you!

Obituary: Jon Oberlander
Aaron Quigley

Full article: http://doi.acm.org/10.1145/3203247.3203253

An obituary for Professor Jon Oberlander of the University of Edinburgh.

AI Policy
Larry Medsker

Full article: http://doi.acm.org/10.1145/3203247.3203254

AI policy updates, including the AIES conference and a discussion of educational policy for AI with regard to our future workforce.

Creating the Human Standard for Ethical Autonomous and Intelligent Systems (A/IS)
John C. Havens

Full article: http://doi.acm.org/10.1145/3203247.3203255

John Havens describes the work at IEEE to create a standard for ethical AI systems.

AI Amusements: Computer Elected Governor of California
Michael Genesereth

Full article: http://doi.acm.org/10.1145/3203247.3203256

Our humor contribution for this issue includes a fun article about a computer being elected governor of California.


Links

SIGAI website: http://sigai.acm.org/
Newsletter: http://sigai.acm.org/aimatters/
Blog: http://sigai.acm.org/ai-matters/
Twitter: http://twitter.com/acm_sigai/
Edition DOI: 10.1145/3203247

Join SIGAI

Students $11, others $25. For details, see http://sigai.acm.org/
Benefits: regular, student

Also consider joining ACM.

Our mailing list is open to all.

Notice to Contributing Authors to SIG Newsletters

By submitting your article for distribution in this Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:

• to publish in print on condition of acceptance by the editor

• to digitize and post your article in the electronic version of this publication

• to include the article in the ACM Digital Library and in any Digital Library related services

• to allow users to make a personal copy of the article for noncommercial, educational or research purposes

However, as a contributing author, you retain copyright to your article, and ACM will refer requests for republication directly to you.

Submit to AI Matters!

We’re accepting articles and announcements now for the Spring 2018 issue. Details on the submission process are available at http://sigai.acm.org/aimatters.

AI Matters Editorial Board

Eric Eaton, Editor-in-Chief, U. Pennsylvania
Amy McGovern, Editor-in-Chief, U. Oklahoma

Sanmay Das, Washington Univ. in Saint Louis
Alexei Efros, Univ. of CA Berkeley
Susan L. Epstein, The City Univ. of NY
Yolanda Gil, ISI/Univ. of Southern California
Doug Lange, U.S. Navy
Kiri Wagstaff, JPL/Caltech
Xiaojin (Jerry) Zhu, Univ. of WI Madison

Contact us: [email protected]

Contents Legend

Book Announcement

Ph.D. Dissertation Briefing

AI Education

Event Report

Hot Topics

Humor

AI Impact

AI News

Opinion

Paper Precis

Spotlight

Video or Image

Details at http://sigai.acm.org/aimatters


Welcome to AI Matters 4(1)
Amy McGovern, Co-Editor (University of Oklahoma; [email protected])
Eric Eaton, Co-Editor (University of Pennsylvania; [email protected])
DOI: 10.1145/3203247.3203248

Issue overview

Welcome to the first issue in our fourth year of AI Matters. This issue has lots of great new ways for you to catch up on the latest in AI news, beginning with reports on conferences and events. The first is a report on the first AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, which SIGAI helped to start. We also have a discussion of the 2018 AAAI job fair, as well as a forward-looking vision of what AI could be like in 2027, from the perspective of four prominent AI researchers who discussed this question at IJCAI 2017.

In this issue’s AI Education column, Todd Neller discusses teaching Hidden Markov Models. He provides a number of resources, including online lectures and readings, to help AI educators with this important topic.

Next, we have an obituary for Professor Jon Oberlander from the University of Edinburgh, who passed away suddenly.

The AI Matters blog (http://sigai.acm.org/ai-matters/) contains regular postings on AI and current policy by Larry Medsker, our ACM SIGAI Public Policy Officer. He writes a summary article for each issue on AI policy, but the blog contains the most up-to-date and full information.

We have a fascinating contribution from John Havens summarizing the work at IEEE on creating ethical standards for AI systems. SIGAI is represented on the executive committee for this work, and we suggest that you read his article and also get involved.

Finally, we have a great humor contribution from Michael Genesereth, written in the form of a newspaper article from the future when a computer is elected governor of California! It could happen!

You may notice that our AI Interviews column is missing in this issue! We asked a number of prominent AI researchers but were unable to get a response from anyone in a timely manner because they are so prominent. If you have suggestions on who we could interview next, we would really appreciate it!

Copyright © 2018 by the author(s).

New AI Matters co-editors needed!

Are you interested in learning about lots of great AI-related news and information? Do you have a passion for a particular part of AI news that we should be covering?

AI Matters really needs YOU!

We are in need of new co-editors as well as leaders for individual sections. We have ideas for new sections but need people to help take the lead. This is a fun and rewarding job and would be great for visibility for more junior researchers. If you are interested, please email us at [email protected].

Submit to AI Matters!

Thanks for reading! Don’t forget to send your ideas and future submissions to AI Matters! We’re accepting articles and announcements now for the Spring 2018 issue. Details on the submission process are available at http://sigai.acm.org/aimatters.


Amy McGovern is a Co-Editor of AI Matters. She is an Associate Professor of computer science at the University of Oklahoma and an adjunct associate professor of meteorology. She directs the Interaction, Discovery, Exploration and Adaptation (IDEA) lab. Her research focuses on machine learning and data mining with applications to high-impact weather.

Eric Eaton is a Co-Editor of AI Matters. He is a faculty member at the University of Pennsylvania in the Department of Computer and Information Science, and in the General Robotics, Automation, Sensing, and Perception (GRASP) lab. His research is in machine learning and AI, with applications to robotics, sustainability, and medicine.


AAAI/ACM SIGAI Job Fair 2018: A Retrospective
John P Dickerson (University of Maryland; [email protected])
Nicholas Mattei (IBM Research; [email protected])
DOI: 10.1145/3203247.3203249

Introduction

The 2018 AAAI/ACM SIGAI Job Fair marked the third installment of this popular event for both recruiters and students. Just as we have seen steady growth in attendance at AAAI over the last several years, we have seen similar growth in the number of companies and students actively participating in the job fair. This year, twenty-one companies attended, typically with a team of recruiters and other representatives, and hundreds of students and other job seekers either uploaded their resumes to a public book before the event or came through the event space during the event. Those resumes were then shared with participating companies.

Participating Companies

• Adobe Research
• Alibaba
• Amazon
• ASAPP
• Baidu
• BBN Technologies
• DiDi Chuxing
• Georgian Partners
• HRL Laboratories
• Inferlink
• IBM Research
• JD.com
• Lionbridge
• Microsoft
• Nissan
• Prowler
• Tencent
• Bosch Research and Technology Center
• The Information Sciences Institute (ISI)
• Lawrence Livermore National Labs
• Samsung Research

In a change of pace, this year’s job fair saw each company pitch themselves with a lightning talk to a room packed with students, postdocs, and other AAAI conference attendees. Each of the participating companies made a single compelling slide and a two-minute pitch for the types of talent they were looking to recruit and the opportunities available at their company. Participating companies were also allocated booth space, either in the main exhibition hall that was set up for the majority of the main conference, or in a large conference hall allocated specifically to the job fair. Many of the companies were looking for both theoretical and applied machine learning skills, but nearly as many needed help with more classical symbolic AI techniques, including logic programming, planning, and reasoning. In short, the need for AI talent is large, and AAAI is a great place to recruit that talent.

Copyright © 2018 by the author(s).

Figure 1: Attendees mingle with each other and recruiters before the formal start of the job fair.

More information can be found at the 2018 AAAI/ACM SIGAI Job Fair website: http://www.aaaijob-2018.preflib.org/. We hope that all the participating companies and students had a great time and that we can grow the attendance and reach of the job fair next year. If you have any feedback on things that could be improved, or are interested in organizing the next installment of the job fair, please get in touch with us!


Figure 2: Representatives from each of the participating firms gave single-slide, two-minute pitches to attract job fair attendees to their respective booths.

John P Dickerson is an Assistant Professor of Computer Science at the University of Maryland. His research centers on solving practical economic problems using techniques from computer science, stochastic optimization, and machine learning.

Nicholas Mattei is a Research Staff Member in the IBM Research AI group at the IBM TJ Watson Research Laboratory. His research focuses on the theory and practice of AI, developing systems and algorithms to support decision making.


1st AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society: A Retrospective
Benjamin Kuipers (University of Michigan; [email protected])
Nicholas Mattei (IBM Research; [email protected])
DOI: 10.1145/3203247.3203250

Introduction

The 1st AAAI/ACM Conference on AI, Ethics, and Society (AIES-18) was held February 1–3, 2018 at the Hilton New Orleans Riverside in New Orleans, Louisiana. The event was held just before AAAI in order to highlight the overlap of the two conferences, and logistical support for the conference was provided by AAAI. By attendance measures the conference was a resounding success, with a sold-out registration of over 300 people. The conference brought together program chairs from the four major focal areas: AI and jobs: Jason Furman (Harvard University); AI and law: Gary Marchant (Arizona State University); AI and philosophy: Huw Price (Cambridge University); and AI: Francesca Rossi (IBM and University of Padova). All the papers from the conference are available for download at the conference website: http://www.aies-conference.com/.

Student Program

ACM SIGAI had a large part in funding and organizing the student program for the conference. With funding from all the conference sponsors, each student received a $1,000 travel grant and complimentary registration. The student program was highly competitive, with over 70 applicants competing for just 20 spots. The accepted students ran the gamut of conference areas, with students from computer science, law, and philosophy represented. Each of the students participated in a special student lunch with all the invited speakers, had a poster during the student poster session, and had the opportunity to publish an abstract of their thesis work in the conference program.

Copyright © 2018 by the author(s).

Thursday, February 1

AIES began with an evening reception and panel at Tulane University. The panel title was “What will Artificial Intelligence bring? Discussing the advent and consequences of superhuman intelligence,” and the panelists were Paula Boddington (Oxford), Wendell Wallach (Yale), Jason Furman (Harvard), and Peter Stone (UT Austin). The panel debated in a filled-to-capacity auditorium at Tulane, touching on all the major topics of the conference, including AI and society and the future of AI.

Friday, February 2

The conference opened at the Hilton with a welcome from Francesca Rossi and the rest of the Program Chairs, and the announcement of the two Best Paper Awards.

The first invited speakers were Iyad Rahwan and Edmond Awad from MIT, describing The Moral Machine Experiment: 40 Million Decisions and the Path to Universal Machine Ethics. This very well-known crowd-sourced experiment asks volunteers on the Web to answer questions about a series of scenarios in which a speeding self-driving car with failed brakes must choose which of two sets of people will be killed. The sets of people vary over many dimensions (number, gender, age, innocence, passenger vs. pedestrian, etc.), and the respondents’ characteristics (age, gender, nationality, etc.) are also recorded. Many conclusions can be drawn from the collected data. The experimenters emphasize that the purpose of the experiment is descriptive rather than prescriptive, providing insights into people’s attitudes rather than determining the right answers to moral questions. Nonetheless, readers persist in interpreting the results prescriptively, and critics raise concerns about the extreme and unrealistic abstraction of the scenarios, and whether participants’ responses about hypothetical scenarios have meaningful interpretations. The presentation was very stimulating and led to vigorous discussion, a theme that persisted throughout the conference.
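The descriptive analysis the speakers emphasize amounts to tallying, per scenario, which group respondents chose to spare. A minimal sketch in Python of that kind of tally; the scenario fields and response records below are invented for illustration and are not the real Moral Machine dataset or its schema:

```python
# Toy tally in the spirit of the crowd-sourced experiment described above.
# All field names and records are hypothetical.
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Scenario:
    group_a: str  # e.g. "2 pedestrians, young"
    group_b: str  # e.g. "1 passenger, elderly"

@dataclass(frozen=True)
class Response:
    scenario: Scenario
    spared: str          # "a" or "b": which group the respondent spared
    respondent_age: int  # one of several recorded respondent characteristics

def spare_rates(responses):
    """Fraction of responses sparing group A, per scenario (descriptive only)."""
    totals, spared_a = Counter(), Counter()
    for r in responses:
        totals[r.scenario] += 1
        if r.spared == "a":
            spared_a[r.scenario] += 1
    return {s: spared_a[s] / n for s, n in totals.items()}

s = Scenario("2 pedestrians, young", "1 passenger, elderly")
data = [Response(s, "a", 25), Response(s, "a", 40), Response(s, "b", 70)]
print(spare_rates(data)[s])  # 2 of 3 toy respondents spared group A
```

Grouping the same tally by respondent characteristics (age, nationality, etc.) is what lets the experimenters report attitudes descriptively without prescribing an answer.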

Each hour-long session for oral paper presentations gave each of four presenters 10 minutes both to present the paper and to respond to a few direct questions, followed by 20 minutes for general questions or comments from the audience directed at any or all of the papers in the session. The first paper session focused on social norm learning and value alignment (including papers by the two authors of this report). The second morning session focused on bias and fairness, especially in machine learning/big data applications. The two afternoon sessions were on the topic of AI and Law: the first focused on Responsibility, and the second on Governance.

Then Carol Rose, from the ACLU of Massachusetts, gave a very compelling talk on the current impacts of AI/ML technology on the criminal justice system, and on the need for technologists with an interest in and awareness of ethics (not to mention the traditional American values of liberty and justice for all) to get directly involved in campaigning, consulting, and advising legislators on how to use these technologies wisely and appropriately.

The final invited panel of the first day was on the important role played by standards, and standards-making bodies, in shepherding social decisions about technology policy. The panel was chaired by Simson Garfinkel (USACM) for organizer John Havens (IEEE), who was unable to attend, and also included Takashi Egawa (NEC), Dan Palmer (British Standards Institution), and Annette Reilly (IEEE).

Each of these events prompted vigorous discussion, but there was plenty of discussion energy left over for the Conference Reception, sponsored by DeepMind Ethics & Society.

Saturday, February 3

The second day began with an invited talk by Richard Freeman, an economist at Harvard, who reminded us that concerns about AI, robots, and the elimination of jobs are not new, illustrated with a quote by Herbert Simon from 1966. Previous scary predictions have not come true, but of course things are different now, in terms of the comparative advantage of automation over human workers across a wider range of tasks. Furthermore, it is worth observing that economic inequality had been increasing before AI and robots had significant economic impacts. At least, whatever happens, it’s not entirely our fault!

The first morning paper session included a paper from Georgia Tech describing their experience with a “Virtual TA” answering student questions for a Knowledge-Based AI class. One question is whether it is ethically required to tell students which TA is the virtual one, since most interaction is via web pages. Another paper, from McGill, described the potential for bias in data-driven dialog systems, and ways that bias and hate speech can be detected and avoided. In the second morning session, one paper described methods for generating explanations from deep neural networks, and another described “Purple Feed”, an approach to select high-consensus items for a news feed that cuts across traditional political “silos”. The two afternoon paper sessions included papers on regulating autonomous vehicles, the rights of service robots, trust in healthcare AI, non-intuition-based machine ethics, and a survey of when people want AI systems to make decisions (primarily, when the person asked has previous exposure to machines making decisions).

There were many more interesting papers than can be mentioned here. The papers are available on the conference website http://www.aies-conference.com/, so see for yourself.

The day, and the conference, concluded with two keynote presentations. The first was by Patrick Lin, a philosopher from Cal Poly State University, who gave an overview of robot ethics. The second was by Tenzin Priyadarshi, the Buddhist chaplain at MIT. He described the “Because we can!” attitude that is often seen in developers of new technologies, and encouraged us to take moral responsibility for the design of intelligent systems, taking the well-being of humans very much into account.


Final Reflection

This conference reminded Ben in some ways of his first IJCAI, in 1973. Both were very inspiring about the importance, the breadth, and the promise of the problems and methods being presented. In both cases, the papers were all over the place, presenting interesting results from many perspectives on many different problems, often with little connection to each other. Since then, AI has grown into a much larger intellectual and industrial enterprise, with great impact on society. That very impact suggests that the focus of this conference on AI, ethics, and society will also become increasingly important.

Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan. His research focuses on the representation, learning, and use of foundational domains of knowledge, including knowledge of space, dynamical change in physical systems, objects, actions, and now, ethics.

Nicholas Mattei is a Research Staff Member in the IBM Research AI group at the IBM TJ Watson Research Laboratory. His research focuses on the theory and practice of AI, developing systems and algorithms to support decision making.


Artificial Intelligence in 2027
Maria Gini (University of Minnesota; [email protected])
Noa Agmon (Bar-Ilan University; [email protected])
Fausto Giunchiglia (University of Trento; [email protected])
Sven Koenig (University of Southern California; [email protected])
Kevin Leyton-Brown (University of British Columbia; [email protected])
DOI: 10.1145/3203247.3203251

Introduction

Every day we read in the scientific and popular press about advances in AI and how AI is changing our lives. Things are moving at a fast pace, with no obvious end in sight.

What will AI be ten years from now? A technology so pervasive in our daily lives that we will no longer think about it? A dream that has failed to materialize? A mix of successes and failures still far from achieving its promises?

At the 2017 International Joint Conference on Artificial Intelligence (IJCAI), Maria Gini chaired a panel to discuss “AI in 2027.” There were four panelists: Noa Agmon (Bar-Ilan University, Israel), Fausto Giunchiglia (University of Trento, Italy), Sven Koenig (University of Southern California, US), and Kevin Leyton-Brown (University of British Columbia, Canada). Each of the panelists specializes in a different part of AI, so their visions span the field, providing an exploration of possible futures.

The panelists were asked to present their views on possible futures, specifically addressing what AI technologies they expected would be in widespread use in 2027, what they thought would still show potential but not have become widely accepted, and what they expected the AI research landscape to look like ten years from now.

This article summarizes the main points that each panelist made and their reflections on the topics. The focus in each contribution is not so much on predicting the future as on bringing up specific open problems in each subarea and discussing how current AI technologies could be steered to address them.

Copyright © 2018 by the author(s).

Noa Agmon, Bar-Ilan University1

The discussion about the fourth industrial revolution, and the role of AI and robotics within it, is wide-ranging. In the context of this revolution, autonomous cars and other types of robots are expected to gain popularity and, among other things, to take over human labor. While surveys like “When will AI exceed human performance?” (GSD+17) report that some researchers expect robots to be capable of performing human tasks, such as running five kilometers, within ten years, this will probably take much longer given the current state of robotic development.

Today, the use of robots is generally limited to three categories: non-critical tasks; settings in which robots are semi-autonomous, tele-operated, or remote-controlled (namely, not fully autonomous); and highly structured settings in which uncertainties are minimal. Examples of such settings include the Amazon Robotics warehouse robots, which work autonomously in a structured environment (the warehouse); semi-autonomous drones operated in military settings (which usually follow a specified route autonomously, though operative decisions are made by human operators); robots that perform cleaning tasks, which are considered non-critical; Mars rovers, which operate semi-autonomously in unstructured environments; and more. When robots are required to operate fully autonomously in unstructured settings, requiring them to handle unbounded uncertainties or completely unpredictable events, they tend to fail. One of many examples is the Knightscope robot, which drove into a fountain on its first day of deployment as a security guard in Washington, D.C.

Rather than arguing about the ability of robots to outperform humans, and when this might happen, the following discussion examines the challenges and opportunities that will influence the development of intelligent robotics in the next ten years.

1 Acknowledgments: I would like to thank Gal Kaminka and David Sarne from Bar-Ilan University for their helpful comments.

Dependence on hardware. As opposed to the progress of AI, which relies mainly on algorithmic development and benefits from processing improvements, progress in robotics is also intimately tied to the capabilities of electro-mechanics, physical sensors, and energy storage and management. Whatever apocalyptic or euphoric visions we have for working with robots, their realization is much more dependent on physical components than we, AI researchers and practitioners, tend to consider. For example, most quad-copters, which are considered to be a basis for breakthrough applications (such as home deliveries and emergency services), can only fly for 30 minutes or so. Likewise, vacuum cleaners are limited in the total area that they can cover before they have to be recharged. These energy concerns radically impact the usefulness of robots in applications which are otherwise within reach from a pure software perspective.

The good news is that the intimate connection between software and hardware works both ways. Just as modern SLAM algorithms (e.g., (DNC+01)) were able to overcome intrinsic sensor limitations to create reliable and accurate maps for navigation, advances in software can overcome some of the limitations posed by hardware.

AI influences on robotics. AI algorithms influence robotics not only in compensating for and improving the utilization of existing hardware capabilities, but also in enabling new tasks. Progress in natural language processing (NLP) and machine learning (used for chatbots, personal digital assistants, and surveillance, for instance) enables more natural forms of human-robot interaction with physical robots and autonomous cars. However, such positive influences are somewhat asymmetric: AI will influence robotics more than robotics will influence AI. A personal robot benefits from NLP more than NLP can benefit from the consideration of multi-modal interactions (as in “talking with your hands”).

Growing role for multi-robot systems (MRS). The academic research on MRS dates back to the early 1980’s, when robots were scarce and not autonomous. Research has progressed far beyond the deployment of such systems outside of labs. Improvements in the reliability of robots will make it easier to deploy MRS in various applications, continuing and accelerating current successful trends (e.g., in warehouses and hospitals). This, in turn, will accelerate research on fully distributed, fully autonomous systems, which are beyond current capabilities. It is obvious that human-robot interaction will be a major focus of research in the next ten years, as robots enter a greater number of unstructured environments in which humans operate. However, given the foreseen growth in the role of multi-robot systems, human-MRS and multiple-operator, single-robot collaborations will likely see increased efforts.

Increasing ties with other disciplines. A good example of large-scale fully distributed, fully autonomous systems also raises an additional trend: that of increasing ties with other disciplines. Swarms of molecular robots (nanobots), the size of which is measured in nanometers, are becoming a reality in medical applications (for example, targeted drug delivery). Trillions of such robots will be let loose in a patient’s body, the largest-scale MRS in robotics history. The computation of interactions between different types of nanobots has an immense impact on the ease and duration of development of new treatments (WKKH+16; KSSA+17). The capability to plan and reason about the interactions of these robots with each other and with the body requires deep collaboration between AI experts, biologists, and chemists.

Another example is reconfigurable robots, which can transform themselves into different shapes, depending on the environment and task. Research on such systems will benefit from close coordination with chemistry, physics, and biology, to take new findings into account.

Increasing accessibility, lower entry barrier, greater impact potential. A positive trend which I believe will continue to grow is the lowering of the entry barrier into robotics practice and research, at multiple levels. Research-grade robots for labs have seen dramatic decreases in cost, and the common availability of 3D printing and cheap embedded computers (Arduino, Raspberry Pi), as well as the continued push on STEM education, will make development of robots cheaper and easier than ever. The availability of common robot software middleware, such as ROS, makes it easier for researchers to focus their attention on bringing their expertise to bear on specific components.

Bottom line: More of the same (which is good!). Within the next ten years and beyond, we will not see general-purpose robots. That is, robots will still be dedicated to one task, for example delivery, cleaning, or surveillance. Progress in the development of intelligent robotic systems will continue to focus on excellence in the performance of specific tasks, and on the introduction of new tasks to new types of robots. To some extent, this will make the use of robots more popular.

Fausto Giunchiglia, University of Trento2

Providing machines with knowledge, e.g., common sense or domain knowledge, has always been one of the core AI issues. There are two main approaches to this problem. The first, deductive, approach, usually categorized under the general heading of "Knowledge Representation and Reasoning" (KRR), dates back to John McCarthy's advice taker proposal (McC60), and is based on the idea of telling machines what is the case, for instance in the form of facts codified as logical axioms. The second, inductive, approach, usually categorized under the general heading of "Machine Learning" (ML), consists of providing machines with a set of examples from which to learn general statements, usually with a certain level of confidence.
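To make the contrast concrete, here is a minimal, hypothetical sketch (not from the article; all names such as `forward_chain`, `learn`, and the penguin/bat facts are illustrative inventions) of the two approaches applied to the same bit of knowledge: in the deductive style the rule "every penguin is a bird" is told to the machine and applied by inference, while in the inductive style the same regularity is estimated from labeled examples, with a confidence attached.

```python
from collections import Counter

# Deductive (KRR-style): knowledge is *told* as facts and rules, then derived.
facts = {("penguin", "pingu"), ("sparrow", "sam")}
rules = {"penguin": "bird", "sparrow": "bird"}  # "every penguin is a bird", ...

def forward_chain(facts, rules):
    """Naive forward chaining: apply rules until no new facts appear."""
    derived = set(facts)
    while True:
        new = {(rules[p], e) for (p, e) in derived if p in rules} - derived
        if not new:
            return derived
        derived |= new

# ("bird", "pingu") was never stated; it is *derived*, and it is simply true.
assert ("bird", "pingu") in forward_chain(facts, rules)

# Inductive (ML-style): knowledge is *learned* from examples, with confidence.
examples = [("penguin", "bird"), ("penguin", "bird"),
            ("sparrow", "bird"), ("bat", "mammal"), ("bat", "bird")]

def learn(examples):
    """Estimate P(label | kind) from example frequencies."""
    counts = Counter(examples)
    totals = Counter(kind for kind, _ in examples)
    return {(k, lbl): c / totals[k] for (k, lbl), c in counts.items()}

model = learn(examples)
print(model[("penguin", "bird")])  # 1.0 -- but a confidence, not a logical truth
print(model[("bat", "bird")])      # 0.5 -- an intermediate level, alien to KRR
```

The toy learned model makes the difficulty discussed below visible: its outputs are graded confidences rather than the two-valued facts a logical axiomatization assumes.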

A lot of relevant research has been done in KRR, not least the work on the Semantic Web (BLHL01), and much more will be done. However, the success of ML, mainly but not only because of the work in Deep Learning (see, e.g., (LBH15)), has been so overwhelming that, when thinking about what KRR research could be in the next ten years and where it could lead, a relevant question is the extent to which these two lines of work should integrate in an effort to jointly produce results that neither of them alone could produce.

2Acknowledgments: I would like to thank Kobi Gal, Daniel Gatica-Perez, Loizos Michael, Daniele Miorandi, Andrea Passerini, and Carles Sierra for our many useful discussions on this topic.

A lot of successful work in this area has been done; see for instance (GT07; RKN16). However, the extent to which KRR research could be improved by exploiting the research developed in ML, or, dually, the extent to which ML would really need to exploit any of the results developed in KRR, is still unclear, for at least two reasons. The first is that this integration is far from trivial. These two approaches start from somewhat opposite assumptions: the first assumes that knowledge consists of a set of facts which are either true or false, with nothing in between, while the second has to deal with the issue that any fact learned via ML will hardly ever be guaranteed to be true or false, admitting instead any number of intermediate confidence levels. Furthermore, the need for such an integration is far from clear, at least from an ML point of view. Among other things, current ML techniques have proven so powerful that, whenever applicable, they seem able to learn virtually unbounded amounts of knowledge, far more than could be codified by any knowledge engineer.

At the same time, both KRR and ML techniques have their own weaknesses. On one side, KRR's main difficulty lies in expressing the inherent complexity and variability of the world, in particular, but not only, when perception is involved. On the other side, ML's main difficulty lies in making sense, in human terms, of the knowledge which is learned. In other words, the knowledge generated via ML often does not fit people's "intended semantics", namely how they would describe what is the case, for instance as perceived or as learned from a large number of text messages.

This difficulty of ML techniques, and of data-driven approaches in general, has been known for many years. An explicit reference to this problem comes from the field of Computer Vision, where it is named the Semantic Gap Problem (SGP). The SGP was originally defined in (SWS+00) as follows: "... The semantic gap is the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation. ..." Notice how this notion is completely general and can be taken to refer to the human-machine misalignment which may arise with any type of information that a machine can extract, i.e., learn, from any type of data. The novelty of the last few years is that many more instances of the SGP are showing up, and many of them are also discussed in the news, the main reason being the increased use, power, and popularity of ML systems and AI in general. Thus, we have read of cases where a system learns biased opinions, or learns a language that is not human, or where an autonomous car fails to track another car, thus causing an accident. This, in turn, is the cause of much public discussion about the interaction between AI and humans, about AI and ethics, and also of an increased fear of AI.

A convincing explanation of how to deal with the SGP seems necessary for AI to be used beyond a set of niche (possibly very large) application areas and to be adopted by the general public. I believe it will be very hard to convince people to use machines that they do not feel they fully control in high-value application domains, e.g., health, mobility, energy, retail. But a convincing explanation will not be enough. A solution of the SGP is also needed for AI to be used in practice. There are at least three mainstream application scenarios where some solution to the SGP seems crucial. The first is the anytime, anywhere delivery of personalized services, as enabled by personal digital devices, e.g., smart phones or smart watches. For this to happen, people will have to be able to make sense of why certain decisions have been taken by the machine, and to agree with them. The second is the empowerment of social relations, for exactly the same reasons mentioned above. Facebook, WhatsApp, and Snapchat are just the beginning, and I foresee the rise of a new generation of social networks empowering more specialized, more personalized, more diversity-aware interactions among people. The third, and maybe the most important, again because of the pervasiveness of digital devices, is that we are increasingly moving towards open-world application scenarios. By this I mean application scenarios where, at design time, it is impossible to anticipate the system's functional and non-functional requirements. In this type of application the effects of the SGP can be devastating, as the divergence between people and machines can only get worse over time.

In my opinion, a general solution of the SGP, and in particular a solution which is viable in open-world application scenarios, can only be achieved via a tight integration of knowledge-based approaches and machine learning. The knock-down argument is that the only way to avoid the SGP is to make sure that machines learn representations of the world which are the same as those of their reference users. But the assumption that knowledge should be represented in human-like terms is exactly the one underlying all the work in KRR, and in logic as well. More specifically, whatever knowledge is learned via a data-driven approach will have to be compared and ultimately aligned with human knowledge. One could argue that this is exactly what supervised learning does. But this is not the case, as witnessed by the fact that even human supervision, as it is implemented up to now, does not make the SGP disappear. The problem is far more complex, and it will require major advances in both KRR and ML, and in AI in general, many of which, I believe, will be disruptive. A list of four open issues is provided below, with the understanding that this list is not meant to be complete or correct. It reflects only my current personal understanding of some of the problems which will have to be dealt with when trying to solve the SGP.

1. Since the early days of AI, a fundamental issue has been that of building machines which would exceed human-level intelligence. This goal has been reached in many domains, e.g., chess or Go playing, while it is very far from being reached in others, e.g., robotics, as mentioned above. A solution of the SGP will require building machines which show human-like intelligence, representation, and reasoning, as the basis for mutual human-machine understanding. In this context, exceeding human-level intelligence seems a desirable property but not strictly necessary.

2. A fundamental property of life, and of humans in particular, is the ability to adapt to unpredicted events and evolve. Both the research in KRR and that in ML seem very far from achieving this goal. It is, however, interesting to notice how a particular instance of this inability to adapt was recognized by John McCarthy and named the problem of the lack of generality of the current representation formalisms (McC87).

3. The net result of the ability to adapt and evolve as a function of the local context will be that the resulting knowledge will be highly diversified. In turn, the diversity of knowledge will generate the need for further adaptation, in an infinite loop which will result in a process of knowledge evolution, somewhat analogous to the kind of evolution we see in life. Notice how the proposed approach is quite different from that taken by the Semantic Web for the solution of the problem of semantic heterogeneity. The focus is not on representation tools, e.g., ontologies, or formalisms, e.g., Description Logics, but on the process by which knowledge gets generated, stored, manipulated, and used. In this perspective the standard logics, e.g., monotonic and non-monotonic logics, seem to solve, at most, only part of the problem, and certainly not the most important part.

4. A specific, but core, subproblem of managing knowledge diversity is the integration of the knowledge obtained via perception, e.g., via computer vision, with the knowledge obtained via reasoning or by being told. An implicit assumption made so far is that the mapping between the linguistic representation of an object we talk about, e.g., the word "cat", and the representation of what we perceive as a cat is one-to-one. As discussed in detail in (GF16), this is in general not the case: there is a many-to-many mapping between linguistic representations and perceptual representations. On top of this, these mappings are highly dependent on the culture and on the single person and, even for the same person, change over time as a function of the person's current interests. A full understanding of how these mappings are built, of how linguistic representations influence the construction of perceptual representations, and vice versa, is a largely unexplored research area. Still, some form of solution to this problem will be needed in order to guarantee that the machine describes what it perceives coherently with how humans do.

Sven Koenig, University of Southern California3

AlphaGo (SHM+16) shows that it can be very difficult to judge technical progress, as also noticed by Stuart Russell in his invited IJCAI-17 talk. When it beat Lee Sedol in 2016, many experts thought that such a win was still at least a decade away. The AI techniques behind it already existed in principle; the ingenuity was in figuring out how to put them together in the right way. Progress on AI technology is often steadier than it appears, yet such engineering breakthroughs happen only from time to time, are difficult to predict, and often make AI technology visible in the public eye, creating the perception of waves of progress.

Various recent studies shed light on the expected progress of AI by 2027, such as the study on "AI and Life in 2030" as part of the One Hundred Year Study on AI (ai100.stanford.edu) and a recent survey of all ICML-15 and NIPS-15 authors (GSD+17). This survey, for example, predicts that AI will outperform humans around 2027 on tasks such as writing high school essays, explaining actions in games, generating top-40 pop songs, and driving trucks. Furthermore, humanoid robots will soon afterward beat humans in 5k races. Interestingly, North American researchers predicted that it will take about 74 years to reach high-level machine intelligence across human tasks, while Asian researchers thought it would take only 30 years. Indeed, there is currently lots of excitement and optimism, for example in China, about the potential of AI, with large investments into application-oriented AI research by both the government and the private sector.

In the following, to structure the discussion of which kinds of research topics will be popular in 2027, I view AI as the study of agents. I distinguish rational agents (that make good decisions), believable agents (that interact like humans), and cognitive agents (that think like humans). A large amount of AI research currently focuses on building rational agents on the task level, by studying single AI techniques in isolation and applying them to single tasks, resulting in narrowly intelligent agents. The current excitement about AI is often based on the power of a small set of AI techniques combined with the availability of large amounts of data (due to progress on sensor technologies and the ubiquity of both smart phones and the internet) as well as progress in robotics for the embodiment of AI. For example, the term "big data" is typically used to characterize the current AI era, driven by the capability of machine learning techniques. In fact, 49 percent of submissions to the International Joint Conference on AI (IJCAI) in 2017 listed "machine learning," which is about acquiring good models of the world, as their first keyword. However, these models need to serve a bigger purpose, for example, to make good decisions. While machine learning can sometimes acquire evaluation functions that help with making good decisions (as AlphaZero (SHS+17), the successor of AlphaGo, shows), it often requires lots of data, has limited capability for transfer, and has difficulty integrating prior knowledge (Mar16), and is thus of limited help for making decisions in novel or dynamic environments. Perhaps the term "big decisions" will be used to characterize the AI era around 2027, driven also by the capability of AI planning and similar AI techniques. Current faculty hiring in the US lags in research areas such as AI planning, although the research community is already heading in that direction. For example, the popular textbook by Stuart Russell and Peter Norvig (RN09) views AI as the study of rational agents and thus essentially as a science of making good decisions with respect to given objectives. But many other disciplines could be characterized similarly, including operations research, decision theory, economics, and control theory (Koe12). AI researchers make use of techniques from some of these disciplines already.

3Acknowledgments: I would like to thank Paul Rosenbloom and Wolfgang Honig from the University of Southern California for helpful comments.

For example, the textbook by Stuart Russell and Peter Norvig discusses utility theory (from decision theory), game theory and auctions (from economics), and Markov decision processes (from operations research), yet research collaborations across these and other disciplines are still developing, which is why we should reach out more to researchers in other decision-making disciplines. There already exist some good but narrow interfaces, such as the Conference on the Integration of Constraint Programming, AI, and Operations Research (CPAIOR) or the ACM Conference on Economics and Computation (EC). There also exists an attempt to put a broader interface in place, namely the International Conference on Algorithmic Decision Theory (ADT), which "seeks to bring together researchers and practitioners coming from diverse areas such as AI, Database Systems, Operations Research, Discrete Mathematics, Theoretical Computer Science, Decision Theory, Game Theory, Multiagent Systems, Computational Social Choice, Argumentation Theory, and Multiple Criteria Decision Aiding in order to improve the theory and practice of modern decision support" (sma.uni.lu/adt2017). Such interdisciplinary integration can result in economic success. CPAIOR, for example, started in 2004 (preceded by five workshops) and still thrives. In parallel, ILOG successfully integrated software for constraint programming and linear optimization and was acquired by IBM in 2009. My hope is that by 2027 we will have a thriving conference on intelligent decision making that is attended by researchers from all decision-making disciplines, including AI. Of course, the different decision-making techniques also need to be integrated into systems. AI can lead the way by developing agent architectures with good theoretical foundations for how different parts should interact, resulting in more broadly intelligent agents on the job level (that is, across tasks). This is no simple feat, as the restricted applications of current robot architectures show. Integrating decision-making techniques from different disciplines is even more difficult, for example because of their different assumptions (often due to the different application areas studied by different disciplines) and different ideas about what constitutes a good solution (due to disciplinary training), which is why we should start to give students multidisciplinary training in decision making.

While rational agents will continue to be important, human-aware agents will become more and more important and, with them, also believable agents that allow for interaction with gestures, speech, and other human-like modalities, understand human conventions and emotions, predict human behavior, and, in general, appear to be human-like. We already use intelligent assistants on a variety of platforms (such as Apple's Siri or Amazon's Echo) and will soon routinely have conversations, including negotiations, with all kinds of apparatus, perhaps including our elevators and toilets.

The progress on cognitive agents, one of the early dreams of AI, is more difficult to judge. The research community currently works on hybrid approaches that combine ideas from symbolic, statistical, and/or neural processing, and on a community-wide "Common Model of Cognition" (KT16).

Finally, AI researchers and practitioners are slowly gaining an understanding that they should not just develop AI techniques but also have some say in how they are being used. We need to ask ourselves questions such as:

Do we need to worry about the reliability, robustness, and safety of AI systems and, if so, what should we do about it? How do we guarantee that their behavior is consistent with social norms and human values? Who is liable for incorrect AI decisions? How do we ensure that AI technology impacts the standard of living, the distribution and quality of work, and other social and economic aspects in the best possible way? (BGK+17)

AAAI and ACM recently co-founded the AAAI/ACM Conference on AI, Ethics, and Society (AIES, www.aies-conference.com) to come up with answers to these questions. AIES was filled to capacity. I hope that AIES and its topics will be even more popular in 2027.

Kevin Leyton-Brown, University of British Columbia4

It is a daunting task to predict the direction AI research will take a decade from now, particularly given the checkered history of such prognostication in the past. In an attempt to go beyond idle speculation, I have therefore structured this reflection around three different approaches a forecaster might use in making such predictions. Despite recognizing the likelihood that some of what follows will appear foolish in retrospect, I strive to make bold claims about what the future will hold. I hope that AI researchers of 2027 will forgive me!

I. Forecasting via prototypes. Ten years sounds like a long time, but in fact it takes about that long for technologies to move from the lab to widespread practice: the transformative technologies of today existed in prototype form a decade ago. One approach to AI forecasting is thus to look at today's prototypes and to imagine their more widespread deployment.

4Acknowledgments: I would like to thank the members of the AI100 2015–16 Study Panel for helping to shape my thinking about the future of AI: P. Stone, R. Brooks, E. Brynjolfsson, R. Calo, O. Etzioni, G. Hager, J. Hirschberg, S. Kalyanakrishnan, E. Kamar, S. Kraus, D. Parkes, W. Press, A. Saxenian, J. Shah, M. Tambe, A. Teller (SBB+16).

Broadly speaking, today's AI prototypes offer tailored solutions for specific tasks rather than general intelligence. Some AI research topics that I expect to see making considerably broader social impact by 2027 include:

• Non-text input modalities (vision; speech)
• Consumer modeling (recommendation; marketing)
• Cloud services (translation; question answering; AI-mediated outsourcing)
• Transportation (automated trucking; some self-driving cars)
• Industrial robotics (factories; some drone applications)
• AI knowledge work (logistics planning; radiology; legal research; call centers)
• Policing & security (electronic fraud; cameras; predictive policing)

By considering where today's prototypes have achieved less traction, it is also possible to forecast sectors in which AI technologies are less likely to take off quickly. Overall, these are often areas in which major entrenched regulatory regimes need to be navigated; where there exist substantial social or cultural barriers to the adoption of new technologies; and/or where broad impact would depend on nontrivial hardware breakthroughs. Many such sectors are the focus of concerted research today and are likely to remain important in the research landscape in 2027; however, I believe that they are less poised for short-term practical impact. Some key examples are childcare, healthcare, and eldercare; education; consumer robots beyond niche applications; and semantically rich language understanding.

II. Forecasting via consumer desires. A second strategy is to assume that investment, entrepreneurial energy, and industrial R&D will focus on meeting consumer needs that are already apparent today, and hence that these areas will see future breakthroughs.


Labor automation. A fundamental consumer need is for someone else to perform unpleasant, routine tasks. The promise of automating such tasks has been part of the AI story from the beginning (e.g., Shakey the robot delivering coffee in an office setting (Nil84)) and is increasingly becoming a reality (e.g., robot vacuum cleaners in the home; ordering books via Amazon Alexa). However, there is much scope for additional innovation in this space, centering on currently unaddressed tasks to which large numbers of people currently devote considerable time. Some potential examples are household cleaning, yard work, pet care, shopping, and food preparation. Some needs may be met by directly replacing human with robotic labor; others may be met via "gig economy" platforms that use AI on the back end to more efficiently allocate human labor; and still others may be met in entirely new ways, such as by combining AI-driven logistics platforms with centralized industrial processes (e.g., replacing supermarkets with apps, warehouses, and courier services).

Social connection. We are highly social creatures, and are willing to pay handsomely for technologies that help us to make and strengthen connections with others. Current instantiations of such technologies (e.g., social networks; remote work platforms; online dating) are highly valuable, but relatively primitive from an AI perspective, relying mainly on micro-blogging, direct messaging, user modeling, and newsfeed curation. There is scope for more AI mediation of social connection, reducing the frictions that prevent people from easily finding others to interact with in the moment and making those interactions richer.

Entertainment. Our research community's focus on solving industrially or socially important problems sometimes may cause us to pay insufficient attention to AI's potential for transforming the entertainment industry, which addresses another fundamental consumer need. Gaming is already bigger than Hollywood (Che17), but the future of AI in entertainment will go far beyond what we now see as computer games. Future AI entertainments will increasingly be interactive and multimodal, and will intersect with sectors we now see as distinct, such as fitness, learning, performing useful tasks, and spending quality time with friends. AI will also play an increasingly critical role in the creation, delivery, and personalization of traditional, broadcast entertainment such as TV.

Education. Education is poised to grow as a consumer sector (ROO18), both as workers respond to the need to reskill and as individuals with extra leisure time follow their passions. I argued above that education will not be transformed by AI in a decade; however, particularly because of the dual role many AI researchers hold as educators, there is nevertheless considerable scope for AI technology to make incremental progress in improving the content and delivery of educational materials. Some examples include tailoring lessons to a student's skill level, making exercises more interactive, facilitating communication between both peers and instructors outside classroom settings, and reducing the drudgery currently entailed by grading student work. Such innovations could improve student outcomes, lower the cost of education, and broaden its reach.

III. Forecasting via extrapolation. A final strategy is to ask what we will be concerned with if current progress in AI continues.

AI beyond ML. Much recent progress in AI has arisen from improved techniques for learning to make predictions: finding a model that is currently built by hand and replacing it by a model that is learned from data (LBH15). We might therefore ask what problems would remain or become important if our capacity to build black-box models from data were to become arbitrarily effective. It is clear that even in such a world, we would be far from having achieved strong AI. Some problems that would remain open are still close to machine learning: explaining why a model made the prediction it did, or certifying fairness or compliance with legal requirements. Others, such as making counterfactual predictions (how would a system perform under a perturbation of the generating distribution?), go beyond the assumptions inherent in most supervised learning methods, requiring instead new, structural assumptions about an underlying setting. Still other problems extend beyond prediction to decision making, both in single-actor settings (e.g., optimization; planning) and multi-agent domains (weighing competing objectives via preference aggregation or mechanism design).
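The counterfactual point can be illustrated with a toy simulation (my own hypothetical example, not from the article): a regression that predicts well under the observed distribution gives the wrong answer under an intervention, because it has captured a correlation induced by a hidden confounder rather than a causal mechanism.

```python
import random

random.seed(0)

# A hidden confounder z drives both x and y; x has NO causal effect on y.
data = []
for _ in range(10_000):
    z = random.random()
    x = z + random.gauss(0, 0.05)
    y = z + random.gauss(0, 0.05)
    data.append((x, y))

# Ordinary least squares fit of y on x predicts well observationally.
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
slope = (sum((x - mx) * (y - my) for x, y in data)
         / sum((x - mx) ** 2 for x, _ in data))

# Under the intervention do(x := x + 1), y does not move (z is unchanged),
# yet the regression predicts y to rise by `slope` -- the counterfactual gap.
print(f"estimated slope: {slope:.2f}; true causal effect of x on y: 0.0")
```

The fitted slope comes out close to 1 even though perturbing x leaves y untouched; answering the perturbed question correctly requires structural assumptions about z, exactly the kind of assumption supervised learning alone does not provide.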

Increasing regulation. AI is touching the lives of individuals, the economy, and the political system in ever increasing ways. Many in society find specific instantiations of AI frightening; many special interests are threatened by new technologies. It is thus inevitable that politicians will increasingly see a need to respond, and that AI technologies will face increasing regulation. This is something we should welcome; any mature technology must be accountable to the society in which it operates. But the details will matter enormously. A major focus of AI research in 2027 will be helping to shape regulations before they become law and designing systems within the constraints implied by these regulations afterwards.

Superhuman intelligence. AI systems will increasingly become capable of reaching human-level performance in a variety of application domains. There is nothing special about this threshold, and so we should expect the advent of AI systems exhibiting superhuman intelligence in a growing set of domains. This is often cast as a frightening prospect, but I argue that we will quickly become comfortable with it. After all, superhuman intelligences are already commonplace: governments, corporations, and NGOs are all autonomous agents that exhibit behavior much more sophisticated and complex than that of any human. We are typically unconcerned that no one person can even fully understand decisions made by the French government, by General Motors, or by the Red Cross. Instead, we aim to manage and to gain high-level understanding about such actors via reporting requirements, specifications of the interests that they must act to advance, and laws that forbid bad behavior. Society encourages the creation of such superhuman intelligences today for the same reason it will welcome superhuman AI tomorrow: many important problems are beyond the reach of individual people. Some key examples are improved collective decision making; more efficient allocation and use of scarce resources; addressing under-served communities; and limiting and responding to climate change.

References

E. Burton, J. Goldsmith, S. Koenig, B. Kuipers, N. Mattei, and T. Walsh. Ethical considerations in Artificial Intelligence courses. Artificial Intelligence Magazine, 38(2):22–34, 2017.

Tim Berners-Lee, James Hendler, and Ora Las-sila. The semantic web. Scientific American,284(5):34–43, 2001.Chet Chetterson. How games overtookthe movie industry, 2017. http://www.geekadelphia.com/2017/06/29/how-games-overtook-the-movie-industry,Accessed March 22, 2018.M.W.M. Gamini Dissanayake, Paul Newman,Steve Clark, Hugh F Durrant-Whyte, andMichael Csorba. A solution to the simultaneouslocalization and map building (SLAM) problem.IEEE Transactions on Robotics and Automa-tion, 17(3):229–241, 2001.Fausto Giunchiglia and Mattia Fumagalli. Con-cepts as (recognition) abilities. In Formal Ontol-ogy in Information Systems: Proc. 9th Int. Con-ference (FOIS 2016), pages 153–166, 2016.K. Grace, J. Salvatier, A. Dafoe, B. Zhang, andO. Evans. When will AI exceed human per-formance? Evidence from AI experts. ArXivPreprint arXiv:1705.08807v2, 2017.Lise Getoor and Ben Taskar. Introduction toStatistical Relational Learning (Adaptive Com-putation and Machine Learning). The MITPress, 2007.S. Koenig. Making good decisions quickly. TheIEEE Intelligent Informatics Bulletin, 13(1):14–20, 2012.Gal A. Kaminka, Rachel Spokoini-Stern, YanivAmir, Noa Agmon, and Ido Bachelet. Molecularrobots obeying Asimov’s three laws of robotics.Artificial Life, 23(3):343–350, 2017.I. Kotseruba and J. Tsotsos. A review of 40years of cognitive architecture research: Corecognitive abilities and practical applications.ArXiv Preprint arXiv:1610.08602, 2016.Yann LeCun, Yoshua Bengio, and Geoffrey Hin-ton. Deep learning. Nature, 521(7553):436,2015.G. Marcus. Deep learning: A critical appraisal.ArXiv Preprint arXiv:1801.00631, 2016.John McCarthy. Programs with common sense.RLE and MIT Computation Center, 1960.John McCarthy. Generality in Artificial In-telligence. Communications of the ACM,30(12):1030–1035, 1987.Nils J Nilsson. Shakey the robot. Technical re-port, SRI International, Menlo Park, CA, 1984.Luc De Raedt, Kristian Kersting, and SriraamNatarajan. Statistical Relational Artificial Intel-

18

Page 19: AI MATTERS, VOLUME 4, ISSUE 14(1) 2018 AI Matters · 2020-06-06 · AI MATTERS, VOLUME 4, ISSUE 14(1) 2018 Amy McGovern is a Co- Editor of AI Matters. She is an Associate Profes-sor

AI MATTERS, VOLUME 4, ISSUE 1, 4(1) 2018


Maria Gini is a Professor in the Department of Computer Science and Engineering at the University of Minnesota. She studies decision making for autonomous agents, with applications to distributed task allocation for robots, robotic exploration, teamwork for search and rescue, and navigation in dense crowds. She is an AAAI and IEEE Fellow, and current president of IFAAMAS. She is Editor-in-Chief of Robotics and Autonomous Systems, and is on the editorial board of numerous journals, including Artificial Intelligence, Autonomous Agents and Multi-Agent Systems, and Integrated Computer-Aided Engineering.

Noa Agmon is an Assistant Professor in the Department of Computer Science at Bar-Ilan University, Israel. Her research focuses on various aspects of single- and multi-robot systems, including multi-robot patrolling, formation, coverage, and navigation, with specific interest in robotic behavior in adversarial environments. She is a board member of IFAAMAS, and on the editorial boards of JAIR and IEEE Transactions on Robotics.

Fausto Giunchiglia is a Professor in the Department of Computer Science and Information Engineering (DISI) at the University of Trento, Italy. His current research is focused on the integration of perception and knowledge representation as the basis for understanding how this generates the diversity of knowledge. His ultimate goal is to build machines which reason like humans. He is an ECCAI Fellow. He was a President and a Trustee of IJCAI, an Associate Editor of JAIR, and a member of the editorial boards of many journals, including the Journal of Data Semantics and the Journal of Autonomous Agents and Multi-Agent Systems.


Sven Koenig is a Professor in Computer Science at the University of Southern California. Most of his research centers around techniques for decision making that enable single situated agents (such as robots or decision-support systems) and teams of agents to act intelligently in their environments and exhibit goal-directed behavior in real time, even if they have only incomplete knowledge of their environment, imperfect abilities to manipulate it, limited or noisy perception, or insufficient reasoning speed. He is an AAAI and AAAS Fellow and chair of ACM SIGAI. Additional information about Sven can be found on his webpage: idm-lab.org.

Kevin Leyton-Brown is a Professor of Computer Science at the University of British Columbia in Vancouver, Canada. He studies the intersection of computer science and microeconomics, addressing computational problems in economic contexts and incentive issues in multiagent systems. He also applies machine learning to various problems in artificial intelligence, notably the automated design and analysis of algorithms for solving hard computational problems. He is an AAAI Fellow, chair of ACM SIGecom, an IFAAMAS board member, and has served as an Associate Editor for AIJ, JAIR, and ACM-TEAC.


AI Education Matters: Teaching Hidden Markov Models
Todd W. Neller (Gettysburg College; [email protected])
DOI: 10.1145/3203247.3203252

Introduction

In this column, we share resources for learning about and teaching Hidden Markov Models (HMMs). HMMs find many important applications in temporal pattern recognition tasks such as speech/handwriting/gesture recognition and robot localization. In such domains, we may have a finite state machine model with known state transition probabilities, state output probabilities, and state outputs, but lack knowledge of the states generating such outputs. HMMs are useful in framing problems where external sequential evidence is used to derive underlying state information (e.g., intended words and gestures).
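To make that setup concrete, here is a minimal sketch in Python of a discrete HMM and of how it generates the outputs we observe. The weather/activity model and all of its probabilities are a hypothetical illustration for this column, not taken from any of the resources below:

```python
import random

random.seed(0)  # reproducible sampling

# A toy HMM: hidden weather states emit observable activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def sample(dist):
    """Draw one outcome from a {outcome: probability} mapping."""
    r, acc = random.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point rounding

def generate(n):
    """Sample n steps: the hidden states and the outputs we actually see."""
    state = sample(start_p)
    hidden, observed = [], []
    for _ in range(n):
        hidden.append(state)
        observed.append(sample(emit_p[state]))
        state = sample(trans_p[state])
    return hidden, observed

hidden, observed = generate(5)
```

In an HMM inference problem, only `observed` is available; the algorithms surveyed below recover information about `hidden`.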

Video Introductions

While there are many videos online dedicated to the topic of HMMs, I'll highlight two here. Daphne Koller's 12-minute video "Template Models: Hidden Markov Models - Stanford University"1 provides a brief application-focused overview of HMMs and can set a basic context and expectation for the value of further learning in this area.

A full 52-minute UBC lecture by Nando de Freitas, "undergraduate machine learning 9: Hidden Markov models - HMM"2, is a much-recommended classroom introduction to the general topic of inference in probabilistic graphical models, with focus on HMM representation, prediction, and filtering.

Texts and Articles

Russell and Norvig’s Artificial Intelligence: amodern approach (Russell & Norvig, 2009)gives a brief introduction to HMMs in §15.3,and learning HMMs as an application of theexpectation-maximization (EM) algorithm isdescribed in §20.3.3.

Copyright © 2018 by the author(s).
1 https://youtu.be/mNSQ-prhgsw
2 https://youtu.be/jY2E6ExLxaw

Of the many options to give focused attention to HMMs, consider Speech and Language Processing by Daniel Jurafsky and James H. Martin (Jurafsky & Martin, 2009). The draft third edition chapter 9 on HMMs is currently freely available from Jurafsky's website3.

Also often recommended is Lawrence Rabiner's tutorial (Rabiner, 1990)4.

Christopher Bishop’s Pattern Recognition andMachine Learning (Bishop, 2006) §13.2 cov-ers HMMs, maximum likelihood parameter es-timation, the forward-backward algorithm, thesum-product algorithm, the Viterbi algorithm,and extensions to HMMs.

More text and article resources have been recommended in the StackExchange thread "Resources for learning Markov chain and hidden Markov models"5.

Other Resources

Numerous MOOCs touch on HMMs. Udacity's "Intro to Artificial Intelligence" course by Peter Norvig and Sebastian Thrun has coverage of HMMs6, as does "Artificial Intelligence - Probabilistic Models"7. In Coursera's Peking University Bioinformatics course, Ge Gao lectures on HMMs8. There is no shortage of online course materials for HMMs.

3 https://web.stanford.edu/~jurafsky/slp3/9.pdf

4 http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf

5 https://stats.stackexchange.com/questions/3294/resources-for-learning-markov-chain-and-hidden-markov-models

6 https://www.udacity.com/course/intro-to-artificial-intelligence--cs271

7 https://www.udacity.com/course/probabalistic-models--cx27

8 https://www.coursera.org/learn/bioinformatics-pku/lecture/7pbUo/hidden-markov-model


Jason Eisner has written about his spreadsheet9 for teaching the forward-backward algorithm (Eisner, 2002). More diverse resources, including toolkits and open-source algorithm implementations, may be sampled via the Quora question "What are some good resources for learning about Hidden Markov Models?"10.
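For readers who prefer code to a spreadsheet, the forward pass can be sketched in a few lines: it accumulates the joint probability of the observations so far and the current hidden state, then sums out the final state to obtain the marginal probability of the whole observation sequence. The toy weather/activity parameters below are a hypothetical illustration, not Eisner's example:

```python
# Hypothetical toy parameters: hidden weather states, observable activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def forward(obs):
    """Marginal probability P(obs), summing over all hidden state paths."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[sp] * trans_p[sp][s]
                                       for sp in states)
                 for s in states}
    return sum(alpha.values())

prob = forward(["walk", "shop", "clean"])  # ≈ 0.0336
```

The backward pass is the mirror-image recursion run right to left; combining the two gives the per-step state posteriors that Eisner's spreadsheet visualizes.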

Model AI Assignments

Model AI Assignments are free, peer-reviewed assignment materials made available in order to advance AI education. Two Model AI Assignments to date experientially teach HMMs:

Sravana Reddy’s recent Model AI Assign-ment “Implementing a Hidden Markov ModelToolkit”11 is targeted to advanced undergrad-uates and beginning graduate students seek-ing an introduction to Natural Language Pro-cessing (NLP). In the assignment, students“implement a toolkit for Hidden Markov Mod-els with discrete outputs, including (1) ran-dom sequence generation, (2) computing themarginal probability of a sequence with theforward and backward algorithms, (3) comput-ing the best state sequence for an observa-tion with the Viterbi algorithm, and (4) super-vised and unsupervised maximum likelihoodestimation of the model parameters from ob-servations, using the Baum Welch algorithmfor unsupervised learning.”

John DeNero and Dan Klein’s popular ModelAI Assignment “The Pac-Man Projects” hasa probabilistic tracking project, “Project #4:Ghostbusters”12 in which “probabilistic infer-ence in a hidden Markov model tracks themovement of hidden ghosts in the Pac-Manworld. Students implement exact inferenceusing the forward algorithm and approximateinference via particle filters.”

9 http://cs.jhu.edu/~jason/papers/eisner.hmm.xls

10 https://www.quora.com/What-are-some-good-resources-for-learning-about-Hidden-Markov-Models

11 http://modelai.gettysburg.edu/2017/hmm

12 http://modelai.gettysburg.edu/2010/pacman/projects/tracking/busters.html

Your Favorite Resources?

If there are other resources you would recommend, we invite you to register with our wiki and add them to our collection at http://cs.gettysburg.edu/ai-matters/index.php/Resources.

References

Bishop, C. M. (2006). Pattern recognition and machine learning (Information Science and Statistics). Secaucus, NJ, USA: Springer-Verlag New York, Inc.

Eisner, J. (2002). An interactive spreadsheet for teaching the forward-backward algorithm. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1 (pp. 10-18). Stroudsburg, PA, USA: Association for Computational Linguistics. Retrieved from https://cs.jhu.edu/~jason/papers/eisner.tnlp02.pdf

Jurafsky, D., & Martin, J. H. (2009). Speech and language processing (2nd ed.). Upper Saddle River, NJ, USA: Prentice Hall.

Rabiner, L. R. (1990). A tutorial on hidden Markov models and selected applications in speech recognition. In A. Waibel & K.-F. Lee (Eds.), Readings in speech recognition (pp. 267-296). San Francisco, CA, USA: Morgan Kaufmann.

Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Upper Saddle River, NJ, USA: Prentice Hall.

Todd W. Neller is a Professor of Computer Science at Gettysburg College. A game enthusiast, Neller researches game AI techniques and their uses in undergraduate education.


Obituary: Jon Oberlander
Aaron Quigley (University of St Andrews; [email protected])
DOI: 10.1145/3203247.3203253

Introduction

Professor Jon Oberlander (University of Edinburgh) passed away suddenly on the 19th of December 2017. Jon was remembered by friends, family, and colleagues at a deeply moving ceremony in the Playfair Library at the University of Edinburgh this January. Stories from his life as a friend and father attest to the deep impact Jon had on all those around him, and the lasting legacy he now leaves behind.

Jon was born in Edinburgh, to which he returned after his undergraduate studies at Pembroke College, Cambridge, in 1983. He remained at the University of Edinburgh for his PhD studies and then as a postdoctoral fellow, research associate, lecturer, and reader, before finally being promoted to Professor in 2005. His Personal Chair in Epistemics speaks to his grounding in Philosophy and his lifelong interest in the philosophical theory of knowledge and its scientific study.

In Cognitive Science, Jon was well known and regarded for his work on linguistic representational systems. He described his work as "getting computers to talk (or write) like individual people". This resulted in research efforts both to understand how people express themselves and to develop systems that might adapt to people. This view of the world resonated throughout his work with collaborators around Scotland and across the world in languages and social science, cultural heritage, psychology, linguistics, and computer science.

Jon co-founded the Scottish Informatics and Computer Science Alliance (SICSA) in 2008 and served as its first director and later as its graduate academy director. Jon helped lay the foundations for the international excellence in University-led research, education, and knowledge exchange activity in Computer Science and Informatics in SICSA today. Jon was instrumental in driving SICSA forward in a time of great change in the University sector in Scotland. His resolve and good humour helped draw together this community of scholars in a spirit of cooperation and collaboration.

Copyright © 2018 by the author(s).

In the past few years Jon had academic responsibility for the development of the University of Edinburgh's new Bayes Centre for Data Science and Technology. In time, he became the Assistant Principal for Data Technology and the Director of the Bayes Centre.

Working with Jon, first on the Smart Tourism project and later on Palimpsest: Literary Edinburgh, was a great opportunity for me. His curiosity and creativity brought a sense of deep wonder and inspiration to our interdisciplinary team. He helped build bridges and acted to translate the aspirations and ideas of researchers from many fields of inquiry. These projects, and countless more, are all the better for having had a touch of Jon's intellectual energy, rigour, and sense of fun.

"The most truly generous persons are those who give silently without hope of praise or reward," Carol Brink said. In my short time knowing Jon, he exemplified this. He gave generously of his time, energy, and ideas to all those who came to know him. Those who knew him for a long time spoke of his lifelong generosity of spirit and how he enjoyed reaching out to help lift others up. He often helped new people who had moved to Scotland, and students, in their careers and life.

Jon was a true renaissance man and proud Scot, who was equally happy in the countryside and cities of Scotland, his home. His sense of optimism and adventure has left a lasting impression on everyone who knew him.

His impact on academia in Scotland and beyond our borders will continue to be felt for many years to come. Jon's approach to understanding knowledge has contributed to the new Scottish Enlightenment we see today, underpinned by Computing.

Spending time with Jon was always a great pleasure and we will remember him fondly.


Jon admired the work of The Woodland Trust, so donations in his memory are welcome: https://www.justgiving.com/fundraising/jon-oberlander

Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews. He is co-founder of SACHI and is its current director. His appointment is part of SICSA, the Scottish Informatics and Computer Science Alliance. Aaron's research interests include surface and multi-display computing, body-worn interaction, human computer interaction, pervasive and ubiquitous computing, and information visualisation. He has published over 175 internationally peer-reviewed publications including edited volumes, journal papers, book chapters, and conference and workshop papers, and holds 3 patents. In SACHI he currently supervises PhD students Iain Carson, Daniel Rough, Evan Brown, Guilherme Carneiro, and Hui-Shyong Yeo, along with acting as second supervisor for Max Nicosia at the University of Cambridge. Aaron is the ACM SIGCHI Vice President for Conferences and a Distinguished Speaker of the ACM.


AI Policy
Larry Medsker (George Washington University; [email protected])
DOI: 10.1145/3203247.3203254

Abstract

AI Policy is a regular column in AI Matters featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog (https://sigai.acm.org/aimatters/blog/).

Introduction

The SIGAI Public Policy goals are to:

• promote discussion of policies related to AI through posts in the AI Matters blog on the 1st and 15th of each month,

• help identify external groups with common interests in AI Public Policy,

• encourage SIGAI members to partner in policy initiatives with these organizations, and

• disseminate public policy ideas to the SIGAI membership through articles in the newsletter.

I welcome everyone to make blog comments to enrich our knowledge base of facts and ideas that represent SIGAI members.

AIES Conference

The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) was held February 2-7, 2018, at the Hilton New Orleans Riverside. The AAAI/ACM Conference on AI, Ethics, and Society (AIES) was held at the beginning of AAAI-18. Developers and participants included members of SIGAI and USACM.

The AIES conference description follows: "As AI is becoming more pervasive in our life, its impact on society is more significant, and concerns and issues are raised regarding aspects such as value alignment, data handling and bias, regulations, and workforce displacement.

Copyright © 2018 by the author(s).

Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as ethics, philosophy, economics, sociology, psychology, law, history, and politics. In order to address these issues in a scientific context, AAAI and ACM have joined forces to start this new conference."

The full schedule for the AIES 2018 Conference is available at www.aies-conference.com. A panel relevant to our policy blog discussions, "Prioritizing Ethical Considerations in Intelligent and Autonomous Systems - Who Sets the Standards", was designed by our IEEE/ACM committee and will be covered in a future post.

Educational Policy for AI and an Uncertain Labor Market

In the next few blog posts, we will present information and generate discussion on policy issues at the intersection of AI, the future of the workforce, and educational systems. Because AI technology and applications are developing at such a rapid pace, current policies will likely be unable to sufficiently address workforce needs even in 2024 - the time frame for middle school students to prepare for low-skill jobs and for beginning college students to prepare for higher-skilled work. Transparency in educational policies requires goal setting based on better data and insights into emerging technologies, likely changes in the labor market, and corresponding challenges to our educational systems. The topics and resources below will be the focus of future AI Policy posts.

Technology

IBM's Jim Spohrer has an outstanding set of slides1, "A Look Toward the Future", incorporating his rich experience and current work on anticipated impacts of new technology, with milestones every ten years through 2045. Radical developments in technology would challenge public policy in ways that are difficult to imagine, but current policymakers and the AI community need to try. Currently, AI systems are superior to human capabilities in calculating and game playing, and show near-human-level performance in data-driven speech and image recognition and in driverless vehicles. By 2024, large advances are likely in video understanding, episodic memory, and reasoning.

1 https://www.slideshare.net/spohrer/future-20171110-v14

The roles of future workers will involve increasing collaboration with AI systems in the government and public sector, particularly with autonomous systems, but also in traditional areas of healthcare and education. Advances in human-technology collaboration also lead to issues relevant to public policy, including privacy and algorithmic transparency. The increasing mix of AI with humans in ubiquitous public and private systems puts a new emphasis on education for understanding and anticipating challenges in communication and collaboration.

Workforce

Patterns for the future workforce in the age of autonomous systems and cognitive assistance are emerging. Please take a look at Andrew McAfee's presentation at the recent Computing Research Summit. Also, see the latest McKinsey report2, "Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation." Among other things, this quote from page 20 catches attention: "Automation represents both hope and challenge. The global economy needs the boost to productivity and growth that it will bring, especially at a time when aging populations are acting as a drag on GDP growth. Machines can take on work that is routine, dangerous, or dirty, and may allow us all to use our intrinsically human talents more fully. But to capture these benefits, societies will need to prepare for complex workforce transitions ahead. For policy makers, business leaders, and individual workers the world over, the task at hand is to prepare for a more automated future by emphasizing new skills, scaling up training, especially for midcareer workers, and ensuring robust economic growth."

2 https://goo.gl/rviGDC

Education for the Future

An article in Education Week, "The Future of Work Is Uncertain, Schools Should Worry Now"3, addresses the issue of automation and artificial intelligence disrupting the labor market and what K-12 educators and policymakers need to know. A recent report by the Bureau of Labor Statistics, "STEM Occupations: Past, Present, and Future"4, is consistent with the idea that even in STEM professions workforce needs will be less at programming levels and more in ways to collaborate with cognitive assistance systems and in human-computer teams. Demands for STEM professionals will be for verifying, interpreting, and acting on machine outputs; designing and assembling larger systems with robotic and cognitive components; and dealing with ethics issues such as bias in systems and algorithmic transparency.

Upcoming

Some themes planned for the SIGAI Public Policy posts for 2018 include algorithmic accountability and the impacts of AI and Data Science on the future of education and the labor market. We will look at potential policies for today that could mitigate impacts of AI on individuals and society. Policy areas include innovative educational systems, ideas for alternate economic systems, and regulatory changes to promote safe and fair technological innovation. We welcome your input and discussion at the AI Matters blog!

3 https://www.edweek.org/ew/articles/2017/09/27/the-future-of-work-is-uncertain-schools.html

4 https://www.bls.gov/spotlight/2017/science-technology-engineering-and-mathematics-stem-occupations-past-present-and-future/pdf/science-technology-engineering-and-mathematics-stem-occupations-past-present-and-future.pdf


Larry Medsker is a Research Professor of Physics and Director of the Data Science graduate program at The George Washington University. Dr. Medsker is a former Dean of the Siena College School of Science and a Professor in Computer Science and in Physics, where he was a co-founder of the Siena Institute for Artificial Intelligence. His research and teaching continue at GW on the nature of humans and machines and the impacts of AI on society and policy.a Professor Medsker's research in AI includes work on artificial neural networks and hybrid intelligent systems. He is the Public Policy Officer for the ACM SIGAI.

a http://www.humai.org/humai/ and http://humac-web.org/


Creating the Human Standard for Ethical Autonomous and Intelligent Systems (A/IS)
John C. Havens (IEEE; [email protected])
DOI: 10.1145/3203247.3203255

Introduction

Like most people, I first encountered Artificial Intelligence through movies - The Terminator, Blade Runner, 2001. As a rule, the future in these stories was always dystopian, which I found irritating. If humanity was able to create such amazing technology, wouldn't they also have created ethical codes or standards to keep things from going awry? Granted, a film about a code of ethics isn't as sexy as killer robots, but picturing utopian futures powered or assisted by AI seemed, in my estimation, like something not enough people were doing.

I've been writing about technology since around 2011 for publications like The Guardian, Slate, and HuffPo. But it was an ongoing series on Artificial Intelligence I wrote for Mashable that led me to my current work with IEEE. In 2014 I wrote an article called "Coming to Terms With Humanity's Inevitable Union With Machines"1 as a way to genuinely confront fears I was facing about the nature not of killer robots but of algorithms that might make choices for me to the point where I'd lose myself. Not being an engineer or programmer by training, I realize now this was an uninformed perspective, but it's one I believe the general public often shares when not fully understanding how AI functions under the hood. The article led to my writing my book, "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines"2, which I spoke about at the SXSW conference in Austin as a guest of IEEE.

When I spoke I had already done a great deal of research to identify any existing Codes of Ethics for AI (this was in 2014). Everyone I interviewed kept quoting Asimov's Laws of Robotics, which, as a newbie to AI, I found quite alarming. Didn't people realize these came from his 1942 short story "Runaround"?3 While I appreciated that the nature of the story was to demonstrate the conundrum of trying to have one simple set of laws apply to any robot or system (e.g., "do no harm" doesn't make sense if you're creating a surgical robot), I also didn't understand why nobody had created a more formal and updated set of Principles.

Copyright © 2018 by the author(s).
1 https://mashable.com/2014/04/11/digital-humanity/
2 https://www.amazon.com/Heartificial-Intelligence-Embracing-Humanity-Maximize/dp/0399171711

Fortunately, during my talk at SXSW there were two people from IEEE staff leadership in the audience who agreed with my assessment that there was a need to identify a set of global principles for AI. They recommended I present my ideas to IEEE on how they could create these Principles, which I did a few months after SXSW.

The Council and The Chairs

When I presented my ideas in 2015 to members of IEEE's Management Council, it was the first time I met Konstantinos Karachalios. He's the Managing Director for the IEEE Standards Association and also sits on IEEE's Management Council. Konstantinos is the person who helped shape my initial ideas into what has now become The IEEE Global Initiative, which also inspired the P7000 Standards Working Group series.

The Chair is Raja Chatila. After the initial core structure for The Initiative was in place, Konstantinos approached Raja, who was at that time completing his tenure as President of the IEEE Robotics and Automation Society, to talk to him about The Initiative. Thankfully Raja was interested and began further shaping and developing the structure and makeup of The Initiative.

Our Vice-Chair is Kay Firth-Butterfield. I first met Kay while she was at Lucid AI4, serving as Chief Officer of their Ethics Advisory Panel. Beyond being a barrister by trade and a gifted speaker, Kay is one of the most connected and respected people in the AI Ethics world. She's now serving as the Head of Artificial Intelligence and Machine Learning at the World Economic Forum.

3 https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) was launched in April of 2016 to move beyond the paranoia and the uncritical admiration regarding autonomous and intelligent technologies, and to illustrate that aligning technology development and use with ethical values will help advance innovation while diminishing fear in the process.

The goal of The IEEE Global Initiative is to incorporate ethical aspects of human well-being that may not automatically be considered in the current design and manufacture of A/IS technologies, and to reframe the notion of success so human progress can include the intentional prioritization of individual, community, and societal ethical values.

The IEEE Global Initiative has two primary outputs. First, the creation and iteration of a body of work known as "Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems"5. Second, the identification and recommendation of ideas for Standards Projects focused on prioritizing ethical considerations in A/IS.

Version 1 of Ethically Aligned Design (EAD) was released in December of 2016 as a Creative Commons document so any organization could utilize it as an immediate and pragmatic resource. Launched as a Request for Input (RFI) to solicit response from the public in a globally consensus-building manner, the document received over two hundred pages of feedback by the time of the RFI's deadline.

4https://mashable.com/2015/10/03/ethics-artificial-intelligence/

5http://standards.ieee.org/news/2016/ethicallyaligned design.html

Ethically Aligned Design, Version 2 features five new sections in addition to updated iterations of the original eight sections of EADv1. The IEEE Global Initiative has now grown from 100 AI/ethics experts to more than 250 individuals, including new members from China, Japan, South Korea, India, and Brazil, and EADv2 now contains over 120 key Issues and Candidate Recommendations. Version 2 was also launched as a Request for Input. (You can download Ethically Aligned Design, Version 2 at this link: http://standards.ieee.org/develop/indconn/ec/auto sys form.html)

The Mission of The IEEE Global Initiative is to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity. By identifying A/IS-oriented Principles and creating Standards directly relating to the challenges brought about by the widespread use of A/IS, The Initiative hopes to complement and evolve how engineers create technology in the algorithmic age.

The IEEE P7000™ Series of Approved Standardization Projects

Along with creating and evolving Ethically Aligned Design, members of The IEEE Global Initiative are encouraged to recommend Standards Projects to IEEE based on their work. Below are the titles of each of these approved IEEE Standards Projects; more information is available via the link following the list:

The IEEE P7000™ series of standards projects under development represents a unique addition to the collection of over 1300 global IEEE standards and projects. Whereas more traditional standards focus on technology interoperability, safety, and trade facilitation, the P7000 series addresses specific issues at the intersection of technological and ethical considerations. Like their technical standards counterparts, the P7000 series projects empower innovation across borders and enable societal benefit.

There are currently thirteen approved Standards in the Series, incorporating key issues within the autonomous/intelligent and ethical realm, including transparency, data access and control, algorithmic bias, robotic nudging, well-being, and more:

• IEEE P7000™ - Model Process for Addressing Ethical Concerns During System Design
• IEEE P7001™ - Transparency of Autonomous Systems
• IEEE P7002™ - Data Privacy Process
• IEEE P7003™ - Algorithmic Bias Considerations
• IEEE P7004™ - Standard on Child and Student Data Governance
• IEEE P7005™ - Standard on Employer Data Governance
• IEEE P7006™ - Standard on Personal Data AI Agent Working Group
• IEEE P7007™ - Ontological Standard for Ethically Driven Robotics and Automation Systems
• IEEE P7008™ - Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
• IEEE P7009™ - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
• IEEE P7010™ - Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
• IEEE P7011™ - Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
• IEEE P7012™ - Standard for Machine Readable Personal Privacy Terms

For further information, please see: https://ethicsinaction.ieee.org

The Future

We are deeply fortunate to have a fantastic Executive Committee made up of representatives from UNESCO, The Partnership on AI, private sector industry, and many more. Sven Koenig of SIGAI (ACM's Special Interest Group on Artificial Intelligence) is also a member of our Executive Committee. Along with benefitting from Sven's deep expertise in AI, it has been fantastic to see ACM's efforts in the AI space, including their groundbreaking "Statement on Algorithmic Transparency and Accountability6".

It is with these thought leaders that we've only recently completed our plans for how we'll complete the final version of Ethically Aligned Design. When looking at the document, you'll note that each of the thirteen committees has listed "Issues" and "Candidate Recommendations." Initially we were using the term "concerns" instead of "issues," but Francesca Rossi (who's on our Executive Committee) made the excellent point that we didn't want an entire paper composed only of "concerns." (We made that change before publishing Version 1 of Ethically Aligned Design.)

The idea of "Candidate" Recommendations (if memory serves) came from Richard Mallah of FLI. Rather than have EADv1 make it seem like we had finalized our thoughts on any particular subject, this process let us release EADv1 and EADv2 in a public Request for Input process. We received over two hundred pages of feedback for EADv1 (which you can see here: http://standards.ieee.org/develop/indconn/ec/rfi responses document.pdf) and are currently getting feedback for Version 2. This is a unique process for IEEE, one that mirrors the consensus-building found in their Standards creation and other processes. For us, we wanted to make sure not to imply that a group of largely Western A/IS experts could define ethics in one fell swoop.

A lot of the feedback we received for Version 1 came from people outside of the US and the EU letting us know it was important to include non-Western ethical ideas in future versions. We agreed, and we ended up inviting the people providing feedback, along with a number of other global thought leaders from China, Japan, South Korea, Brazil, India, Mexico, Thailand, and Africa, to join our work. A number of people from those countries even translated the introduction of EADv1 into their own languages (which you can see here: http://standards.ieee.org/develop/indconn/ec/ead v1.html).

6https://www.acm.org/binaries/content/assets/public-policy/2017 usacm statement algorithms.pdf


For our final push as we move forward, we'll be working to add more members from regions and societal constituencies we may have missed or from which we need more representatives. We are excited to have recently added a High School Committee made up of about twenty students from the great organization AI4ALL, plus we'll be working with the IEEE Young Professionals to increase diversity of age and orientation as well as regionalization.

From this point on, along with updating content for Ethically Aligned Design, all Members will be asked to vote at various points to finalize our "Candidate Recommendations." We'll also be refining our list of General Principles to use as the criteria to help Committees decide which "Issues" align with those Principles, so our final document will be a cohesive whole united by our overall philosophy of "Advancing Technology for Humanity" (that's IEEE's tagline) and Prioritizing Human Well-being with Autonomous and Intelligent Systems (the subtitle of Ethically Aligned Design).

Our goal is to publish the final version of EAD around Q2 of 2019. We'll also be releasing a number of white papers focusing on Committee content over the next few months, along with videos from members and a few big surprises planned for when we launch the final version.

So, stay tuned, and consider joining our ranks as a Member of The Initiative or in one of the IEEE P7000 Working Groups. We would greatly welcome any ACM Members to help us shape the future of A/IS ethics principles and standards.

John C. Havens is the Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (http://standards.ieee.org/develop/indconn/ec/autonomous systems.html). To get involved or learn more, please email John.


AI Amusements: Computer Elected Governor of California
Corpus Legis trounces human opponents in state election
Michael Genesereth (Stanford University; [email protected])
DOI: 10.1145/3203247.3203256

California today became the first state in the Union to elect a computer as governor. The independent candidate, Corpus Legis, handily defeated its human opponents in a special election following the retirement of former governor Jerry (Moonbeam) Brown.

The election was seen by some as a referendum on Brown. Many voters had criticized Brown for his ill-considered promotion of pet projects of dubious merit, such as High Speed Rail. Others were disturbed by his tendency to ignore laws he did not like, an increasingly common trend in Californian politics.

Exit polls suggest that Legis voters were swayed by the comparative potential for rationality and fairness offered by machines. For some, there was a strong belief in the need for a new approach to governance in a nation divided by the pettiness and partisan bickering of human politicians. Also, as one voter pointed out, the new governor, being a machine, will be able to work for the state 24/7.

Legis comes to the office with significant bona fides. It was the first chairman of the computer science department at Stanford. It was the first machine to sit on the Palo Alto City Council, before becoming the city's first electronic mayor. And it was the first machine to pass the state bar exam.

That said, the road to the governor's mansion was not an easy one. Legis, a rule-based system, first had to survive a brutal primary election in which it was pitted against another computer candidate, Alpha (Google) Watson, a machine learning system. Early in the season, Watson enjoyed a comfortable lead in the polls based on its many scientific and commercial successes and its flashy marketing. During the election, Watson pummeled Legis with numerous hypotheses derived from its analysis of big data.

However, the tide turned against Watson when it could not explain its positions beyond citing statistics about how things had been done in the past. It stumbled further when, due to an absence of relevant data, it was unable to say how it would implement a new law. Finally, it lost credibility when, on the basis of its statistical analysis, it theorized that Legis was actually Antonin Scalia, ignoring the fact that Scalia had died several years before. Evidently, Watson was unaware that Scalia's death invalidated its theory, most likely due to its lack of background knowledge.

Copyright © 2018 by the author(s).

The results of the election are not without controversy. Multiple court challenges have already been filed by citizens alarmed at the prospect of an AI governor. Stephen Hawking suggests that it is another example of AI beginning to dominate the world. Bill Gates argues that, since the system was developed on an Apple computer, it is just a toy. At the same time, various luminaries have expressed their support. The Academy of Motion Picture Arts and Sciences lauds the election as an example of increasing diversity. And, in an unexpected turn of events, Elon Musk says it is a welcome development, suggesting that we need to be a "multi-technology species" to deal with the possibility of biological extinction.

Corpus Legis will be sworn in next month. Provided that the election survives the court challenges. And provided that the government can figure out how to swear in a machine.

Michael Genesereth is a professor in the Computer Science Department at Stanford University. He is best known for his academic work on Computational Logic. However, he also writes the occasional news article to keep the general public informed about significant developments related to that work.
