
Rise of the Machines

It’s made our lives easier, but will our love of technology come back to bite us? By Helen O’Neill

Imagine a world where computers are smart enough to write their own programs, and take charge not only of their own future but of ours as well. It may sound like the stuff of science fiction but for Australian philosopher Huw Price the possibility is visibly on the horizon – and it’s something we should all confront. Price found himself forced to contemplate the dangers posed by artificial intelligence (AI) when he took a ride in a Copenhagen cab in 2011.

Visiting Denmark for a conference, the philosopher agreed to share his taxi with Skype co-founder Jaan Tallinn, who explained to him a very troubling theory.

The entrepreneurial Estonian computer programmer told Price he thought himself as likely to die from some kind of artificial intelligence-related incident as from cancer or heart disease. His argument (see Tallinn’s Timeline, below) was simple: he had spent his life working with cutting-edge technology and knew what could be achieved.

Price, who was about to begin his new job as the University of Cambridge’s Bertrand Russell Professor of Philosophy, took Tallinn seriously. So seriously that last November, the pair – along with Britain’s Astronomer Royal, Sir Martin Rees, a Cambridge professor of cosmology and astrophysics – proposed a landmark venture: the Centre for the Study of Existential Risk (CSER).

“Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole,” they announced of the new centre, based in Cambridge. “Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic [human-caused] climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.”

“Thinking” robots

Science fiction has long canvassed the possibility of a world taken over by technology. Blockbuster Hollywood movies bring such fantasies to life: evil machines taking over the world (the Terminator series, Tron); good machines saving the world (Wall-E); sentient artificial intelligence gone mad (2001: A Space Odyssey). And the list goes on.

But reality is catching up. In the US, sales of robotics are approaching US$2 billion a year in Massachusetts – where the leading-edge Massachusetts Institute of Technology (MIT) and more than 100 robotics companies are based.


This year’s industry-only Consumer Electronics Show in Las Vegas was awash with high-tech gadgetry – some of it extremely useful and much of it highly seductive.

Innovations included the Winbot window-cleaning automaton, a robot masseuse, a “smart fork” capable of monitoring its user’s eating patterns and the Pebble – a watch able to run apps and alert its wearer to emails.

Audi presented a car able to park itself inside a multi-storey car park, but it was Toyota’s ultra-futuristic Lexus LS sedan that really stole the show. This vehicle sported electronic safety systems, a satellite connection, infra-red projectors, cameras, radars and a “nervous system” so advanced that the Japanese manufacturer announced the car has the ability to drive itself – although so far, they won’t allow it.

Dubbed an “autonomous” car, the ultra-futuristic Lexus LS sedan monitors hazards

“On face value, the vehicle can drive itself, but its real value is safety,” says Tyson Bowen of Lexus Australia. “It can monitor the road ahead, the road below and the conditions around [in order to] make a judgment.” Think of the car’s artificial intelligence as an attentive co-pilot. “If a car ahead stops and the driver hasn’t noticed it … this vehicle can warn the driver and then step in with the brakes, and either slow or stop the car before an incident occurs.”
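The warn-then-brake behaviour Bowen describes can be sketched as a simple decision rule: estimate how long until the gap to the car ahead closes, warn an inattentive driver first, and brake automatically only once there is no time left to rely on the driver. The Python sketch below is purely illustrative – the thresholds, names and logic are invented for this article and do not describe Lexus’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    gap_m: float             # distance to the vehicle ahead, in metres
    closing_speed_ms: float  # how quickly that gap is shrinking, metres per second
    driver_attentive: bool   # e.g. from the eye-movement monitoring Bowen mentions

def assist(s: Situation) -> str:
    """Return the action a warn-then-brake assistant might take (toy logic only)."""
    if s.closing_speed_ms <= 0:
        return "do nothing"                    # gap is steady or growing
    time_to_collision = s.gap_m / s.closing_speed_ms
    if time_to_collision < 1.0:
        return "apply brakes"                  # too late to rely on the driver
    if time_to_collision < 3.0 and not s.driver_attentive:
        return "warn driver"                   # hazard ahead, driver hasn't noticed
    return "do nothing"

# The car ahead has stopped and the driver isn't watching: warn first...
print(assist(Situation(gap_m=30.0, closing_speed_ms=15.0, driver_attentive=False)))  # warn driver
# ...and brake if the gap keeps closing.
print(assist(Situation(gap_m=10.0, closing_speed_ms=15.0, driver_attentive=False)))  # apply brakes
```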

According to Bowen, the last four years have seen huge “technology flow-down” in cars coming onto the market, with the ability to brake in emergency situations, reverse parallel park or monitor their driver’s eye movements.

“I don’t think in the immediate future we are going to see cars without people operating them, but who knows what will happen in the future,” he adds.

Legislation in the US is ahead of him. Last September California became the third state to legalise driverless cars on public roads, after Nevada and Florida. No such legislation is, as yet, being proposed in Australia, and enthusiasm for such developments is far from universal.

Not surprisingly, then, Price and his colleagues at the Centre for the Study of Existential Risk have been outspoken about how serious the possibility of a “Pandora’s Box” moment may be. They worry that handing too much power to machines that can think on their own could be catastrophic for humanity.

And they are not alone in their concern. Late last year the international advocacy group Human Rights Watch (HRW) released a 50-page report titled Losing Humanity: The Case Against Killer Robots. Co-published by Harvard Law School’s International Human Rights Clinic, it called for an international ban on the development, production and use of “thinking” weapons able to choose and fire on targets at will – a technology some experts predict is possible within two to three decades.


“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” says Steve Goose, arms division director at HRW. “Action is needed now, before killer robots cross the line from science fiction to feasibility.”

For a good cause

Away from the arena of whiz-bang cars and gadgets, there are some seriously ambitious neuroscience ventures under way. Researchers at Europe’s Human Brain Project have announced an attempt to create a simulation of the human brain in a massively powerful supercomputer.

Making sense of the most mysterious and complex machine known to exist could revolutionise both medicine and computing, but these scientists are a long way off that. As the project’s website states: “Today, simulating a single neuron requires the full power of a laptop computer.”

Their goal is to create new ways of dealing with brain diseases (think autism, depression, Alzheimer’s and Parkinson’s) and physical disabilities, something Professor Hung Nguyen, Dean of the Faculty of Engineering and Information Technology at Sydney’s UTS, understands well.

Nguyen heads a team perfecting Aviator, a mind-reading device that can make decisions based upon what its “host” is thinking. Aimed at the severely disabled, Aviator is integrated into a smart wheelchair named TIM (Thought-Controlled Intelligence Machine), which is controlled by visualising ideas.

“To move to the right you do arithmetic – it has an ‘r’ association [so] ‘r’, arithmetic, right,” Nguyen explains. “To go left [you] compose a letter – ‘l’, letter, left. To stop, just close your eyes.”
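In software terms, the scheme Nguyen describes amounts to a small lookup from a classified mental task to a wheelchair command. The Python sketch below is illustrative only – the names and the default behaviour are assumptions made for this article, not UTS’s actual implementation – and it leaves out the hard part, the brain-signal classifier itself.

```python
# Sketch of the mental-task-to-command mapping Nguyen describes for TIM.
# All names are invented for illustration; the classifier that would
# produce `mental_task` from brain signals is not shown.

COMMANDS = {
    "arithmetic": "turn right",     # 'r' association: arithmetic -> right
    "compose_letter": "turn left",  # 'l' association: letter -> left
    "eyes_closed": "stop",          # closing the eyes stops the chair
}

def wheelchair_command(mental_task: str) -> str:
    """Map a classified mental task to a wheelchair command.

    Unrecognised tasks fall through to "continue" – an assumption made here
    so the example is complete, not something the article specifies.
    """
    return COMMANDS.get(mental_task, "continue")

print(wheelchair_command("arithmetic"))   # turn right
print(wheelchair_command("eyes_closed"))  # stop
```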

The next stage is attracting financial support, says Nguyen, who is also working on a thought-controlled car, something that ushers in “a lot of implications” but whose inevitability is, he believes, “just a matter of time”.

Nguyen, who has worked in biomedical engineering and AI for more than two decades, should know. About 15 years ago, after being surprised by the self-learning intelligence of a chess-playing robot he was working on, he started to realise how combining AI advances with neuroscience might help those with severe disabilities. “My focus is people with medical conditions. To [allow them] to have independence is my moral priority, rather than enhanced strength of people.”

Says Nguyen, “If you don’t like the look of the future, it is our responsibility to change it.”

“There’s a lot to be positive about,” adds Huw Price. “But there are likely to be risks, too, and it would be dangerous to let our optimism about the benefits blind us to those risks. We want our grandchildren to be around to enjoy the benefits, after all.”

Tallinn’s Timeline*

1965 | British mathematician Irving John Good speculates that the creation of a machine smart enough to design even better machines would climax in an “intelligence explosion”. “The first ultra-intelligent machine [would be] the last invention that man ever need make.”

May 11, 1997 | IBM supercomputer Deep Blue beats chess grandmaster Garry Kasparov.

Oct 8, 2005 | Five robotic vehicles complete a 212km race sponsored by the US’s Defence Advanced Research Projects Agency. The fastest DARPA Grand Challenge entry, a self-driving Volkswagen called Stanley, won its creators the largest prize in robotic history – US$2 million.

March 18, 2008 | Boston Dynamics releases video footage of BigDog, a four-legged animal-like rough-terrain robot able to run, walk, climb and shoulder heavy loads.

August 23, 2009 | The US Air Force “is training more drone operators [remote or computer-controlled pilots of unmanned aerial vehicles] than fighter and bomber pilots”, according to Britain’s The Guardian newspaper.

May 6, 2010 | A “Flash Crash” in which the US’s Dow Jones Index plunges dramatically then recovers in minutes is caused by the unexpected activity of automated trading software.

February 17, 2011 | IBM supercomputer Watson beats Jeopardy champions Brad Rutter and Ken Jennings.

October 4, 2011 | Apple releases the iPhone 4S with Siri software, a reliable voice recognition system contained in a mass-market product. Observed Tallinn, “We [now] live with computers that can understand what we say.”*

*Adapted from Jaan Tallinn’s Sydney Ideas presentation, July 2012


Top: Chess master Garry Kasparov is beaten by IBM supercomputer Deep Blue; BigDog gets a pat from his owner; Siri patiently awaits your instruction

readersdigest.com.au 09/13