Machine Perception
Author: Margaret A. Boden
Source: The Philosophical Quarterly, Vol. 19, No. 74 (Jan., 1969), pp. 33-45
Published by Oxford University Press on behalf of the Scots Philosophical Association and the University of St Andrews
Stable URL: http://www.jstor.org/stable/2218186
MACHINE PERCEPTION

BY MARGARET A. BODEN
I
Could a machine perceive? It seems to be currently fashionable for philosophers, implicitly or explicitly, to deny the possibility of a percipient machine. Sometimes they produce a verbal argument, to the effect that we should never call one and the same individual both percipient and a machine, thus echoing Wittgenstein's remark in the Investigations:

360. But a machine surely cannot think. Is that an empirical statement? No. We only say of a human being and what is like one that it thinks. ...
This type of argument I find unpersuasive. Even if one accepts, which I should be reluctant to do, that there is some basic contradiction in combining the terms 'percipient' and 'machine', this contradiction surely arises because our language was formed in a culture ignorant of advanced technology; in this case, logical absurdity may be a poor test of empirical possibility. Similarly, I am unconvinced by parallel arguments claiming that no machine whatever could, without contradiction, be said to use language, to assert propositions, to answer questions and so on. I shall not discuss this type of argument further; nor shall I discuss those arguments which rest upon the notion of some private, inner process essential to perception but denied to machines.
Sometimes, however, arguments are produced claiming to show that in principle no machine could perceive, because a percipient being must be able to do some specific things which a machine just could not do. This type of argument is partly conceptual, for it requires at least a rudimentary analysis of the concept of perception, and partly empirical, in that it makes claims about what machines can and cannot do: in terms, that is, of possible outputs, not merely of what are to be approved as justifiable descriptions of those outputs. Moreover, it is said, if we apply psychological words to the behaviour of machines, we confer courtesy-titles, and such courtesy is excessive and dangerous, for machines are in principle so far removed from the really important and interesting human capabilities that psychologists are wasting their time if they try to produce explanations based on machine research. An example of such an argument is to be found in a recent article by Mr. Alan Gauld,1 and I shall refer to some points in Mr. Gauld's paper as examples of the point of view I wish to rebut.
What, then, is it to perceive an object? Is there anything intrinsic to the concept of perception which could not, in principle, be predicated of a machine? Discrimination is certainly a necessary part of percipient behaviour. The Modern Times machine dealt with Charlie Chaplin in exactly the same way as with its more conventional raw materials, and there is thus no reason whatever for saying that it could perceive any difference between them.

1 "Could a Machine Perceive?", Brit. J. Phil. Sc., Vol. XVII, Part I, May 1966.
A creature which is to discriminate between one class of things and another, for instance between ducks and rabbits, must be able to recognize a given animal as a duck. But, like lions, all ducks are different, and thus some form of stimulus-generalization must take place; any machine which may be said to perceive must have some way of unifying differing inputs so as to give equivalent outputs: it must be able to recognize different ducks as ducks, different rabbits as rabbits. As we shall see, Mr. Gauld argues that no such power could be built into a machine.
But discriminatory behaviour, while an essential part of the concept of perception, is not enough: a rather more modern machine, which consistently threw Chaplin off the conveyor belt, would not on that account be said to perceive him.
Mr. Gauld states the basic problem as "how we come to be able to recognize a chair as a chair, a tomato as a tomato, and so forth".2 In this he follows all those philosophers who have analysed perception in terms of judgment, of concepts, of knowledge that. He refers us to a rat, no doubt imaginary, who is trained to eat pound-notes whenever he comes across them. This is certainly discriminatory behaviour, but not sufficient reason for saying that the rat perceives pound-notes, if this is to mean that he perceives them as pound-notes. For this, the note has also to be seen as something of value, as something relevant to exchange, bribery and corruption, as legal tender in a complex monetary system, all of which high-level notions are foreign to rats, as also to children and idiots. In order to decide whether a creature can perceive an X as an X we have to ask whether the creature possesses not only the concept of X, but a matrix of related concepts. Seeing a tomato as a tomato involves more than seeing it as edible, for one must see it as a tomato and not as an apple, as a vegetable which was once alive, and so on; it follows that no animal other than man could see a tomato as a tomato, since verbal behaviour would be required to express some of these distinctions.
Granted that animals cannot be taught anything which will pass as a respectable analogue of language: what about machines?
Here Mr. Gauld argues that machine analogues of language are not respectable enough to allow for perception, since no machine could ever extrapolate word-labels to new, physically dissimilar instances of a class, and a fortiori could not build up its own concepts by recognizing the unity in various parts of the environment and abstracting rules representing "common properties" in the various physical situations said to be instances of the concept. There is no physical feature common to all instances of "exchange" or "game", and even seeing something as a tomato requires the possession of a number of similarly high-level concepts. This freedom from specific features of the physical environment seems to be what is meant by the expression 'high-level concepts', and I shall discuss the argument that no machine could ever possess such concepts in the next section.

2 Op. cit., p. 45.
II
In specifying what he means by 'machine' Mr. Gauld refers us to a concept which is commonly used in the literature: that of a finite-state machine (which Turing, in his paper Computing Machinery and Intelligence, called a discrete-state machine). It is possible to regard any machine as a finite-state machine, i.e., a system which must at any moment be in one out of a finite number of possible states, and where there are rules which specify how its various possible states are to succeed one another. The rules may be probabilistic, and may even include a few randomizing operations, but in general it is possible to specify the state of and the input to such a system and to apply these rules so as to generate the succeeding state (which may or may not function as the next input). Inputs may be internally or externally generated, and may be arbitrary with respect to the present state of the system. To the extent that inputs are internally generated, the system as a whole is autonomous with respect to the environment. Thus a matrix linking possible states in terms of the rules can in principle be drawn up; this is known as a machine-table, and any such matrix can be represented in a digital computer. Thus any finite-state machine (which class includes at least some living things) can be embodied in a digital computer.
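
To fix ideas, here is a minimal sketch of such a machine-table, written in modern Python; the states, inputs and probabilities are invented for illustration, and one probabilistic rule stands in for a randomizing operation.

```python
import random

# A machine-table: for each (state, input) pair, a rule giving the next state.
# A rule may be deterministic (a single successor) or probabilistic
# (a list of (successor, probability) pairs); both fit the definition above.
MACHINE_TABLE = {
    ("idle", "coin"):       "ready",
    ("ready", "lever"):     "dispensing",
    ("dispensing", "tick"): [("idle", 0.9), ("jammed", 0.1)],  # randomizing operation
    ("jammed", "kick"):     "idle",
}

def step(state, symbol):
    """Apply the rules to the present state and input to generate the successor."""
    rule = MACHINE_TABLE.get((state, symbol), state)  # unlisted input: state unchanged
    if isinstance(rule, list):                        # probabilistic rule
        successors, weights = zip(*rule)
        return random.choices(successors, weights=weights)[0]
    return rule

state = "idle"
for symbol in ["coin", "lever", "tick"]:
    state = step(state, symbol)
    print(symbol, "->", state)
```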
Can we embody concepts in such a machine, so that discriminatory behaviour can be classified as perception, the object being seen as an instance of the concept corresponding to the word-label applied by the machine?
Mr. Gauld admits that in principle we could observe Jones's behaviour expressing his concept of "game" and build in each instance of Jones's behaviour in terms of machine-state and machine-input. The result would be a complex hotch-potch of rules, a straggly matrix drawn up by enumeration of instances of Jones's behaviour. Mr. Gauld says that any person of moderate sophistication would be able to see that the rules belonged together, possessed a certain unity. But in what, he asks, could this unity consist? "Certainly not in physical similarities among the external situations and among the bodily states to which the rules refer."3 It is only because we already have the concept of game that we can recognize all these rules as pertaining to games; and no new game, physically unlike any that had gone before, could be recognized as a game by such a machine, whereas a human being can do this, and parents and anthropologists commonly do. In sum, no "high-level" concept (indeed, no concept at all) can be expressed in or acquired by learning, as a machine might, a set of rules about how to react in each of a given series of physical situations. "The concepts may in some sense be created out of such rules; but they also transcend and unify them."4 It is assumed that no machine could generate its own rules for responding to the physical similarities in its environment, nor extrapolate any rules written into its programme by us. It would seem to follow that no machine could identify ducks or rabbits as members of two classes, because it would not be capable of the stimulus-generalization required to recognize differing individuals as being of one and the same type. Since we cannot specify precisely in advance the various clusters of physical characteristics which as input-state should invite the output "rabbit", we cannot draw up a machine-table for this task, and hence it is not a task which could be performed by a machine. Still less, on this view, could a machine be expected to cope with ambiguous figures such as duck-rabbits.

Mr. Gauld's view of machine-behaviour as thus limited and inflexible is linked with his view that finite-state systems, but not human behaviour, are amenable to mechanical explanation: explanation which could be expressed in terms of a machine-table, whereby the occurrence of a given state is shown to depend upon the prior occurrence of other states. This complaint about the rigidity and narrowness of machine behaviour may seem plausible in view of the nature of many existing programmes, but it can be countered by some of the more recently developed programmes, and certainly cannot be deduced from the concept of a finite-state system.

3 Op. cit., p. 53.
4 Ibid., p. 54.
A finite-state system is one in which each state is determined by the preceding state, even if the determination is by way of a randomizing operator; thus in principle a machine-table exists which will fully describe the succession of states of the system in various conditions. But this is not to say that we actually could write the machine-table for any given machine, still less that we could do so before the machine was built and functioning so that we could foresee every detail of its behaviour. The machine-table can be guaranteed only as a conceptual device, as an extrapolation from the basic postulate of determinism as applied to the physical states of the machine, from the assumption that the mechanism works in accordance with known physical laws. In most actual cases the complete machine-table will exist for us only in the sense in which the complete story of the cosmos existed for Laplace. We may choose to define 'state of the system' in molar, or behavioural, terms, speaking justifiably of "one and the same" output or initial state in cases where the molecular, or mechanical, specifications would be different. This will mean that, while a complete mechanical explanation is in principle always possible, we may not be able to give it, and the terminology we find it most convenient to use may not be fitted to reflect the level of detail such an explanation would require.
Unless one is prepared to deny that the brain and associated bodily functions are subject to familiar types of natural laws (which leaves open the possibility of randomness arising in some, specifiable, conditions), then one must regard the brain as (part of) a finite-state mechanism. If one is then to deny that human behaviour can be generated by such a system, or, in a sense, explained in terms of it, one must recognize that one is denying the postulate that our behaviour is in some sense fully dependent on our bodies, on a series of events which could in principle be described by a biologically competent super-Laplace. Despite his strange remark about transcendence which I quoted earlier, I hesitate to ascribe such a denial to Mr. Gauld, for I doubt whether his psychologist's suspicion of the physiologist goes quite this far. Perhaps he merely means that the psychologist uses a language consisting of organizational or behavioural terms, terms which commonly provide explanations, and which cannot be reduced to physiological concepts.
However, the case is parallel when we consider machines, for if we wish to explain the behaviour of a computer we can do so in either of two ways: we can refer to its mechanism or we can refer to its programme. These two types of explanation are not reducible to one another, although the instantiation of any programme in a particular case is of course always wholly dependent upon the detailed mechanism of the material object, the machine. If we were to give a complete description of the structure and history of the machine, in terms of its individual components and the physical inputs it receives, there is a sense in which we should have described everything that was there, every causal factor contributing to its behaviour. This is the account which the Laplacean engineer could, in principle, give us, and we could use this means to explain the behaviour; given a certain output we could list the various changes in the components involved in producing it. If the machine behaves in an unexpected fashion, for instance if it makes a mistake such as giving the wrong label to an apparently unambiguous stimulus-figure, or failing to solve a mathematical problem of a type which usually presents it with no difficulty, then we may explain such behaviour by opening the machine and literally finding a spanner in the works, an unusual connection of wires, or a short circuit. In principle this type of explanation will always be in order, for there will always be some physical basis for the strange behaviour. However, in many cases a quite different type of explanation will be in order also, will indeed be more illuminating, and this will involve reference to the programme.
The concepts of mechanism and programme are independent of one another, and a person can understand the nature of a given programme and even criticize or write one without any knowledge of electronic hardware. A programme is written before it is instantiated in any particular machine, and it may be built into machines differing greatly in their basic mechanism. This is so because a programme is a logical or behavioural as opposed to a physical concept; it is expressed in terms of notions such as equal, greater than, average, and operations such as search, compare, test, repeat, increase, delete . . . and the more sophisticated programmes can be described in terms of concepts such as goal, sub-goal, means to an end, heuristic selection of criteria or operators, etc. In the example mentioned before, of the machine's apparently making a mistake, we should be wise to look carefully at the logical structure or plan of the behaviour and to see whether similar mistakes were made in similar problem-situations. This investigation might suggest, for instance, that a certain instruction in the programme had been wrongly expressed by the programmer so that there was a systematic mistake in situations where that instruction was called upon; one example of this would be an infinite loop in the programme under certain (logical) conditions, that is, a rule or series of rules in the programme would be applied recursively, so that the solution of the problem would be blocked in a characteristic fashion.
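
A toy illustration of such a fault (an invented example, not one drawn from any actual programme): a rewrite rule wrongly expressed by its programmer, which under one logical condition is applied recursively and blocks the solution.

```python
def simplify(expr):
    """Toy expression simplifier. Expressions are strings or ('neg', expr) pairs."""
    if isinstance(expr, tuple) and expr[0] == "neg" and isinstance(expr[1], tuple):
        # Intended rule: eliminate double negation, ('neg', ('neg', x)) -> x.
        # Wrongly expressed: it re-wraps instead of unwrapping, so the rule
        # fires recursively and the computation never terminates.
        return simplify(("neg", expr))        # BUG: should be simplify(expr[1][1])
    return expr

# simplify(("neg", ("neg", "p"))) loops until Python raises RecursionError.
# At the hardware level one could only say that certain circuits are
# continually activated; the programme-level description locates the fault.
```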
These hypotheses, drawn from observation of the behaviour of the machine, could then be checked by referring directly to the programme, if this were available in the form of a print-out from the machine; if the machine were one which can modify its own programme, but made no record of such modifications which could be printed out directly, then we might have to be content with behavioural confirmation of our hypothesis alone. To be sure, if there is an infinite loop in a programme then it will be true to say, for instance, that such-and-such a circuit tends to be continually activated under such-and-such conditions; but even if we are able to provide a description at this level it will not enable us to see the logical or behavioural structure of the fault, nor will it necessarily suggest immediately how the fault can be put right. Both ways of describing what is going on are useful, and each carries with it its own form of explanation.
If we consider the programmes developed in an attempt to simulate pattern-recognition, we find that many of them do have the failings which Mr. Gauld mentions. But there are some which are a good deal more sophisticated, and which cannot be so easily faulted. One of these is a programme developed by Uhr & Vossler,5 which will enable a machine to learn to recognize line-drawings, including letters, numbers and cartoon-figures. Each input-pattern is assessed in terms of a number of criteria, or "operators", and these determine the label chosen to classify the pattern. Basically the operators used reflect physical characteristics of the pattern, such as the average number and/or position of certain shapes, but the first-level characteristics can be combined with one another in various ways to form more sophisticated, high-level, characteristics. Thus the recognition is always based on physical features of the patterns, but cannot necessarily be reduced to a straightforward list of physical characteristics easily expressed in simple geometrical terms, such as 'three-sided rectilinear figure'. The concept of "physical similarity" is by no means a clear one: one can programme a computer to deal with topological relationships, and even to recognize geometrical and topological analogies between diagrams of the type commonly used in intelligence tests, but I suspect that many people who think of machines as limited to the recognition of mere physical similarities might not think of including such relationships in this category. In the Uhr & Vossler programme verbal definition of any given operator, even a first-level operator, may be difficult, and a fortiori it may be impossible to deduce the exact nature of the operators used from an examination of the labels actually applied to the various input-patterns. Such classificatory behaviour may indeed seem to "transcend" the physical characteristics, while being based upon them.

5 Uhr, L., and Vossler, C., "A pattern recognition program that generates, evaluates, and adjusts its own operators", in "Teleological Mechanisms", Annals New York Acad. Sc., 50: 189 (1961), pp. 555-569. Reprinted in E. A. Feigenbaum & J. Feldman (eds.), Computers and Thought (New York, 1963), pp. 251-268.
The operators used may be, but need not be, pre-programmed; they can be initially generated at random by the machine, and then selected on the basis of their success, and can be weighted differentially so that those most useful in discriminating between different patterns are relied on more heavily thereafter. The machine will also tend to select those operators which are particularly useful in cases of difficulty, where two patterns may be easily confused. The number of operators relied upon for each discrimination can be varied, and so also can the extent of "memory" for past examples, so that the groups of operators responsible for the application of a given label in different cases may be related to one another by shifts in membership similar to the shifting of criteria which Wittgenstein compared to family-resemblances.
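
The following sketch is a loose paraphrase of that loop, not Uhr and Vossler's actual code (their operators and weighting scheme were far more elaborate): operators are generated at random as sets of probe points on the input grid, and their weights are adjusted differentially according to their success.

```python
import random

GRID = 20                     # patterns are GRID x GRID grids of 0/1 pixels
LABELS = ["duck", "rabbit"]   # stand-in labels; the real program learned letters etc.

def random_operator(n_probes=5):
    """Generate an 'operator' at random: a small set of probe points on the grid."""
    return [(random.randrange(GRID), random.randrange(GRID)) for _ in range(n_probes)]

def fires(op, pattern):
    """An operator fires if most of its probe points land on 'on' pixels."""
    return sum(pattern[y][x] for (x, y) in op) > len(op) // 2

operators = [random_operator() for _ in range(30)]  # initially generated at random
weight = {(i, lab): 1.0 for i in range(len(operators)) for lab in LABELS}

def classify(pattern):
    """The label is chosen by the weighted vote of whichever operators fire."""
    active = [i for i, op in enumerate(operators) if fires(op, pattern)]
    return max(LABELS, key=lambda lab: sum(weight[(i, lab)] for i in active))

def feedback(pattern, true_label):
    """Differential weighting: operators that fired are trusted more for the
    correct label and less for the others (the trainer's 'Yes, that's right')."""
    for i, op in enumerate(operators):
        if fires(op, pattern):
            for lab in LABELS:
                weight[(i, lab)] *= 1.1 if lab == true_label else 0.9
```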
Of course, there has to be some criterion of successful labelling other than the labelling response itself; this may, and in the present programme does, come from the judgment of a human operator (compare the parent saying "Yes, that's right"), but there could be a programme in which the consequences of wrong labelling were such that the labelling could be retrospectively marked as incorrect, and appropriate adjustments made. This requires that the labelling be made use of by the machine in further tasks, and I shall return to this point later. Although in many pattern-recognition tasks this programme gives results which, while impressive, are inferior to those obtained from human subjects, it is noteworthy that when tested for ability to recognize "nonsensical" doodles, its performance is significantly better than that of any of the human subjects so tested. It is, of course, in these cases that the effect of the greater prior experience of the human subject is minimized.
This programme can generate its own criteria, and evaluate and adjust them in response to the particular features of the patterns it encounters. In principle, the programme could be improved so as to provide for the building up of a matrix of labels linked together both hierarchically and in other ways, so that something labelled as 'a tomato' might be linked not only hierarchically with the label 'vegetable', but also associatively with the label 'Worthing'; similarly, it would not seem too difficult to allow for the generation of analogical or metaphorical uses, such as labelling a certain man as 'a rabbit', either in virtue of facial or of behavioural characteristics.
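
A minimal sketch of such a matrix of labels, with invented links following the text's own examples: each label carries hierarchical and associative links, and the network reachable from a label stands in for its surrounding matrix of related concepts.

```python
# Typed links between labels: 'isa' links are hierarchical,
# 'assoc' links are associations acquired from experience.
label_net = {
    "tomato":    {"isa": ["vegetable"], "assoc": ["Worthing", "edible", "living"]},
    "vegetable": {"isa": ["living"],    "assoc": []},
}

def related(label, net, seen=None):
    """Collect every label reachable from the given one."""
    seen = set() if seen is None else seen
    for links in net.get(label, {}).values():
        for other in links:
            if other not in seen:
                seen.add(other)
                related(other, net, seen)
    return seen

# Two machines with different learning histories would hold different nets
# around the same label, which is one way of putting the question whether
# they have 'the same concept' of tomato.
print(related("tomato", label_net))
```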
It would then be possible to raise such questions as whether two machines with differential learning experiences had the same concept of tomato, even though both could apply the label correctly to actual tomatoes, or pictures of tomatoes. I do not deny that it would require a great deal for us to be able to say that they had the same concept of tomato as ourselves, not least because this would require a machine which could make tactile and chemical discriminations as well as visual ones. However, to admit that the complexity and number of discriminations made by the machine would have to be very great, before one could speak of the same concept with an easy conscience, is not to say that in principle this is impossible of achievement. Insofar as perceiving a tomato requires having a concept of a tomato, together with a network of related concepts, I am not convinced that no machine could be built with such a property.
III
So far I have argued that the fact that a machine may be regarded as a finite-state system does not imply the sort of behavioural rigidity which would prevent it from attaching correct labels to stimulus inputs in various situations, even in cases where the new instance is physically very dissimilar to those it has encountered so far. However, this is not to say that we can straightforwardly apply the concept of perception to any machine capable of attaching the correct6 verbal labels to external objects, for perception involves more than mere discrimination: it involves the use of discrimination in the guidance of voluntary action.
We do not merely characterize or describe what is going on by saying "He perceived the policeman", or "He sees it as a policeman"; we also explain differences in behaviour: if you want to know the time, ask a policeman, but not a wax dummy outside Madame Tussaud's. The explanation is teleological in character, for perception is a purposive notion closely connected with that of instrumental activity. The discriminations are used in the activity: they guide the choice of means to a given end, they direct the strategy for reaching the goal by picking out the appropriate means and sub-goals, and rejecting inappropriate ones.
It is by means of perception that we adjust to the detailed features of the environment when we are performing complex tasks, for to adjust our behaviour in this way just is to make the sorts of discrimination, of differential response to particular features of the environment, which we term perceptual discrimination. To attain certain goals, certain procedures are appropriate; but if you can't tell a hawk from a handsaw you are likely to find some difficulty in fitting these procedures to reality. Your failure can then be explained in terms of your mistaken perception, though the precise nature of that misperception may be inferred from the details of the fruitless behaviour itself.
If a creature reacts differentially to various environments then we may say it is sensitive to changes in the environmental features involved. A limpet on a rock, or a sea-anemone, may be sensitive to sunrise and sunset, or even to the shadow of a boat passing through the water, and this sensitivity may be shown by changes in the muscular tonus of the organism; but to speak of perception here would be out of place. This would require a fairly complex background of behaviour, such that smaller behavioural units of various types can be distinguished from one another, and naturally occur together in series making up larger behavioural units, a description of which gives a description of the activity of the organism. Moreover, those cases of activity to which the concept of perception is particularly appropriate are those where the behaviour is of such flexibility that obstacles to the attainment of the goal are not only likely to be overcome, but overcome with a certain economy of effort, such that different types of obstacle elicit different types of compensatory behaviour, so that the goal is reached as efficiently as possible, granted the motor abilities of the organism. This may be seen by considering examples of complex behaviour which may mislead us, such that we ascribe inappropriate perceptions, and imply the wrong sort of explanation.

6 See p. 44.
For instance, a robin defending his territory in spring will behave in exactly the same aggressive fashion whether we confront him with a male robin or with a piece of red flannel waggling on a stick. The bird's behaviour is not to be explained in terms of its goals and intentions, or its perceptual judgments about the relevance or threatening character of certain environmental intruders. The bird's behaviour is fixed; it is determined by this particular stimulus-sign, by a small part of the total stimulus-array which happens, in normal circumstances, to form part of a second, and possibly rival, robin. In such cases, explanation in terms of finely discriminating perceptions or concepts such as sexual rivalry is superfluous, as explanations are available at a much lower level; this is shown by the nature and inflexibility of the behaviour, and does not depend upon our denying concepts or perceptions to the animals in the absence of language. The bird's sensitivity to a specific feature of the environment is responsible for the occurrence of the behaviour in these conditions rather than in others, but it does not go on to make the further discriminations which would lead to a cessation or modification of the activity so as to fit it more appropriately to the environment. In this case, at least, explanation of the bird's behaviour in terms of its perceptions would seem to be out of place. Moreover, the behaviour is predictable from a knowledge of the environmental conditions alone, i.e., the season and the presence or absence of a particular releasing stimulus, and successful prediction does not require reference to the ongoing behaviour of the bird; it does not require us to ascribe particular purposes or intentions to this individual bird.
In general, while movement may be analysed in terms of muscle-twitches, behaviour must be analysed into conative units, but the purposes or goals involved may be more properly ascribed to the individual organism in some cases than in others; that is, to the organism as an individual, rather than as a member of the class of, e.g., robins.
In those cases where such ascription is most appropriate we may speak of "voluntary" behaviour. In such cases we cannot predict the behaviour from a knowledge of the environmental situation alone, but must explain the occurrence of this goal-seeking behaviour rather than any other which lies within the animal's repertoire, by referring to behavioural patterns which are not closely dependent on specific stimuli from outside, and which may differ in different individuals. This element of autonomous selection requires that there be a behavioural repertoire from which to select, i.e., that the organism is capable of seeking various different goals in various different manners. The more varied the tasks which a creature may perform, the more room there is for the ascription of voluntariness to its activities.
But this ascription will only be in order when the behaviour is flexible in the sense already described, i.e., when different obstacles to the attainment of the goal are encountered and coped with appropriately, so that the goal is reached notwithstanding the difficulties presented by the environment. Naturally, in some cases the goal will not be reached, but the more the behaviour shows changes describable as attempts to reach the goal, the more readily may we speak of voluntary behaviour. Some degree of persistence is therefore required for voluntary action, but if inappropriate behaviour is persisted in then we may not regard the behaviour as voluntary, particularly if we believe the organism to be capable in other conditions of making the discriminations required to show that and how the behaviour is inappropriate: compare, for example, the persistence of a very young child trying to fit a piece into a jigsaw-puzzle inappropriately with that of a panic-stricken man trying to get out of a room by the locked and solid door, instead of breaking out through the window.
Stereotyped and rigid behaviour, which is unresponsive to unusual features of the environment, such that the organism may easily be cheated of the goal, may be seemingly very complex, as in the courtship and nest-building behaviour of some birds, but is clearly different from behaviour which is guided by perception of the environment and which varies according to such discriminations so as to maximize the probability of reaching the goal.
This is a conceptual matter: I am claiming that what we call perceptual discrimination will be required if an animal is to learn new skills, or is to be able to adjust his behaviour in conditions of difficulty; I am not claiming that all motor activity under the control of the organism is directed merely by a continuous series of perceptual or kinaesthetic sensations; indeed, there is good experimental evidence against such a view.
The greater the degree of autonomy of the organism vis-à-vis specific features of the environment, and the more the organization of behaviour differs across individuals, the more will we be ready to speak of the goals or purposes of the creature itself, and of its voluntary activity in seeking those goals.
These features of flexibility and autonomy of behaviour, which, as we have seen, require discriminatory ability on the part of the organism, are not in principle denied to machines. There are already phototropic machines which will avoid physical obstacles in their path, maze-running programmes which will learn short-cuts, and problem-solving programmes which will select the most appropriate from a number of heuristics when they are searching for a solution. To be sure, existing programmes have nothing approaching sufficient complexity for us to be tempted to distinguish between their voluntary and involuntary behaviour; outputs are very closely tied to specific inputs, and while a machine may be able to select for itself the way in which it will attempt to solve a problem, and may be able to apply criteria of difficulty which will prevent its attempts at solution from going on indefinitely, it will not be able to ignore the problem as a problem because it is irrelevant to the general pattern of interests of the individual machine.
A problem may be put temporarily into a "queue", and queue-jumping may be allowed for on the basis of certain criteria, but that is only the rudimentary beginning of anything which could plausibly be called free choice of problems to be tackled because they are relevant to the general purposes of the machine. Connected with this point is the fact that, so far, programmes are very narrowly conceived so as to cope with a restricted range of problems only; in other words, the behavioural repertoire of existing machines is small. Even a programme specifically designed to give a comparatively wide range of behaviour, such as Newell, Simon & Shaw's "General Problem Solver"7 (which uses the abstract notions of means-end analysis and heuristics, and which can cope with a wide variety of formal problems), is in fact restricted to a small number of operations of certain logical types, and does not produce anything which one would want to call voluntary behaviour. Moreover, even the General Problem Solver is based on serial processing and thus is unsuitable for the generation of behaviour which can be described as directed to several goals at once, and where the informational input (compare: perceptual processes) may be more available to or necessary for one goal than for another. If we are to simulate such behaviour as helping a child with his algebra while making pastry for a dinner-party, stopping the vegetables from boiling over and periodically brushing the hair from our eyes, we must rely on some form of parallel processing.
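
A small sketch of the contrast, using a modern coroutine library (an obvious anachronism, and only one of many forms parallel processing might take): two goal-directed tasks interleave, each taking up informational input as its goal requires.

```python
import asyncio

async def watch_vegetables():
    while True:
        await asyncio.sleep(1.0)          # check the pan periodically
        print("vegetables: still simmering")

async def help_with_algebra():
    for hint in ["expand the brackets", "collect terms", "solve for x"]:
        await asyncio.sleep(2.0)          # each hint occupies attention for a while
        print("algebra:", hint)

async def main():
    # Both goals are pursued at once; a strictly serial processor could
    # attend to only one of them until it was finished.
    pan = asyncio.create_task(watch_vegetables())
    await help_with_algebra()             # ends after the third hint
    pan.cancel()                          # stop watching once the session is over

asyncio.run(main())
```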
Further, existing machines do not go in for instrumental activity; that is, they do not mould the environment itself so as to make it more suitable for the pursuing of their own purposes; such moulding of the environment clearly would require the sorts of discriminations we term perceptual, and would involve motor activity on the part of the machine. It is true that there are programmes, for instance some of those designed to simulate pattern-recognition, which will take a record from the environment, and then mould that record so as to make it more amenable to treatment; one simple example would be the deletion of any blank borders of the original record so as to make all the patterns dealt with of much the same overall size. However, this change does not feed back onto the environment but is restricted to the machine's internal behaviour, and there is no need for a continual series of discriminations of and adjustments to the environment such as would constitute percipient activity.
But it does not seem to be in principle impossible to build machines whose motor activity would be guided in this way. A machine which could recognize tomatoes and pictures of tomatoes and which could link the label 'tomato' with the labels 'edible', 'living' and so on, might also be able to classify a tomato as something soft and squashy on the basis of its own motor activity of squashing some. And this might result in a classification of a tomato as something suitable to be thrown at politicians making speeches, which classification might be acted on by the machine itself when confronted with, say, Harold Wilson. Why not?8

7 Newell, A., Shaw, J. C., and Simon, H. A., "Report on a general problem-solving program", in Proc. Int. Conf. Information Processing, 1959 (Unesco). See also Newell & Simon, "GPS: A program that simulates human thought", in Lernende Automaten (H. Billing, ed.) (repr. in Feigenbaum & Feldman (eds.), Computers and Thought).
Percipient activity involves the possibility of misperception, as opposed to mere failure to discriminate at all, and the occurrence of a particular misperception may be seen from the behaviour of the organism: a dog might attack or try to mate with a child's toy dog, and this could surely be explained in terms of misperception, particularly if the dog soon modified his inappropriate behaviour on closer acquaintance with the toy.
A language-user can tell us more directly about his perceptions by using verbal labels or concepts, and as Mr. Gauld points out we not only say such things as "He sees the policeman", but "He sees it as a policeman". If every stimulus were always responded to in exactly the same way, then we should have no room for the notion of seeing as. There has to be a possibility of alternative categorization and of mistake. The point of saying "He sees it as an X" is largely to distinguish the situation in question from that where the stimulus remains the same, but we say "He sees it as a Y". And this is a distinction made in order to explain differences of behaviour in the two cases.
It is worth remarking that a certain concept was applied only where it is in principle possible that another concept might have been applied, and this would have made some difference to behaviour.
When a stimulus provides conflicting or ambiguous cues then different perceptions may be equally possible; a percipient being may have to cope not only with ducks, and with rabbits, but also with duck-rabbits. Is there any reason to think that a machine could never be built with such a feature? Clearly not; a given stimulus could lead with equal probability to two different outputs when presented in isolation, though the probability of one output rather than the other could be raised by contextual factors such as the nature of the preceding stimuli: a succession of non-ambiguous ducks could swing the response to the ambiguous duck-rabbit in the duck-direction.
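
A toy version of this suggestion, with an arbitrary weighting of context: the ambiguous figure maps to either output with equal probability in isolation, and each recent unambiguous stimulus swings the odds.

```python
import random

def classify(stimulus, recent):
    """'duck-rabbit' is ambiguous: 50/50 in isolation, with each recent
    unambiguous duck (or rabbit) swinging the odds in that direction."""
    if stimulus != "duck-rabbit":
        return stimulus                  # unambiguous stimuli pass straight through
    bias = sum(+1 if s == "duck" else -1 for s in recent if s in ("duck", "rabbit"))
    p_duck = min(0.95, max(0.05, 0.5 + 0.1 * bias))   # arbitrary 0.1 per stimulus
    return "duck" if random.random() < p_duck else "rabbit"

history = ["duck", "duck", "duck"]       # a succession of non-ambiguous ducks
print(classify("duck-rabbit", history))  # now reported as a duck with p = 0.8
```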
Assessment of the correctness of a discrimination must involve reference to the interests of the perceiver: thus, the machine might correctly classify ducks and rabbits separately when its general purpose or task was defined in terms of stocking a farm-pond, and might correctly classify them together when its task was defined in terms of planning a vegetarian menu.
The existence of a variety of possible internal representations of the environment, together with the autonomous selection of goals, provides a basis for the ascription of intentionality to machines. Insofar as the machine's behaviour is guided by its own model of the environment, which model will involve specifications of certain objects as means to certain ends, and certain sub-goals as steps to further goals, the overall behaviour will be describable in intentional terms. Thus the machine's behaviour may be explained in terms of a perception or model which is in fact mistaken or misleading, as for instance if it labelled a rabbit as a duck and then lassoed it and threw it into a pond. And in this case we could say in one sense that the machine perceived, and reacted to, the rabbit, yet we could not say that it perceived it as a rabbit.

8 An experiment which is relevant here showed that if one first trains a child to push a button in response to a red light, and then teaches him to call a series of red, orange and yellow lights by the same nonsense-syllable, then his motor response becomes generalized to all these lights. W. O. Shepard, Child Devt. 1956, 27, 173-178.
Again, if we were asked to say what the machine was doing (or if it were asked to report on this itself), it would not follow from the truth of 'It is preparing to lasso a duck' that 'It is preparing to lasso a bird of the family Anatidae'.
Certain types of behaviour would suggest the existence of particular models of (compare: beliefs about) the actual nature of the environmental situation and the strategies appropriate for seeking certain goals. A mistaken assumption about the overall goal being pursued by the machine could lead us to a mistaken hypothesis as to the nature of the machine's information about and models of the environment, just as we may be misled in our ascriptions of beliefs to a man if we are wrong about his general intentions or desires.
IV
To sum up: I have argued that, from the fact that machines may be regarded as finite-state systems, we cannot infer that their discriminatory abilities must be so narrowly restricted to precisely specifiable features of the physical stimulus that they could never carry out the sorts of stimulus-generalization required for the correct use of verbal labels. Nor is there any reason to doubt that they could build up systems of labels related in various ways, for instance hierarchically and associatively, as our concepts are related to each other. Moreover, the more sophisticated machines are in principle open not only to "mechanical" explanation, as is the brain itself, but also to the type of explanation which we typically use of human behaviour, involving reference to goals and intentions. Some true propositions describing such machines would be intentional in character, as are propositions ascribing actions and beliefs to human beings; among such propositions would be those using the notion of perceiving as.
The concept of perception is closely linked with those of action and intention in such a way that the fine adjustments to the environment typical of voluntary actions are dependent upon discriminations of the type we term perceptual. We could thus only apply the predicate 'perceives' in its full sense to a machine if the behaviour of the machine were sufficiently complex, autonomous and flexible for us to speak of voluntary as well as involuntary activity. There is no clear dividing line between those behavioural repertoires which would and those which would not qualify for these descriptions, though it is clear that no existing programme offers any real temptation to use such language. But there is no reason in principle why such a machine should not one day be developed: perhaps a rat could never see a tomato as a tomato, but a machine certainly might.
In view of this it is illegitimate to condemn the psychologist's attempts at computer simulations of behaviour as inevitably doomed to failure. To what extent they will succeed in practice remains to be seen.
University of Sussex.