Meanings as Instructions for how to Build Concepts

Paul M. Pietroski
University of Maryland
Dept. of Linguistics, Dept. of Philosophy
http://www.terpconnect.umd.edu/~pietro
• In previous episodes…
What are words, concepts, and grammars? How are they related?
How are they related to whatever makes humans distinctive?
Did a relatively small change in our ancestors lead to both the
“linguistic metamorphosis” that human infants undergo, and
significant cognitive differences between us and other primates?
Maybe…
we’re cognitively special because we’re linguistically special, and
we’re linguistically special because we acquire words
(After all, kids are really good at acquiring words.)
Humans acquire words, concepts, and grammars
(diagram) The Language Acquisition Device in a Mature State (an I-Language):
a GRAMMAR and a LEXICON that generate SEMs; initial concepts, other acquired
concepts, and introduced concepts interface with the lexicon.

what kinds of concepts do SEMs interface with?
Lexicalization as Monadic-Concept-Abstraction
(diagram) Before lexicalization: a concept of adicity n. After: a perceptible
signal is paired with a word of adicity -1, linked both to the original
concept of adicity n and to an introduced concept of adicity -1, along with
further lexical information; e.g., KICK(x1, x2) --> KICK(event)
Meanings as Instructions for how to build (Conjunctive) Concepts
The meaning (SEM) of [rideV fastA]V is the following instruction:
CONJOIN[fetch@‘ride’, fetch@‘fast’]
Executing this instruction yields a concept like
RIDE(_) & FAST(_)
The meaning (SEM) of [rideV horsesN]V is the following instruction:
CONJOIN[fetch@‘ride’, DirectObject:SEM(‘horses’)]
CONJOIN[fetch@‘ride’, Thematize-execute:SEM(‘horses’)]
Executing this instruction would yield a concept like
RIDE(_) & [THEME(_, _) & HORSES(_)]
RIDE(_) & [THEME(_, _) & HORSE(_) & PLURAL(_)]
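To make the instruction-talk concrete, here is a minimal sketch of SEM execution. It is an illustrative model, not Pietroski's official formalism: the lexicon entries, event features, and addresses are all invented, and concepts are modeled as Python predicates over event dictionaries.

```python
# Toy model of SEM execution: concepts are predicates over events
# (dicts), lexical "addresses" are keys, and CONJOIN builds a monadic
# conjunctive concept. All entries and features here are illustrative.

LEXICON = {
    'ride': lambda e: e.get('kind') == 'ride',    # RIDE(_)
    'fast': lambda e: e.get('speed', 0) > 10,     # FAST(_)
}

def fetch(address):
    """fetch@'word': retrieve the concept stored at a lexical address."""
    return LEXICON[address]

def conjoin(*concepts):
    """CONJOIN[...]: concept that applies to e iff every conjunct does."""
    return lambda e: all(c(e) for c in concepts)

# Executing CONJOIN[fetch@'ride', fetch@'fast'] yields RIDE(_) & FAST(_):
ride_fast = conjoin(fetch('ride'), fetch('fast'))

print(ride_fast({'kind': 'ride', 'speed': 15}))  # True
print(ride_fast({'kind': 'ride', 'speed': 5}))   # False
```

Note that one address could be associated with more than one stored concept, which is the point made on the next slide: the instruction stays constant even if the fetched concept varies.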
Meanings: neither extensions nor concepts
Familiar difficulties for the idea that lexical meanings are concepts
polysemy          1 meaning, 1 cluster of concepts (in 1 mind)
intersubjectivity 1 meaning, 2 concepts (in 2 minds)
jabber(wocky)     1 meaning, 0 concepts (in 1 mind)
But don’t conclude that meanings are extensions
(or referents, or aspects of the environment that speakers share)
A single instruction to fetch a concept from a certain address
can be associated with more (or less) than one concept
Meaning constancy at least for purposes of meaning composition
Meanings: neither extensions nor concepts
Meaning constancy at least for purposes of meaning composition
Don’t confuse (linguistic) meaning with use: meaning is compositional; use isn’t
Don’t forget that shared languages may reflect shared biology more than a shared environment
“Poverty of Stimulus Revisited” (Berwick, Pietroski, Yankama, & Chomsky), current issue of Cognitive Science
“The Language Faculty” (Pietroski and Crain), Oxford Handbook of the Philosophy of Cognitive Science
More Questions
• why do expressions (PHON/SEM pairs) exhibit nontrivial logical patterns?
(A/Sm) pink lamb arrived ⇒ (A/Sm) lamb arrived
Every/All the/No lamb arrived ⇒ Every/All the/No pink lamb arrived
No butcher who sold all the pink lamb arrived ⇒ No butcher who sold all the lamb arrived
• how are phrases and “logical words” related to logic?
• why are some claims (e.g., pink lamb is lamb) analytic?
• old idea: some words indicate logical concepts, some of which are complex… logical words are not independent… and so some such words indicate complex logical concepts
‘Most’ as a Case Study
• Many provably equivalent “truth specifications” for a sentence like ‘Most of the dots are blue’
• a standard textbook specification is…
#{x:Dot(x) & Blue(x)} > #{x:Dot(x) & ~Blue(x)}
• some other options…
#{x:Dot(x) & Blue(x)} > #{x:Dot(x)}/2
#{x:Dot(x) & Blue(x)} > #{x:Dot(x)} − #{x:Dot(x) & Blue(x)}
{x:Dot(x) & Blue(x)} 1-To-1-Plus {x:Dot(x) & ~Blue(x)}
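The equivalence of the counting specifications can be checked by brute force over small scenes; a sketch, representing a scene just by its counts of blue and non-blue dots (the 1-To-1-Plus specification, which pairs dots rather than counting them, is omitted here):

```python
from itertools import product

# Three counting specifications of 'Most of the dots are blue', stated
# over the number of blue and non-blue dots in a scene.

def spec_standard(blue, nonblue):
    # #{x: Dot(x) & Blue(x)} > #{x: Dot(x) & ~Blue(x)}
    return blue > nonblue

def spec_half(blue, nonblue):
    # #{x: Dot(x) & Blue(x)} > #{x: Dot(x)} / 2
    return blue > (blue + nonblue) / 2

def spec_subtraction(blue, nonblue):
    # #{x: Dot(x) & Blue(x)} > #{x: Dot(x)} - #{x: Dot(x) & Blue(x)}
    return blue > (blue + nonblue) - blue

assert all(
    spec_standard(b, n) == spec_half(b, n) == spec_subtraction(b, n)
    for b, n in product(range(50), repeat=2)
)
print("equivalent on all scenes checked")
```

The specifications agree on truth values everywhere, which is exactly why deciding among them requires evidence about how they are computed rather than about when they are true.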
a model of the “Approximate Number System”
(key feature: ratio-dependence of discriminability)
distinguishing 8 dots from 4 (or 16 from 8) is easier than
distinguishing 10 dots from 8 (or 20 from 16)
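A standard psychophysical model makes the ratio-dependence concrete. The Weber fraction w = 0.2 below is an assumed, roughly adult-typical value, not a figure from the talk:

```python
import math

def discriminability(n1, n2, w=0.2):
    """Predicted ANS discriminability for numerosities n1 vs n2.
    Depends on the ratio n1:n2, not on the difference n1 - n2."""
    return abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))

# Equal ratios yield equal predicted discriminability: 8:4 patterns
# with 16:8, and both are easier than the smaller-ratio pairs 10:8
# and 20:16.
for n1, n2 in [(8, 4), (16, 8), (10, 8), (20, 16)]:
    print((n1, n2), round(discriminability(n1, n2), 2))
```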
(results) Fits for trials (apart from Sorted-Columns) to a standard
psychophysical model for predicting ANS-driven performance; fits for
Sorted-Columns trials to an independent model for detecting the longer of
two line segments. No effect of the number of colors; discriminability is
BETTER for ‘goo’ than for ‘dots’.
• final episode…
Hunch (to be refined)
Meanings are simple
• they systematically compose in ways that emerge naturally for humans
• they are perceived/computed automatically, often in the absence of relevant context and in contrast to reasonable expectations
hiker, lost, walked, circles: The hiker who lost was walked in circles
• There are decent first-pass theories of meaning/understanding
• Meaning relies on rudimentary linking of unsaturated conceptual “slots”
Truth is complicated
• It depends on context in apparently diverse ways
• Making a claim truth-evaluable often requires work, especially if you want people to agree on which truth-evaluable claim got made
• There are paradoxes
• Truth requires fancy (Tarskian) variables
‘I’ Before ‘E’
• Frege: each Function determines a "Course of Values"
• Church: function-in-intension vs. function-in-extension
--a procedure that pairs inputs with outputs in a certain way
--a set of ordered pairs (no instances of <x,y> and <x, z> where y ≠ z)
• Chomsky: I-language vs. E-language
--a procedure, implementable by child biology, that pairs phonological structures (PHONs) with semantic structures (SEMs)
--a set of <PHON, SEM> pairs
I-Language/E-Language
function in Intension: an implementable procedure that pairs inputs with outputs
function in Extension: a set of input-output pairs

|x – 1| vs. +√(x² – 2x + 1)
shared extension: {… (-2, 3), (-1, 2), (0, 1), (1, 0), (2, 1), …}
as procedures (intensions): λx . |x – 1| ≠ λx . +√(x² – 2x + 1)
as extensions: λx . |x – 1| = λx . +√(x² – 2x + 1), since
Extension[λx . |x – 1|] = Extension[λx . +√(x² – 2x + 1)]
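The point is easy to make computational: two distinct procedures, one course of values. A sketch over a finite sample of the shared domain:

```python
import math

# Two functions-in-intension (different procedures)...
def f(x):
    return abs(x - 1)                      # |x - 1|

def g(x):
    return math.sqrt(x ** 2 - 2 * x + 1)   # +sqrt(x^2 - 2x + 1)

# ...that determine the same function-in-extension: since
# x^2 - 2x + 1 = (x - 1)^2, the outputs agree everywhere.
extension_f = {(x, f(x)) for x in range(-100, 101)}
extension_g = {(x, g(x)) for x in range(-100, 101)}
print(extension_f == extension_g)  # True: same set of input-output pairs
```

A theory that individuates functions extensionally cannot distinguish `f` from `g`; a theory of the procedures (Church's intensions, Chomsky's I-languages) can.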
(diagram) An I-language in Chomsky’s sense: the expression-generator
generates semantic instructions, and executing these instructions yields
concepts that can be used in thought.
• a lexicon: a list of atomic <PHON, SEM> pairs, each with an “address” that
  can be assigned to one or more concepts
• modes of composition, e.g., CONJOIN, SATURATE, ABSTRACT
  (further details beyond my pay grade)
• complex concepts that are available for use: executing a SEM like
  CONJOIN[fetch@’brown’, fetch@’cow’] yields, e.g., BROWN(_) & COW(_)
Varieties of E-functions
• sets of <entity, truth value> pairs
• sets of <sequence, speaker> pairs
• sets of <situation, speaker> pairs
• sets of <environment, set of <entity, truth value> pairs> pairs
<Earth, {<e, t>: t = 1 iff e is a sample of H2O}>
<TwinEarth, {<e, t>: t = 1 iff e is a sample of XYZ}>
• sets of <environment, set of <sequence, set of <e, t> pairs> pairs> pairs
• sets of < …world…perspective… > pairs
• …
• might wonder which of these are represented (by non-theorists)
Going To Church
given a procedure P that maps each widget to a gizmo, and a procedure P’ that maps each gizmo to a hoosit, there is a procedure P’’ that maps each widget to a hoosit
• in this sense, procedures compose (and some can be compiled)
• but a mind might implement P via certain representations/operations, and implement P’ via different representations/operations,
yet lack the capacity to use outputs of P as inputs to P’
if S and S’ are recursively specifiable sets, and S pairs each widget with a gizmo, and S’ pairs each gizmo with a hoosit, then some recursively specifiable set S’’ pairs each widget with a hoosit
• of course, sets don’t compose: S’’ is no more complex than S or S’
• but procedural descriptions of sets might compose
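The composition claim can be sketched in code; widgets, gizmos, and hoosits are stand-in types (strings, ints, bools) invented for the example:

```python
# Given P: widget -> gizmo and P': gizmo -> hoosit, there is a
# procedure P'' = P' after P from widgets to hoosits.

def compose(p_prime, p):
    """The procedure P'': run P, then feed its output to P'."""
    return lambda widget: p_prime(p(widget))

p = len                          # P: maps each widget (str) to a gizmo (int)
p_prime = lambda g: g % 2 == 0   # P': maps each gizmo to a hoosit (bool)

p_double_prime = compose(p_prime, p)
print(p_double_prime('abcd'))    # True: a 4-letter widget maps to an even gizmo
```

The caveat in the slide is exactly what this sketch glosses over: a mind could implement both procedures and still lack `compose`, if the outputs of P are not in a format the implementation of P' accepts.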
Familiar Point
given a procedure P that maps each widget to a gizmo, and a procedure P’ that maps each gizmo to a hoosit, there is a procedure P’’ that maps each widget to a hoosit
• specifying a procedure in the lambda calculus (without cheating) tells us that the outputs can be computed in the Church-Turing sense, given the inputs (and any posited capacities/oracles)
• such specification can raise interesting Chomsky-Marr questions
what kind of algorithm is needed to compute the outputs from the inputs?
could animals use such an algorithm, given their cognitive resources?
if so, do they somehow compute the e-function in question? and if so, how?
Composition before Context
• Many philosophers and linguists ask which aspects of context are tracked by expressions of a human I-language. I want to focus on an issue concerning the kind of composition exhibited by the concepts we assemble in response to simple phrases like
brown^cow
• already several kinds of context sensitivity to worry about
• big ant, brown calf, brown house, brown vegetables, blue sky
• something missing in logical forms like: BROWN(x) & COW(x)
• but logical forms with ‘&’ (and ‘x’) may also be too sophisticated
• in any case, given an I-language perspective, we must ask:
which conjoiner?
Kinds of Conjoiners
• If P and P* are propositions (sentences with no free variables), then: &(P, P*) is true iff P is true and P* is true
• If S and S* are sentential expressions (with zero or more free variables) then for any sequence of domain entities σ:
&(S, S*) is satisfied by σ iff S is satisfied by σ, and S* is satisfied by σ
• If M and M* are monadic predicates, then for each entity x:
  1&(M, M*) applies to x iff M applies to x and M* applies to x
• If D and D* are dyadic predicates, then for each ordered pair <x, y>:
  2&(D, D*) applies to <x, y> iff D applies to <x, y> and so does D*
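The four conjoiners differ in the types they operate on, which a sketch makes explicit. Predicates are Python functions with invented extensions; none of this is official notation:

```python
# Four conjoiners, one per semantic type.

def conj_prop(p, p_star):
    """&(P, P*): a truth value; true iff both propositions are true."""
    return p and p_star

def conj_sentential(s, s_star):
    """&(S, S*): open sentence satisfied by sigma iff both conjuncts are."""
    return lambda sigma: s(sigma) and s_star(sigma)

def conj_monadic(m, m_star):
    """1&(M, M*): applies to x iff M and M* both apply to x."""
    return lambda x: m(x) and m_star(x)

def conj_dyadic(d, d_star):
    """2&(D, D*): applies to <x, y> iff D and D* both apply to <x, y>."""
    return lambda x, y: d(x, y) and d_star(x, y)

brown = lambda x: x in {'bessie'}           # invented extensions
cow = lambda x: x in {'bessie', 'clara'}
print(conj_monadic(brown, cow)('bessie'))   # True
print(conj_monadic(brown, cow)('clara'))    # False
```

The monadic and dyadic conjoiners need no variables: the slots of the conjuncts are identified by position, not by shared variable names.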
The Bold Ampersand
• &(Fx, Gx) is satisfied by (a sequence) σ iff Fx is satisfied by σ, and Gx is satisfied by σ
• &(Rxx’, Gx’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’ is satisfied by σ
• &(Fx, Gx’) is satisfied by σ iff Fx is satisfied by σ, and Gx’ is satisfied by σ
• &(Rxx’, Gx’’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’’ is satisfied by σ
• &(Wxx’x’’, Rx’’’x’’’’) is satisfied by σ iff Wxx’x’’ is satisfied by σ, and Rx’’’x’’’’ is satisfied by σ
• The adicity of &(S, S*) can exceed that of either conjunct
• but think about ‘from under’, which does NOT have these readings:
  Fxx’ & Ux’’x’’’, Fxx’ & Ux’x, etc.
Frege-to-Tarski
Fregean Judgment: Unsaturated(saturated)
  Planet(Venus); Number(Two); Precedes(<Two, Three>); Precedes(Two, Three)
First-Order Judgment-Frames: Unsaturated(_)
  Planet(_); Number(_); Precedes(_, Three); Precedes(Two, _); Precedes(_, _)
Second-Order Judgment-Frames: __(Saturater)
  __(Venus); __(Two); __(<Two, Three>)
Frege-to-Tarski
• Tarskian Variables (first-order): x, x', x'', …
• Tarskian Sentences: Planet(x), Planet(x'), ...
Precedes(x, x'), Precedes(x', x), Precedes(x, x), …
• any variable can "fill" any slot of a first-order Judgment-Frame
• Sentences (open or closed) satisfied by sequences:
  σ satisfies Number(x'') iff σ(x'') is a number
  σ satisfies Precedes(x'', x''') iff σ(x'') precedes σ(x''')
  σ satisfies Precedes(x'', x''') & Number(x'') iff
    σ satisfies Precedes(x'', x''') and σ satisfies Number(x'')
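A minimal executable version of satisfaction by sequences, with an invented domain and invented extensions for the two predicates:

```python
# Sequences assign entities to Tarskian variables; open sentences are
# satisfied relative to a sequence.

NUMBER = {2, 3}        # extension of Number(_)
PRECEDES = {(2, 3)}    # extension of Precedes(_, _)

def sat_number(sigma, v):
    return sigma[v] in NUMBER

def sat_precedes(sigma, v1, v2):
    return (sigma[v1], sigma[v2]) in PRECEDES

def sat_conj(sigma):
    # sigma satisfies Precedes(x'', x''') & Number(x'') iff it
    # satisfies both conjuncts
    return sat_precedes(sigma, "x''", "x'''") and sat_number(sigma, "x''")

sigma = {"x''": 2, "x'''": 3}
print(sat_conj(sigma))                  # True
print(sat_conj({"x''": 3, "x'''": 2}))  # False
```

Because any variable can fill any slot, the sequence machinery must track which variable occupies which position, which is exactly the expressive power the variable-free conjoiners below do without.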
Tarski-to-Kaplan
• Constants as (finitely many) special cases of variables: c, c', c'', ... , c'''''
• T-sequences of the form: <σ(c), σ(c'), ... , σ(c'''''), σ(x), σ(x'), σ(x''), ... >
• Kaplanian indices: s, p, t
• K-sequences of the form:
<σ(s), σ(p), σ(t), σ(c), σ(c'), ... , σ(c'''''), σ(x), σ(x'), σ(x''), … >
(the first three positions are indices, the next six are constants, and the rest are variables)
The Bold Ampersand
• &(Fx, Gx) is satisfied by σ iff Fx is satisfied by σ, and Gx is satisfied by σ
• &(Rxx’, Gx’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’ is satisfied by σ
• &(Fx, Gx’) is satisfied by σ iff Fx is satisfied by σ, and Gx’ is satisfied by σ
• &(Rxx’, Gx’’) is satisfied by σ iff Rxx’ is satisfied by σ, and Gx’’ is satisfied by σ
• &(Wxx’x’’, Rx’’’x’’’’) is satisfied by σ iff Wxx’x’’ is satisfied by σ, and Rx’’’x’’’’ is satisfied by σ
• The adicity of &(S, S*) can exceed that of either conjunct
Kinds of Conjoiners
• If P and P* are propositions (sentences with no free variables), then: &(P, P*) is true iff P is true and P* is true
• If S and S* are sentential expressions (with zero or more free variables) then for any sequence σ:
&(S, S*) is satisfied by σ iff S is satisfied by σ, and S* is satisfied by σ
• If M and M* are monadic predicates, then for each entity x:
  1&(M, M*) applies to x iff M applies to x and M* applies to x
• If D and D* are dyadic predicates, then for each ordered pair <x, y>:
  2&(D, D*) applies to <x, y> iff D applies to <x, y> and so does D*
Kinds of Conjoiners (now using y instead of x’)
• Note the difference between 2&(D, D*) and &(Pxy, Qxy)
• no need for variables in the former, and hence no analogs of:
&(Pxy, Qyx); &(Pyx, Qxy); &(Pxx, Qxx); &(Pxx, Qxy); ...; &(Pyy, Qyy)
• We could stipulate that 2+(D, D*) applies to <x, y> iff
D applies to <x, y> and D* applies to <y, x>.
But this still leaves no freedom with regard to variable positions
• There is a big difference between
(1) a mind that can fill any unsaturated slot with any variable, and
(2) a mind that has "unsaturated" concepts like D(_, _) but cannot "fill"
the slots with variables and create open sentences
Kinds of Conjoiners
• If D is a dyadic predicate, and M is a monadic predicate, then for each entity x: ^(D, M) applies to x iff for some entity y,
D applies to <x, y> and M applies to y
• ^(D, M) abbreviates D(_, _)^M(_), with the second slot of D linked to the slot of M
• Note the difference between ^(D, M) and &(Pxy, Qy):
  no need for variables in the former; hence no analogs of &(Pxy, Qx), …
• We could define other “mixed” conjunctions. But ^(D, M) is a simple one: its monadic conjunct is closed, leaving another monadic predicate
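The mixed conjoiner can be sketched directly: it builds a monadic concept by existentially closing the slot it links, so no variables are needed. Domain and extensions are invented:

```python
# ^(D, M): applies to x iff for some y, D applies to <x, y> and M
# applies to y. The result is monadic.

DOMAIN = {'e1', 'e2', 'dobbin', 'grass'}
theme = lambda x, y: (x, y) in {('e1', 'dobbin')}   # THEME(_, _)
horse = lambda y: y in {'dobbin'}                   # HORSE(_)

def hat(d, m):
    """^(D, M): monadic concept; true of x iff some y gives D(x,y) & M(y)."""
    return lambda x: any(d(x, y) and m(y) for y in DOMAIN)

theme_horse = hat(theme, horse)   # THEME(_, _)^HORSE(_)
print(theme_horse('e1'))          # True: e1 has a horse as its theme
print(theme_horse('e2'))          # False
```

Compare the Tarskian &(Pxy, Qy): there, nothing but convention stops you from writing &(Pxy, Qx) instead; `hat` offers no such degree of freedom.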
Meanings compose. Truth is (at best) recursively specifiable
• Meaning really is compositional
for this general point…
• it doesn’t matter what the composition operations/concepts are
• it doesn’t matter if meanings are concepts or
instructions to assemble concepts
SEM(‘brown’) = BROWN(_), or &[BROWN(_), ___(_)], or fetch@’brown’
SEM(‘cow’) = COW(_), or fetch@’cow’
SEM(‘brown cow’) = &[BROWN(_), COW(_)],
  or CONJOIN[fetch@’brown’, fetch@’cow’], or SATURATE[fetch@’brown’, fetch@’cow’]
Meanings compose. Truth is (at best) recursively specifiable
• Meaning really is compositional, even if it follows that meanings have a syntax
SEM(‘brown’) = BROWN(_), or &[BROWN(_), ___(_)], or fetch@’brown’
SEM(‘cow’) = COW(_), or fetch@’cow’
SEM(‘brown cow’) = &[BROWN(_), COW(_)],
  or CONJOIN[fetch@’brown’, fetch@’cow’], or SATURATE[fetch@’brown’, fetch@’cow’]
• Truth really isn’t compositional
• Tarski characterized truth as satisfaction by all sequences, and he specified satisfaction recursively. But satisfaction is not compositional:
σ can satisfy ‘∃xFx’ without satisfying ‘Fx’, and
σ can satisfy ‘Fx’ without satisfying ‘∀xFx’
• the satisfiers of ‘Bx & Cx’ are not composed of the satisfiers of ‘Bx’ and ‘Cx’
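The non-compositionality of satisfaction takes only a few lines to exhibit (domain and the extension of F are invented): a sequence can satisfy the quantified sentence while failing to satisfy its open part.

```python
# Satisfaction is recursively specifiable but not compositional:
# whether sigma satisfies 'there is an x such that Fx' is not settled
# by whether sigma itself satisfies 'Fx'.

DOMAIN = {1, 2, 3}
F = {2}   # extension of the predicate F

def sat_Fx(sigma):
    return sigma['x'] in F

def sat_exists_x_Fx(sigma):
    # satisfied by sigma iff some x-variant of sigma satisfies Fx
    return any(sat_Fx({**sigma, 'x': d}) for d in DOMAIN)

sigma = {'x': 1}                 # sigma assigns x an entity outside F
print(sat_Fx(sigma))             # False
print(sat_exists_x_Fx(sigma))    # True: the x-variant with x = 2 satisfies Fx
```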
A Versatile (but simple) Conjoiner
• If D is a dyadic predicate, and M is a monadic predicate, then for each entity x: ^(D, M) applies to x iff for some entity y, D applies to <x, y> and M applies to y
• ^(D, M) abbreviates D(_, _)^M(_), with the second slot of D linked to the slot of M
• a separate talk is needed to show that, given plausible lexical meanings and a limited form of abstraction that is required on any view, this can handle: ‘quickly eat (sm) grass’, ‘saw cows eat grass’, ‘think I saw most of the cows that ate every bit of grass in the field’
Modes of Composition Constrain Composables
• The only lexical items permitted are those that can be combined in permitted ways
• One reason for being suspicious of appeal to “SATURATE” as a general semantic instruction for human I-languages: by itself, it imposes no constraints on lexical types
• But whatever composition operations/concepts one posits, along with whatever further constraints on the lexical meanings of human I-language SEMs, there will be (perhaps severe) constraints on the “types” of indices
Modes of Composition Constrain Composables
if a dimension of context-sensitivity is indexed by a concept C that can be a constituent of a concept built by executing a human I-language SEM…
(diagram, repeated) lexicon of atomic <PHON, SEM> pairs, each with an
“address” that can be assigned to one or more concepts; modes of
composition, e.g., CONJOIN (^); complex concepts that are available for use
Modes of Composition Constrain Composables
if a dimension of context-sensitivity is indexed by a concept C that can be a constituent of a concept built by executing a human I-language SEM…
then C must be of the right formal type to compose with other concepts via the composition operations/concepts that human I-languages invoke
• This might be a huge source of constraint on which dimensions of the context-sensitivity of truth can be indexed by human I-language SEMs
• The issue is not merely whether grammatically generated structures/instructions contain enough indices
• There is also the question of whether a posited (covert) index can be used to fetch a concept that has both the posited character and a form that human I-languages can deal with…just think about adicities
Modes of Composition Constrain Composables
of course, one can say that
the truth of assertions made by using human I-language expressions
depends on context in ways that
the compositional meanings of such expressions do not track
• I suspect that’s right, somehow
• But I don’t know what assertions are, or how they relate to I-language expressions of the sort that human children can generate
• And while I know how to specify Tarskian satisfaction conditions for many E-languages that have indices of many sorts, I don’t know which of these E-languages are generable by natural procedures as opposed to normative regimentations of actions of using I-language expressions
Hunch Refined
Meanings are simple
• They systematically compose in ways that emerge naturally for humans
• They are context invariant and perceived/computed automatically
• Meaning relies on rudimentary linking of unsaturated conceptual “slots”
• Simple theories possible (and no paradoxes)
Truth is complicated
• It isn’t compositional
• It depends on context (including norms) in apparently diverse ways
• Truth requires fancy (Tarskian) variables
• At best complex theories (and familiar paradoxes loom)
THANKS
Tim Hunter
Darko Odic
Jeff Lidz
Justin Halberda
not pictured: Norbert Hornstein
Meanings as Instructions for how to Build Concepts

Paul M. Pietroski
University of Maryland
Dept. of Linguistics, Dept. of Philosophy
http://www.terpconnect.umd.edu/~pietro