Generation Models for American Sign Language Classifier Predicates
Matt Huenerfauth, Penn Computational Linguistics Lunch, November 1, 2004
Research Advisors: Mitch Marcus & Martha Palmer
Computer and Information Science, University of Pennsylvania


Page 1:

Generation Models for American Sign Language Classifier Predicates

Matt Huenerfauth
Penn Computational Linguistics Lunch
November 1, 2004

Research Advisors: Mitch Marcus & Martha Palmer
Computer and Information Science, University of Pennsylvania

Adapted from presentations given at:
• The 6th International ACM SIGACCESS Conference on Computers and Accessibility, October 20, 2004, Atlanta, GA
• The 10th International Conference on Theoretical and Methodological Issues in Machine Translation, October 4, 2004, Baltimore, MD

Page 2:

English-to-ASL MT

• Development of English-to-ASL machine translation (MT) software for accessibility applications has been slow…

– Misconceptions: the deaf experience, ASL linguistics, and ASL’s relationship to English.

– Challenges: some ASL phenomena are very difficult (but important) to translate. We’ve had to develop some new models for ASL generation.

Page 3:

Misconceptions about Deaf Literacy and ASL MT

How have they affected research?

Page 4:

Misconception: All deaf people are written-English literate.

• Only half of deaf high school graduates (age 18+) can read English at a fourth-grade (age 10) level, despite ASL fluency.

• Many deaf accessibility tools forget that English is a second language for these students (and has a different structure).

• Applications for a machine translation system:
– TV captioning, teletype telephones.
– Computer user interfaces in ASL.
– Educational tools using ASL animation.
– Access to information/media.

Photo: Audiology Online

Page 5:

Building tools to address deaf literacy…

What’s our input? English text.
What’s our output? ASL has no written form.

Imagine a 3D virtual reality human being…
One that can perform sign language…

But this character needs a set of instructions telling it how to move!

Our job: English → these instructions.

Image: VCom3D

Page 6:

Building tools to address deaf literacy…

We can use an off-the-shelf animated character.

Photos: Seamless Solutions, Inc.; Simon the Signer (Bangham et al. 2000); VCom3D Corporation.

Page 7:

Misconception: ASL is just manually performed English.

• Signed English vs. American Sign Language.

• Some ASL sentences have a structure that is similar to written languages.

• Other sentences use the space around the signer to describe the 3D layout of a real-world scene.
– Hands indicate movement and location of entities in the scene (using special handshapes).
– These are called “Classifier Predicates.”

Page 8:

Classifier Predicate Example

The car parked between the cat and the house.

[Animation storyboard: the signer performs the sign HOUSE, then CAT, then CAR (gaze at the viewer during each sign), assigning each entity a location in signing space; then the dominant hand traces the path of the car, stopping at a location between the other two, while the eyes follow the right hand.]

Note: Facial expression, head tilt, and shoulder tilt not included in this example.

Page 9:

Misconception: Traditional MT software is well-suited to ASL.

• Classifier predicates are hard to produce.
– 3D paths for the hands, layout of the scene.
– Grammar rules & lexicons? Not enough.

• No written form of ASL.
– Very little English–ASL parallel corpora.
– Can’t use machine learning approaches.

• Previous systems are only partial solutions.
– Some produce only Signed English, not ASL.
– None can produce classifier predicates.

Page 10:

Misconception: OK to ignore visual/spatial ASL phenomena.

But classifier predicates are important!
– CPs are needed to convey many concepts.
– Signers use CPs frequently.*
– English sentences that produce CPs are the ones that signers often have trouble reading.
– CPs are needed for some important applications:
• User interfaces with ASL animation
• Literacy educational software

* Morford, J., and MacFarlane, J. 2003. “Frequency Characteristics of American Sign Language.” Sign Language Studies 3:2.

Page 11:

ASL MT Challenges: Producing Classifier Predicates

A new set of generation models…

Page 12:

Focus on Classifier Predicates

• Previous ASL MT systems have shown promise at handling non-spatial ASL phenomena using traditional MT technologies.

• This project will focus on producing the spatially complex elements of the language: Classifier Predicates of Movement and Location (CPMLs).

• Since some of these new MT methods for CPMLs are computationally expensive, we’ve proposed a multi-path* MT design.

* Huenerfauth, M. 2004. “A Multi-Path Architecture for English-to-ASL MT.” HLT-NAACL Student Workshop.

Page 13:

Multi-Path Architecture

[Diagram: English input sentences are routed down one of three pathways:
– Spatially descriptive English sentences → 3D software → an ASL sentence containing a classifier predicate.
– Most English sentences → traditional MT software → an ASL sentence not containing a classifier predicate.
– Sentences that the MT software cannot successfully translate → word-to-sign look-up → a Signed English sentence.]

* Huenerfauth, M. 2004. “A Multi-Path Architecture for English-to-ASL MT.” HLT-NAACL Student Workshop.
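The three-way routing of this multi-path design can be sketched as a simple dispatch function. This is an illustrative sketch only: the predicate and pathway functions (`is_spatial`, `cpml_gen`, `mt_translate`, `word_to_sign`) are hypothetical stand-ins, not the actual system's API.

```python
def route(sentence, is_spatial, cpml_gen, mt_translate, word_to_sign):
    """Dispatch a sentence down one of the three translation pathways."""
    if is_spatial(sentence):
        # spatially descriptive English -> 3D software -> classifier predicate
        return ("CPML", cpml_gen(sentence))
    asl = mt_translate(sentence)          # traditional MT pathway
    if asl is not None:
        return ("ASL", asl)
    # fallback: word-for-sign lookup yields Signed English
    return ("SignedEnglish", word_to_sign(sentence))
```

The point of the design is that each pathway only has to handle the sentences it is good at; the expensive 3D pathway is reserved for spatially descriptive input.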

Page 14:

CPML Generation Models

What are the representations used in the English-to-CPML pathway?

[Diagram: spatially descriptive English input sentences → 3D software → an ASL sentence containing a classifier predicate.]

Page 15:

Design of the CPML Pathway

[Pipeline diagram: English Sentence → Predicate-Argument Structure → 3D Animation Planning Operator → 3D Animation of the Event → CP Discourse → CP Semantics → CP Syntax → CP Phonology.]

Page 16:

CP Generation Models Discussed

• Scene Visualization

• Discourse

• Semantics

• Syntax

• Phonology (we’ll talk about this one first)

Page 17:

Phonological Model

Body Parts Moving Through Space: “Articulators”

[Overall architecture diagram repeated; this section covers CP Phonology.]

Page 18:

ASL Phonetics/Phonology

• “Phonetic” representation of output
– Hundreds of animation joint angles.

• Traditional ASL phonological models
– Hand: shape, orientation, location, movement.
– Some specification of non-manual features.
– Tailored to non-CP output: difficult to specify complex motion paths. CPs don’t use as many handshapes and orientation patterns.

Page 19:

Classifier Predicate Example (repeated from earlier)

The car parked between the cat and the house.

[Same storyboard as before: the signs HOUSE, CAT, and CAR performed with gaze at the viewer, each entity assigned a location in signing space; then the right hand traces the car’s path while the eyes follow it.]

Note: Facial expression, head tilt, and shoulder tilt not included in this example.

Page 20:

Phonological Model

• What is the output?
– Abstract model of (somewhat) independent body parts.

• “Articulators”
– Dominant Hand (Right)
– Non-Dominant Hand (Left)
– Eye Gaze
– Head Tilt
– Shoulder Tilt
– Facial Expression

What information do we specify for each of these?

Page 21:

Values for Articulators

• Dominant Hand, Non-Dominant Hand
– 3D point in space in front of the signer
– Palm orientation
– Hand shape (finite set of standard shapes)

• Eye Gaze, Head Tilt
– 3D point in space at which they are aimed.
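These per-articulator values suggest small record types. A minimal sketch, assuming a tuple-of-floats encoding for 3D points (the field layout is an illustration, not the system's actual representation):

```python
from dataclasses import dataclass
from typing import Tuple

Point3D = Tuple[float, float, float]

@dataclass
class HandState:
    location: Point3D     # 3D point in space in front of the signer
    orientation: Point3D  # palm orientation
    handshape: str        # one of a finite set of standard shapes

@dataclass
class AimedState:         # used for both eye gaze and head tilt
    target: Point3D       # 3D point at which the articulator is aimed

right = HandState(location=(0.1, 0.0, 0.3),
                  orientation=(0.0, 1.0, 0.0),
                  handshape="Sideways 3")
gaze = AimedState(target=(0.1, 0.0, 0.3))  # eyes aimed where the hand is
```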

Page 22:

Scene Visualization Approach

Converting an English sentence into a 3D animation of an event.

[Overall architecture diagram repeated; this section covers the English Sentence → Predicate-Argument Structure → 3D Animation of the Event stages.]

Page 23:

Previously-Built Technology

• AnimNL System

– Virtual reality model of 3D scene.

– Input: English sentences that tell the characters/objects in the scene what to do.

– Output: An animation in which the characters/objects obey the English commands.

Bindiganavale, Schuler, Allbeck, Badler, Joshi, & Palmer. 2000. "Dynamically Altering Agent Behaviors Using Nat. Lang. Instructions." Int'l Conf. on Autonomous Agents.

Related Work: Coyne and Sproat. 2001. “WordsEye: An Automatic Text-to-Scene Conversion System.” SIGGRAPH 2001, Los Angeles, CA.

Page 24:

How It Works

[Diagram: English Sentence → Predicate-Argument Structure → 3D Animation of the Event, with the 3D Animation Planning Operator highlighted.]

We won’t discuss all the details, but one part of the process is important to understand. (We’ll come back to it later.)

Page 25:

Step 1: Analyzing English Input

• The car parked between the cat and the house.
• Syntactic analysis.
• Identify word senses: e.g. park-23.
• Identify discourse entities: car, cat, house.
• Predicate-Argument Structure
– Predicate: park-23
– Agent: the car
– Location: between the cat and the house
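The result of this analysis can be pictured as a small record. This is an illustrative representation only; the system's actual data structures are not specified in the talk.

```python
# Predicate-argument analysis of the example sentence.
pred_arg = {
    "sentence": "The car parked between the cat and the house.",
    "predicate": "park-23",                   # disambiguated word sense
    "agent": "car",                           # discourse entity filling Agent
    "location": ("between", "cat", "house"),  # relation over discourse entities
    "entities": ["car", "cat", "house"],      # discourse entities identified
}
```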

Example

Page 26:

Step 2: AnimNL builds 3D scene

Example

Page 27:

Example

Step 2: AnimNL builds 3D scene

Original Image: Simon the Signer (Bangham et al. 2000.)

Page 28:

Discourse Model

[Overall architecture diagram repeated; this section covers CP Discourse.]

Page 29:

Discourse Model Motivations

• Preconditions for performing a CP:
– (Entity is the current topic) OR (starting point of this CP is the same as the ending point of a previous CP).

• Effect of a CP performance:
– (Entity is topicalized) AND (assigned a 3D location).

• The Discourse Model must record:
– the topicalized status of each entity
– whether a point has been assigned to an entity
– whether the entity has moved in the virtual reality since the last time the signer showed its location with a CP

Page 30:

Discourse Model

• Topic(x) – x is the current topic.

• Identify(x) – x has been associated with a location in space.

• Position(x) – x has not moved since the last time it was placed using a CP.

Page 31:

Step 3: Setting up the Discourse Model

• The model includes a subset of the entities in the 3D scene: those mentioned in the text.

• All values are initially set to false for each entity.

CAR: __ Topic? __ Location Identified? __ Still in Same Position?
HOUSE: __ Topic? __ Location Identified? __ Still in Same Position?
CAT: __ Topic? __ Location Identified? __ Still in Same Position?
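A minimal sketch of this discourse model: one record of three boolean flags per mentioned entity, all initially false. The flag names follow the Topic/Identify/Position predicates; the dictionary encoding itself is an assumption for illustration.

```python
def new_discourse_model(entities):
    """All flags start false: nothing is yet topicalized,
    identified in space, or known to be in position."""
    return {e: {"topic": False, "identified": False, "positioned": False}
            for e in entities}

model = new_discourse_model(["CAR", "HOUSE", "CAT"])

# A classifier predicate that places the cat would then update:
model["CAT"]["topic"] = True
model["CAT"]["identified"] = True
model["CAT"]["positioned"] = True
```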

Example

Page 32:

Semantic Model

Invisible 3D Placeholders: “Ghosts”

[Overall architecture diagram repeated; this section covers CP Semantics.]

Page 33:

Semantic Model

• 3D representation of the arrangement of invisible placeholder objects in space

• These “ghosts” will be positioned based on the 3D virtual reality scene coordinates

• Choose the details, viewpoint, and timescale of the virtual reality scene for use by CPs
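One way to picture positioning ghosts from the virtual-reality scene is a plain scale-and-keep mapping into the signing volume in front of the signer. The scaling scheme below is an assumption for illustration, not the system's actual mapping.

```python
def ghost_scene(scene_coords, scale=0.1):
    """Map virtual-reality scene coordinates to placeholder ('ghost')
    positions in the volume in front of the signer."""
    return {name: (x * scale, y * scale, z * scale)
            for name, (x, y, z) in scene_coords.items()}

ghosts = ghost_scene({"CAR":   (0.0, 0.0, 5.0),
                      "CAT":   (-2.0, 0.0, 0.0),
                      "HOUSE": (2.0, 0.0, 0.0)})
```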

Page 34:

Step 4: Producing Ghost Scene

Example

[Image: invisible placeholders for CAR, HOUSE, and CAT arranged in the signing space.]

Page 35:

Syntactic Model

Planning-Based Generation of CPs

[Overall architecture diagram repeated; this section covers CP Syntax.]

Page 36:

CP Templates

• Recent linguistic analyses of CPs suggest that they can be generated by:
– Storing a lexicon of CP templates.
– Selecting a template that expresses the proper semantics and/or shows the proper 3D movement.
– Instantiating the template by filling in the relevant 3D locations in space.

Huenerfauth, M. 2004. “Spatial Representation of Classifier Predicates for MT into ASL.” Workshop on Representation and Processing of Signed Languages, LREC-2004.

Liddell, S. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge University Press.
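The selection step can be sketched as a filter over the stored lexicon: pick a template that expresses the target predicate and whose restrictions fit the agent. The matching criteria and lexicon encoding below are illustrative assumptions.

```python
def select_template(lexicon, predicate, agent_type):
    """Return the first stored CP template that expresses the target
    semantics and whose restriction matches the agent's type."""
    for name, t in lexicon.items():
        if t["expresses"] == predicate and agent_type in t["applies_to"]:
            return name
    return None

lexicon = {
    "PARKING-VEHICLE":        {"expresses": "park-23", "applies_to": {"vehicle"}},
    "WALKING-UPRIGHT-FIGURE": {"expresses": "walk-1",  "applies_to": {"person"}},
}
choice = select_template(lexicon, "park-23", "vehicle")
```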

Page 37:

Animation Planning Process

• This mechanism is actually analogous to how the AnimNL system generates 3D virtual reality scenes from English text.
– Stores templates of prototypical animation movements (as hierarchical planning operators).
– Selects a template based on English semantics.
– Uses a planning process to work out preconditions and effects to produce a 3D animation of the event.

Page 38:

Example: Database of Templates

[Image: a database of CP template cards, including WALKING-UPRIGHT-FIGURE, MOVING-MOTORIZED-VEHICLE, LOCATE-BULKY-OBJECT, TWO-APPROACHING-UPRIGHT-FIGURES, LOCATE-SEATED-HUMAN, and PARKING-VEHICLE. In the scrape every card carries the same body; a representative card:]

PARKING-VEHICLE
Parameters: g0 (ghost car parking), g1..gN (other ghosts)
Restrictions: g0 is a vehicle
Preconditions: topic(g0) or (ident(g0) and position(g0)); for g = g1..gN: (ident(g) and position(g))
Articulator: Right Hand
Location: Follow_location_of(g0)
Orientation: Direction_of_motion_path(g0)
Handshape: “Sideways 3”
Effects: position(g0), topic(g0), express(park-23 agt:g0 loc:g1..gN)
Concurrently: PLATFORM(g0.loc.final), EYETRACK(g0)

Page 39:

Step 5: Initial Planner Goal

• Planning starts with a “goal.”

• Express the semantics of the sentence:
– Predicate: PARK-23
– Agent: the “car” discourse entity
• We know from lexical information that this “car” is a vehicle (some special CPs may apply).
– Location: 3D position calculated “between” the locations for “the cat” and “the house.”

Example

Page 40:

Step 6: Select Initial CP Template

PARKING-VEHICLE
Parameters: g_0, g_1, g_2 (ghost car & nearby objects)
Restrictions: g_0 is a vehicle
Preconditions: topic(g_0) or (ident(g_0) and position(g_0))
               (ident(g_1) and position(g_1))
               (ident(g_2) and position(g_2))
Articulator: Right Hand
Location: Follow_location_of(g_0)
Orientation: Direction_of_motion_path(g_0)
Handshape: “Sideways 3”
Effects: position(g_0), topic(g_0), express(park-23 agt:g_0 loc:g_1,g_2)
Concurrently: PLATFORM(g_0.loc.final), EYETRACK(g_0)

Example

Page 41:

Step 7: Instantiate the Template

PARKING-VEHICLE
Parameters: CAR, HOUSE, CAT
Restrictions: CAR is a vehicle
Preconditions: topic(CAR) or (ident(CAR) and position(CAR))
               (ident(CAT) and position(CAT))
               (ident(HOUSE) and position(HOUSE))
Articulator: Right Hand
Location: Follow_location_of(CAR)
Orientation: Direction_of_motion_path(CAR)
Handshape: “Sideways 3”
Effects: position(CAR), topic(CAR), express(park-23 agt:CAR loc:HOUSE,CAT)
Concurrently: PLATFORM(CAR.loc.final), EYETRACK(CAR)
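Instantiation is just substitution of discourse entities for the template's formal parameters. A sketch in which the template is reduced to its logical fields (the dictionary encoding and compound-condition flattening are assumptions for illustration):

```python
PARKING_VEHICLE = {
    "params": ["g_0", "g_1", "g_2"],
    "restrictions": [("is_vehicle", "g_0")],
    "preconditions": [("topic_or_placed", "g_0"),
                      ("ident_and_position", "g_1"),
                      ("ident_and_position", "g_2")],
    "effects": [("position", "g_0"), ("topic", "g_0")],
}

def instantiate(template, bindings):
    """Replace each formal parameter with its bound discourse entity."""
    def sub(terms):
        return [(pred, bindings.get(arg, arg)) for pred, arg in terms]
    return {"params": [bindings[p] for p in template["params"]],
            "restrictions": sub(template["restrictions"]),
            "preconditions": sub(template["preconditions"]),
            "effects": sub(template["effects"])}

cp = instantiate(PARKING_VEHICLE, {"g_0": "CAR", "g_1": "CAT", "g_2": "HOUSE"})
```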

Example

Page 42:

Step 7: Instantiate the Template (animation)

[The instantiated PARKING-VEHICLE template, with its preconditions and effects as above, is paired with its animation: the right hand traces the path of the car, stopping at its final location, while the eyes follow the right hand.]

Page 43:

Step 8: Begin Planning Process

[Planning begins from the instantiated PARKING-VEHICLE template. Its preconditions — topic(CAR) or (ident(CAR) and position(CAR)); (ident(CAT) and position(CAT)); (ident(HOUSE) and position(HOUSE)) — must be satisfied before its effects — position(CAR), topic(CAR), express(park-23 agt:CAR loc:HOUSE,CAT) — can be achieved.]

Page 44:

Other Templates in the Database

• We’ve seen these:
– PARKING-VEHICLE
– PLATFORM
– EYEGAZE

• There are also these:
– LOCATE-STATIONARY-ANIMAL
– LOCATE-BULKY-OBJECT
– MAKE-NOUN-SIGN

Example

Page 45:

Step 9: Planning Continues…

[The planner must satisfy PARKING-VEHICLE’s preconditions. For example, (ident(CAT) and position(CAT)) is achieved by the LOCATE-STATIONARY-ANIMAL template:]

LOCATE-STATIONARY-ANIMAL
Parameters: CAT
Restrictions: CAT is an animal
Preconditions: topic(CAT)
Effects: topic(CAT), position(CAT), ident(CAT)

[Storyboard: the hand moves to the cat’s location; the eyes are at the cat’s location.]

Page 46:

Step 9: Planning Continues…

[Planning tree: PARKING-VEHICLE requires position(CAT) and position(HOUSE), along with topic(CAR)/identify(CAR). LOCATE-STATNRY-ANIMAL achieves position(CAT) but requires topic(CAT)/identify(CAT), which MAKE-NOUN:“CAT” achieves; LOCATE-BULKY-OBJECT achieves position(HOUSE) but requires topic(HOUSE)/identify(HOUSE), which MAKE-NOUN:“HOUSE” achieves; MAKE-NOUN:“CAR” achieves topic(CAR)/identify(CAR). EYEGAZE and PLATFORM run concurrently with PARKING-VEHICLE.]
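This expansion of unsatisfied preconditions behaves like classic backward chaining over preconditions and effects. A toy sketch, with compound conditions flattened to atomic facts and a simplified template inventory (both are assumptions for illustration):

```python
def plan(goal, templates, state):
    """Return template names in execution order, so that every
    precondition is satisfied before the step that needs it."""
    steps = []
    def achieve(fact):
        if fact in state:
            return
        for name, t in templates.items():
            if fact in t["effects"]:
                for pre in t["preconditions"]:   # satisfy subgoals first
                    achieve(pre)
                steps.append(name)
                state.update(t["effects"])
                return
        raise ValueError(f"no template achieves {fact}")
    for pre in goal["preconditions"]:
        achieve(pre)
    steps.append(goal["name"])
    state.update(goal["effects"])
    return steps

templates = {
    'MAKE-NOUN:"CAT"':       {"preconditions": [],
                              "effects": {("topic", "CAT")}},
    "LOCATE-STATNRY-ANIMAL": {"preconditions": [("topic", "CAT")],
                              "effects": {("ident", "CAT"), ("position", "CAT")}},
}
goal = {"name": "PARKING-VEHICLE",
        "preconditions": [("ident", "CAT"), ("position", "CAT")],
        "effects": {("position", "CAR"), ("topic", "CAR")}}
order = plan(goal, templates, set())
```

The resulting order performs the noun sign, then the locating CP, then the parking CP, mirroring the planning tree.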

Page 47:

Step 10: Build Phonological Spec

[Timeline of articulators: the right hand performs MAKE-NOUN:“HOUSE”, LOCATE-BULKY-OBJECT, MAKE-NOUN:“CAT”, LOCATE-STATNRY-ANIMAL, MAKE-NOUN:“CAR”, and finally PARKING-VEHICLE. Gaze is at the viewer during each noun sign, at the assigned location during each locating CP, and follows the right hand during PARKING-VEHICLE. The left hand performs PLATFORM concurrently with PARKING-VEHICLE.]
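Read as data, this specification is a timeline of concurrent articulator values. The slot contents below are transcribed loosely from the example; the timing granularity and field names are assumptions.

```python
# Each slot holds the concurrent values of the articulators in use.
timeline = [
    {"right_hand": "sign HOUSE",             "gaze": "at viewer"},
    {"right_hand": "move to house location", "gaze": "at house location"},
    {"right_hand": "sign CAT",               "gaze": "at viewer"},
    {"right_hand": "move to cat location",   "gaze": "at cat location"},
    {"right_hand": "sign CAR",               "gaze": "at viewer"},
    {"right_hand": "trace path of car, stop at its location",
     "left_hand": "PLATFORM at car's final location",
     "gaze": "follow right hand"},
]
```

The last slot illustrates why the phonological model needs simultaneous, somewhat independent articulators: three of them carry meaning at once.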

Page 48:

Wrap-Up and Discussion

Page 49:

Wrap-Up

• This is the first MT approach proposed for producing ASL classifier predicates.

• Currently in the early implementation phase.

• Generation models for ASL CPs:
– discourse (topicalized / identified / positioned)
– semantics (invisible ghosts)
– syntax (planning operators)
– phonology (simultaneous articulators)

Page 50:

Discussion

• ASL as an MT research vehicle:
– The need for a spatial representation to translate some English-to-ASL sentence pairs.
– Virtual reality as an intermediate MT representation.
– A translation pathway tailored to a specific phenomenon as part of a multi-path system.
– Symmetry in the use of planning on the analysis and generation sides of the MT architecture.

Page 51:

Positive Implications of this Design

• ASL animation for user interfaces.
– CPMLs are important to generate.
– Use GUI coordinates to arrange ghosts.

• Non-CPML ASL linguistic phenomena that make use of space.
– The model of space helps us generate them.

• CPMLs in other national sign languages.
– Some differences, but many similarities to ASL.

(extra slides)

Page 52:

Questions?

Page 53:

Extra Slides

Page 54:

Advantage: Producing ASL animation for user interfaces.

• CPs are important for ASL on a user interface.
– Can’t “write” ASL on buttons and menus.
– The character must refer to parts of the screen.
– ASL uses CPs to do this.

• “Grab” GUI screen coordinates to lay out our placeholders (the little red dots).
– Don’t need the AnimNL software.
– Can update placeholders dynamically.

Pages 55–60: [Image-only slides: animation frames. Original Image: Simon the Signer (Bangham et al. 2000.)]

Page 61:

Advantage: Create CPs for other national sign languages.

• Other national sign languages have their own signs and structure distinct from ASL.

• However, nearly all have a system of classifier predicates that is similar to ASL’s.
– The specific handshapes used might differ, as well as some other motion details.
– Future potential: this 3D approach to CP production should be easy to adapt to these other sign languages.

Page 62:

Advantage: Producing non-CP ASL linguistic phenomena.

• There are other linguistic phenomena in ASL (aside from classifier predicates) that could benefit from the way this system keeps track of the space around the signer:
– Pronouns
– Verb Agreement
– Narrative Role-Shifting
– Contrastive Discourse Structure

Page 63:

Potential Future Extension: Sign language virtual reality for the deaf-blind.

• Tactile sign language
– Feel the signer’s hands move through 3D space.

• This system is unique in its use of virtual reality.
– Graphics software arranges the 3D objects.
– The signing character is also a detailed 3D object.

• Future potential: ASL tactile sign language.
– A deaf-blind user could experience the motion of the signer’s hands using a tactile feedback glove.

Page 64:

Challenge: Calculating 3D motion paths is difficult.

• Several approaches to generating the 3D motion path of the hands were examined.*
– For linguistic and engineering reasons, several simplistic approaches were discounted:
• Pre-storing all possible motion paths.
• A rule-based approach to constructing motion paths.
– To produce CPs, the system needs to model the 3D layout of the objects under discussion.

* Huenerfauth, M. 2004. “Spatial Representation of Classifier Predicates for MT into ASL.” Workshop on Representation and Processing of Signed Languages, LREC-2004.

Page 65:

Challenge: Some motion paths are linguistically determined.

• Some CP motion paths are linguistically, not visually, determined.
– E.g. a leisurely walking upright figure.
– Motion of the 3D character ≠ motion of the hand.

• Solution: Store CPs as a set of templates.
– A template represents the prototypical form of a CP.
– Fill in 3D coordinate details at run-time.
– Some 3D paths are taken from the virtual reality.
– Some 3D info is hard-coded inside the template.
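The split between scene-derived and template-stored path information can be sketched as follows; the `path_source` encoding and the sample paths are assumptions for illustration.

```python
def motion_path(template, ghost_path):
    """Choose where the hand's 3D path comes from: copied out of the
    virtual-reality scene, or hard-coded in the CP template."""
    if template["path_source"] == "virtual_reality":
        return ghost_path                  # visually determined path
    return template["canonical_path"]      # linguistically determined path

# A parking CP follows the car's actual scene path; a "leisurely walking
# upright figure" uses a stored prototypical bouncing path instead.
parking = {"path_source": "virtual_reality", "canonical_path": None}
leisurely_walk = {"path_source": "template",
                  "canonical_path": [(0.0, 0.0, 0.0), (0.1, 0.05, 0.0),
                                     (0.2, 0.0, 0.0)]}
scene_path = [(0.3, 0.0, 0.0), (0.1, 0.0, 0.2)]
```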

Page 66:

Challenge: English-to-ASL is not always a 1-to-1 mapping.

• Sometimes there is no one-to-one mapping from English sentences to classifier predicates.

• Solution: Use the same formalism to represent the structure inside and in-between CPs.
– Makes it easy to store one-to-one, many-to-one, one-to-many, and many-to-many mappings.
– Allows one CP to affect the production of another.
– Allows CPs to work together to convey information.