
Accessibility Motivations for an English-to-ASL

Machine Translation System

Matt Huenerfauth

Closing Plenary

The 6th International ACM SIGACCESS Conference on Computers and Accessibility

October 20, 2004 Atlanta, GA, USA

Computer and Information Science, University of Pennsylvania

Research Advisors: Mitch Marcus & Martha Palmer

A two-part story…

• Development of English-to-ASL machine translation (MT) software for accessibility applications has been slow…

– Misconceptions: the deaf experience, ASL linguistics, and ASL’s relationship to English.

– Challenges: some ASL phenomena are very difficult (but important) to translate.

Misconceptions about Deaf Literacy and ASL MT

How have they affected research?

• Only half of deaf high school graduates (age 18+) can read English at a fourth-grade (age 10) level, despite ASL fluency.

• Many deaf accessibility tools forget that English is a second language for these students (and has a different structure).

• Applications for a Machine Translation System:
– TV captioning, teletype telephones.
– Computer user-interfaces in ASL.
– Educational tools using ASL animation.
– Access to information/media.

Image: Audiology Online

Misconception: All deaf people are written-English literate.

What’s our input? English Text.

What’s our output? ASL has no written form.

Imagine a 3D virtual reality human being…

One that can perform sign language…


But this character needs a set of instructions telling it how to move!

Our job: English → These Instructions.

Image: VCom3D

Building tools to address deaf literacy…

Photos: Seamless Solutions, Inc.; Simon the Signer (Bangham et al. 2000); VCom3D Corporation

We can use an off-the-shelf animated character.


Misconception: ASL is just manually performed English.

• Signed English vs. American Sign Language.

• Some ASL sentences have a structure similar to that of written languages.

• Other sentences use the space around the signer to describe the 3D layout of a real-world scene.
– Hands indicate movement and location of entities in the scene (using special handshapes).
– These are called “Classifier Predicates.”

What’s a Classifier Predicate (CP)?

The car drove down the winding road past a house.

HOUSE – {location of house}

CAR – {follow winding path}

Where are the house, the road, and the car? How close? Where does the path start/stop? How to show the path is bumpy, winding, or hilly?

Misconception: Traditional MT software is well-suited to ASL.

• Classifier predicates are hard to produce.
– 3D paths for the hands, layout of the scene.
– Grammar rules & dictionaries? Not enough.

• No written form of ASL.
– Very little English–ASL parallel corpus data.
– Can’t use machine learning approaches.

• Previous systems are only partial solutions.
– Some produce only Signed English, not ASL.
– None can produce classifier predicates.

But classifier predicates are important!
– CPs are needed to convey many concepts.
– Signers use CPs frequently.*
– English sentences that produce CPs are the ones that signers often have trouble reading.
– CPs needed for some important applications:
• User-interfaces with ASL animation
• Literacy educational software

* Morford and MacFarlane. 2003. “Frequency Characteristics of American Sign Language.” Sign Language Studies 3:2.

Misconception: OK to ignore visual/spatial ASL phenomena.

ASL MT Challenges: Producing Classifier Predicates

And some novel solutions…

• Several approaches to generating the 3D motion path of the hands were examined.*

– For linguistic and engineering reasons, several simplistic approaches were discounted:

• Pre-storing all possible motion paths.

• A rule-based approach to constructing motion paths.

– To produce CPs, the system needs to model the 3D layout of the objects under discussion.

Challenge: Calculating 3D motion paths is difficult.

* Huenerfauth, M. 2004. “Spatial Representation of Classifier Predicates for MT into ASL.” Workshop on Representation and Processing of Signed Languages, LREC-2004.

• English Sentence → 3D Scene Layout

• AnimNL System*
– Virtual reality model of 3D scene.
– Input: English sentences that tell the characters/objects in the scene what to do.
– Output: An animation in which the characters/objects obey the English commands.

Challenge: Calculating 3D motion paths is difficult.

* Bindiganavale, Schuler, Allbeck, Badler, Joshi, & Palmer. 2000. "Dynamically Altering Agent Behaviors Using Nat. Lang. Instructions." Int'l Conf. on Autonomous Agents.

AnimNL Software

“the car drove down the winding road past the house.”

Calculating 3D Paths for Hands

“the car drove down the winding road past the house.”

Original Image: Simon the Signer (Bangham et al. 2000.)

• Difficult to get correct.

• Not important for classifier predicates.

• Solution: Detailed objects → 3D points (a sketch follows this slide).
– Record 3D orientation of object (sometimes).
– Record what handshape should be used.

• E.g. bulky objects get one handshape.

• E.g. motorized vehicles get a different one.

Challenge: Minor 3D visual details are error-prone.
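A minimal Python sketch of this reduction, assuming hypothetical category names and handshape labels (the system’s actual inventory is not given in the talk):

```python
# Sketch: reducing detailed scene objects to annotated 3D points.
# Categories and handshape labels below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ScenePoint:
    """A scene object reduced to a 3D point plus CP annotations."""
    name: str
    position: Tuple[float, float, float]   # 3D location in the scene model
    orientation: Optional[float] = None    # heading in radians, recorded only sometimes
    handshape: str = "NEUTRAL"             # classifier handshape to use

# Hypothetical mapping from semantic category to classifier handshape.
CATEGORY_TO_HANDSHAPE = {
    "motorized_vehicle": "VEHICLE_CL",   # e.g. cars, trucks
    "bulky_object": "BULKY_CL",          # e.g. houses, boulders
}

def reduce_object(name: str, category: str,
                  position: Tuple[float, float, float],
                  orientation: Optional[float] = None) -> ScenePoint:
    """Collapse a detailed 3D object to the info a CP actually needs."""
    handshape = CATEGORY_TO_HANDSHAPE.get(category, "NEUTRAL")
    return ScenePoint(name, position, orientation, handshape)

car = reduce_object("car", "motorized_vehicle", (0.0, 0.0, 0.0), orientation=1.57)
house = reduce_object("house", "bulky_object", (2.0, 0.0, 1.0))
```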

Reducing the visual detail…

“the car drove down the winding road past the house.”

Original Image: Simon the Signer (Bangham et al. 2000.)

[Figure: the detailed scene reduced to labeled placeholder points — CAR, HOUSE]

• Computation time: 3D processing of scene.

• Development effort: extending AnimNL.

• Only need 3D processing to produce CPs.
– English sentences not discussing 3D scenes or spatial relationships are easier to translate.
– Can use traditional MT technology instead.

• Solution: Multi-Path System*

Challenge: 3D processing is resource intensive.

* Huenerfauth, M. 2004. “A Multi-Path Architecture for English-to-ASL MT.” HLT-NAACL Student Workshop.

Multi-Path Architecture:

English input sentences are routed down one of three paths:
– Spatially descriptive English sentences → 3D Software → ASL sentence containing a classifier predicate.
– Most English sentences → Traditional MT Software → ASL sentence not containing a classifier predicate.
– Sentences that the MT software cannot successfully translate → Word-to-Sign Look-up → Signed English sentence.
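A minimal sketch of the routing logic in this architecture; the pathway-selection predicates are stub assumptions for illustration, not the actual system’s tests:

```python
# Sketch of the multi-path routing described above. Real components would
# be the 3D scene-modeling pipeline, a traditional MT engine, and a sign
# dictionary; here they are represented only by the routing decision.
from enum import Enum, auto

class Pathway(Enum):
    SCENE_3D = auto()        # spatially descriptive sentences -> CP output
    TRADITIONAL_MT = auto()  # most sentences -> non-CP ASL output
    WORD_TO_SIGN = auto()    # fallback -> Signed English output

def choose_pathway(sentence: str) -> Pathway:
    """Route a sentence down one of the three translation pathways."""
    if is_spatially_descriptive(sentence):      # assumed semantic test
        return Pathway.SCENE_3D
    if traditional_mt_can_handle(sentence):     # assumed coverage check
        return Pathway.TRADITIONAL_MT
    return Pathway.WORD_TO_SIGN                 # always-available fallback

# Placeholder predicates so the sketch runs; real versions would inspect
# the sentence's semantics and the MT engine's coverage.
def is_spatially_descriptive(sentence: str) -> bool:
    return any(w in sentence.lower() for w in ("drove", "path", "road", "past"))

def traditional_mt_can_handle(sentence: str) -> bool:
    return len(sentence.split()) < 15

print(choose_pathway("The car drove down the winding road past a house."))
# -> Pathway.SCENE_3D
```

The fallback path matters for robustness: even when both richer pathways fail, the system can still emit a Signed English rendering rather than nothing.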

• Linguists have proposed symbolic representations of ASL output (“phonological models”).
– Streams of raw animation coordinates are unwieldy.
– Need something more abstract, parameterized.
– Hand shape, orientation, location, motion.

• But current representations are ill-suited to CPs.
– Too much handshape info, too little orientation info.
– Hard to specify 3D motion paths.

• Solution: New linguistic model designed for CPs (a sketch follows this slide).*

Challenge: ASL phonological models ill-suited to CPs.

* Huenerfauth, M. 2004. “Spatial and Planning Models of ASL CPs for MT.” TMI-2004.
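A sketch of what a CP-oriented representation might look like, assuming illustrative field names: a single symbolic handshape label, full orientation at every keyframe, and an explicit 3D motion path:

```python
# Sketch of a parameterized CP representation along the lines the talk
# argues for. Field names are assumptions, not the model's actual terms.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class CPKeyframe:
    location: Vec3      # hand location in signing space
    orientation: Vec3   # full 3D palm-orientation vector, kept at every keyframe

@dataclass
class ClassifierPredicateSpec:
    handshape: str                                         # one symbolic handshape label
    path: List[CPKeyframe] = field(default_factory=list)   # explicit 3D motion path

cp = ClassifierPredicateSpec(
    handshape="VEHICLE_CL",
    path=[
        CPKeyframe((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)),
        CPKeyframe((0.2, 0.0, 0.1), (0.3, 0.0, -0.9)),
    ],
)
```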

• Some CP motion paths are linguistically, not visually, determined.
– E.g. leisurely walking upright figure.
– Motion of 3D character ≠ Motion of hand.

• Solution: Store CPs as a set of templates (a sketch follows this slide).
– Template represents prototypical form of a CP.
– Fill in 3D coordinate details at run-time.
– Some 3D paths taken from the virtual reality.
– Some 3D info hard-coded inside the template.

Challenge: Some motion paths linguistically determined.
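A minimal sketch of template instantiation, using the slide’s “leisurely walking upright figure” example; the hard-coded bounce is an assumption about what the linguistically determined part of the motion might look like:

```python
# Sketch: a CP template whose 3D details are filled in at run time.
# Endpoints come from the virtual-reality scene; the stylized bounce is
# hard-coded in the template (an illustrative assumption), since the
# hand's motion is not simply the 3D character's motion.
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def walking_figure_template(start: Vec3, end: Vec3, steps: int = 10) -> List[Vec3]:
    """Hand path for the 'leisurely walking upright figure' CP."""
    path: List[Vec3] = []
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + t * (end[0] - start[0])
        z = start[2] + t * (end[2] - start[2])
        # Gentle up-and-down bob, four cycles along the path (assumed form).
        y = start[1] + 0.02 * abs(math.sin(t * math.pi * 4))
        path.append((x, y, z))
    return path

print(walking_figure_template((0.0, 0.0, 0.0), (0.5, 0.0, 0.3))[:3])
```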

• Sometimes no one-to-one mapping from English sentences to classifier predicates.

• Solution: Use the same formalism to represent the structure inside and in-between CPs (a sketch follows this slide).
– Makes it easy to store one-to-one, many-to-one, one-to-many, and many-to-many mappings.
– Allows one CP to affect production of another.
– Allows CPs to work together to convey info.

Challenge: English-to-ASL not always 1-to-1 mapping.
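One hedged way to picture a single formalism covering both inside-CP and between-CP structure is a plan tree, so one English input can expand into one CP, several coordinated CPs, or part of a larger CP; node labels here are illustrative assumptions:

```python
# Sketch: one tree formalism for structure inside and between CPs,
# which makes one-to-many and many-to-many mappings easy to store.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlanNode:
    label: str                                        # e.g. "CP_MOVE", "HOUSE"
    children: List["PlanNode"] = field(default_factory=list)

# One English sentence -> two coordinated CPs (a one-to-many mapping):
sentence_plan = PlanNode("SCENE", children=[
    PlanNode("CP_LOCATE", children=[PlanNode("HOUSE")]),
    PlanNode("CP_MOVE",   children=[PlanNode("CAR"), PlanNode("WINDING_PATH")]),
])
```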

Implications of this Design

Advantages of a 3D Model

• CPs are important for ASL on a user-interface.
– Can’t “write” ASL on buttons and menus.
– Character must refer to parts of screen.
– ASL uses CPs to do this.

• “Grab” GUI screen coordinates to lay out our placeholders (the little red dots); a sketch follows this slide.
– Don’t need AnimNL software.
– Can update placeholders dynamically.

Advantage: Producing ASL animation for user-interfaces.
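A minimal sketch of grabbing GUI coordinates as placeholders, assuming a simple pixel-to-signing-space mapping (the actual mapping is not specified in the talk):

```python
# Sketch: each on-screen widget becomes a placeholder point the signing
# character can refer to with a CP, with no AnimNL-style scene analysis.
from typing import Dict, Tuple

def widgets_to_placeholders(
    widgets: Dict[str, Tuple[int, int, int, int]],   # name -> (x, y, w, h) in pixels
    screen: Tuple[int, int] = (1024, 768),
) -> Dict[str, Tuple[float, float, float]]:
    """Map widget centers to points on a virtual plane in signing space."""
    sw, sh = screen
    placeholders = {}
    for name, (x, y, w, h) in widgets.items():
        cx, cy = x + w / 2, y + h / 2
        # Normalize to [-0.5, 0.5] and place on a plane in front of the signer.
        placeholders[name] = (cx / sw - 0.5, 0.5 - cy / sh, 0.4)
    return placeholders

print(widgets_to_placeholders({"OK_button": (900, 700, 80, 30)}))
```

Because the placeholders are computed from live screen coordinates, they can be updated dynamically as the interface changes.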

Original Images: Simon the Signer (Bangham et al. 2000)

• Other national sign languages have their own signs and structure distinct from ASL.

• However, nearly all have a system of classifier predicates that is similar to ASL’s.
– The specific handshapes used might differ, as well as some other motion details.

• Future Potential: This 3D approach to CP production should be easy to adapt to these other sign languages.

Advantage: Create CPs for other national sign languages.

• There are other linguistic phenomena in ASL (aside from classifier predicates) that could benefit from the way this system keeps track of the space around the signer:
– Pronouns
– Verb Agreement
– Narrative Role-Shifting
– Contrastive Discourse Structure

Advantage: Producing non-CP ASL linguistic phenomena.

Summary

• Misconceptions, challenges, implications…

• This is the first MT approach proposed for producing ASL Classifier Predicates.
– New 3D modeling approach to handle CPs.
– Embed into multi-path English-to-ASL system.
– Amenable to incorporation in a user-interface.

• Currently in early implementation phase.

One more interesting future potential of this design…

• Tactile Sign Language
– Feel the signer’s hands move through 3D space.

• This system: unique in use of virtual reality.
– Graphics software arranges the 3D objects.
– Signing character is also a detailed 3D object.

• Future Potential:
– ASL → Tactile Sign Language
– A deaf-blind user could experience the motion of the signer’s hands using a tactile feedback glove.

Potential Future Extension: Sign language virtual reality for the deaf-blind

Questions?
