Performance Assessment Methodology Workshop
Human Robotic Working Group Meeting (Transcript)
Appendix I
Performance Assessment Methodology Workshop
Introduction and Review
Performance Assessment Methodology Workshop
Embassy Suites Hotel, 211 East Huntington Drive, Arcadia, California 91006
June 21, 2001 Future Missions & Task Primitives/Baldwin Room - Mike Duke
Optimal Allocation of Human & Robot Roles/Cancun Room - George Bekey
Organizing Committee - NASA Human-Robot Joint Enterprise Working Group
D. Clancy, ARC; G. Rodriguez, JPL; M. DiJoseph, HQ; B. Ward, JSC; R. Easter, JPL; J. Watson, LaRC; J. Kosmo, JSC; K. Watson, JSC; R. Moe, GSFC; C. R. Weisbin, JPL
Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop
Objectives
• To provide a forum where scientists, technologists, engineers, mission architects and designers, and experts in other relevant disciplines can together identify the best roles that humans and robots can play in space over the next decades.
• To bring to bear the collective expertise of broad communities in assisting NASA in a thrust toward a better understanding of the complementary roles of humans and robots in space operations.
Performance Assessment Methodology Workshop
Attendees

Dave Akin, University of Maryland; Jim Albus, NIST; Jacob Barhen, ORNL; Arun Bhadoria, USC; Maria Bualat, ARC; Robert Burridge, JSC; Art Chmielewski, JPL; Tim Collins, LaRC; Chris Culbert, JSC; Mary DiJoseph, HQ; Bill Doggett, LaRC; Steve Dubowsky, MIT; Robert Easter, JPL; Richard Fullerton, JSC; Mark Gittleman, Oceaneering, Inc.; Tony Griffith, JSC; Dave Kortenkamp, JSC; Josip Loncaric, LaRC; Neville Marzwell, JPL; Rudd Moe, Goddard; Bill Muehlberger, University of Texas; Lynne Parker, ORNL; Lew Peach, USRA; Liam Pederson, ARC; Trish Pengra, HQ; Stephen Prusha, JPL; Sudhakar Rajulu, JSC; Steve Rock, Stanford University; Guillermo Rodriguez, JPL; Paul Schenker, JPL; Reid Simmons, CMU; Kevin Watson, JSC
Workshop Organizer - Charles R. Weisbin, JPL
Future Missions & Task Primitives / Chairman - Mike Duke, Colorado School of Mines
Optimal Allocation of Human & Robot Roles / Chairman - George Bekey, USC
Performance Assessment Methodology Workshop
Agenda - Thursday, June 21, 2001, 8:00 a.m. - 5:30 p.m.

Time / Topic / Speaker
8:00 - 8:30 a.m.  Welcome - Robert Easter; Introduction - Charles Weisbin
8:30 - 11:30 a.m.  Break-Out Session I
  - Future Missions & Task Primitives - Mike Duke
  - Optimal Allocation of Human & Robot Roles - George Bekey
11:30 a.m. - 12:30 p.m.  Lunch
12:30 - 3:30 p.m.  Break-Out Session II
  - Optimal Allocation of Human & Robot Roles - George Bekey
  - Future Missions & Task Primitives - Mike Duke
3:30 - 4:00 p.m.  Break
4:00 - 5:30 p.m.  Concluding Plenary Session - Charles Weisbin
The workshop will focus on producing a set of useful products that NASA needs in developing and quantitatively justifying its technology investment goals. To this end, the workshop will provide a framework for addressing a range of specific questions of critical importance to NASA's future missions. These questions can be grouped as follows:
What are the most promising human/robot future mission scenarios and desired functional capabilities?
For a given set of mission scenarios, how can we identify appropriate roles for humans and robots?
What set of relatively few and independent primitive tasks (e.g. traverse, self-locate, sample selection, sample acquisition, etc.), would constitute an appropriate set for benchmarking human/robot performance? Why?
How is task complexity, the degree of difficulty, best characterized for humans and robots?
Performance Assessment Methodology Workshop
Future Missions and Task Primitives
Performance Assessment Methodology Workshop Optimal Allocation of Human and Robot Roles
• How can quantitative assessments be made of humans and robots working either together or separately in scenarios and tasks related to space exploration? Are additional measurements needed beyond those in the literature?
• What performance criteria should be used to evaluate what humans and robots do best in conducting operations? How should the results using different criteria be weighted?
• How are composite results from multiple primitives to be used to address overall questions of relative optimal roles?
• How should results of performance testing on today's technology be suitably extrapolated to the future, including possible variation in environmental conditions during the mission?
Performance Assessment Methodology Workshop Optimal Allocation of Human and Robot Roles (Cont.)
• How should results of performance testing on today's technology be suitably extrapolated to the future, including possible variations in environmental mission dynamics?
• How are disciplinary topics (learning, dynamic re-planning) incorporated into the evaluation?
The workshop will draw from expertise in fields as diverse as space science, robotics, human factors, aerospace engineering, and computer science, as well as the classical fields of mathematics and physics. The goal is to invite a select number of participants who can offer unique perspectives to the workshop.
Performance Assessment Methodology Workshop Concluding Remarks
Appendix II
Performance Assessment Methodology Workshop
Working Group I: Future Missions and Task Primitives
Mike Duke, Colorado School of Mines
Performance Assessment Methodology Workshop
Working Group I: Future Missions and Task Primitives
Questions to be Addressed
• What are promising future missions and desired functional capabilities?
• What primitive tasks would constitute an appropriate set for benchmarking human/robot performance?
• How is task complexity, the degree of difficulty, best characterized for humans and robots?
Approach
• Define a basic scenario for planetary surface exploration (many previous analyses of space construction tasks are applicable to telescope construction)
• Identify objectives
• Characterize common capabilities for task performance
• Determine complexities associated with implementing common capabilities
• Define "task complexity" index
• Provide experiment planning guidelines
Task: Explore a Previously Unexplored Locale (~500 km²) on Mars
• Top level objectives
  – Determine geological history (distribution of rocks)
  – Search for evidence of past life
  – Establish distribution of water in surface materials and subsurface (to >1 km)
  – Determine whether humans can exist there
• This level of description is not sufficient to compare humans, robots, and humans + robots
More Detailed Objectives
• Reconnoiter surface
• Identify interesting samples
• Collect/analyze samples
• Drill holes
• Emplace instruments
• These are probably at the right level to define primitives
Task Characteristics
• Each of the objectives can be implemented with capabilities along several dimensions
  – Manipulation
  – Cognition
  – Perception
  – Mobility
• These capabilities occupy a range of complexity for given tasks
  – E.g. mobility systems may encounter terrains of different complexity
Mobility - Characteristics
• Distance/range
• Speed
• Terrain (slopes, obstacles, texture, soil)
• Load carried
• Altitude
• Agility (turn radius)
• Stability
• Access (vertical, sub-surface, small spaces, etc.)
• Suitability to environment
• Suitability to task
• Reliability, maintainability
• Moving with minimum disturbance to environment
• Autonomy
• Mission duration
• Data analysis rate
• Capability for rescue
• Navigation, path planning
Complexity as related to terrain
- Scale of features
- Distribution of targets, obstacles
- Slope variability
- Environmental constraints
- Soil, surface consistency
- Degree of uncertainty
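The capability dimensions and terrain-complexity factors listed above lend themselves to a simple rating structure. The sketch below is one illustrative way to encode them; the class names, the 0-to-1 rating scale, and both composite formulas are assumptions made here for discussion, not definitions from the workshop.

```python
from dataclasses import dataclass

@dataclass
class TerrainComplexity:
    # The five factors mirror the terrain list above; 0 = benign, 1 = severe.
    feature_scale: float        # scale of features
    target_density: float       # distribution of targets, obstacles
    slope_variability: float
    soil_consistency: float
    uncertainty: float          # degree of uncertainty

    def score(self) -> float:
        """Unweighted mean of the factors, as a simple composite (assumed form)."""
        vals = [self.feature_scale, self.target_density,
                self.slope_variability, self.soil_consistency,
                self.uncertainty]
        return sum(vals) / len(vals)

@dataclass
class PrimitiveRating:
    # One task primitive rated along the four capability dimensions above.
    name: str                   # e.g. "traverse", "sample acquisition"
    manipulation: float
    cognition: float
    perception: float
    mobility: float
    terrain: TerrainComplexity

    def demand(self) -> float:
        """Most demanding dimension, scaled up by terrain complexity (illustrative)."""
        dims = max(self.manipulation, self.cognition,
                   self.perception, self.mobility)
        return dims * (1.0 + self.terrain.score())

# Example: a traverse primitive dominated by mobility, on moderately hard terrain.
traverse = PrimitiveRating(
    "traverse", manipulation=0.1, cognition=0.3, perception=0.5, mobility=0.9,
    terrain=TerrainComplexity(0.4, 0.3, 0.5, 0.2, 0.6))
print(round(traverse.demand(), 3))
```

Such records give a common vocabulary for comparing how hard the same primitive is for a human, a robot, or a human-robot team.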
Perception/Cognition
• Sensing
• Analysis (e.g. chemical analysis)
• Training
• Data processing
• Context
• Learning
• Knowledge
• Experience
• Problem solving
• Reasoning
• Planning/replanning
• Adaptability
• Communication
• Navigation
Manipulation
• Mass, volume, gravity
• Unique vs. standard shapes
• Fragility, contamination, reactivity
• Temperature
• Specific technique
  – Torque
  – Precision
  – Complexity of motion
• Repetitive vs. unique
• Time
• Consequence of failure
• Need for multiple operators
• Moving with minimal disturbance
Task Difficulty
• Function of task as well as system performing task
• Includes multidimensional aspects
  – Perception (sensing, recognition/discrimination/classification)
  – Cognition (modelability (environmental complexity), error detection, task planning)
  – Actuation (mechanical manipulations, control)
• General discriminators
  – Length of task sequence and variety of subtasks
  – Computational complexity
  – Number of constraints placed by system
  – Number of constraints placed by environment
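The general discriminators above suggest that the "task complexity" index called for in the approach could be formed by combining them. The function below is a hypothetical sketch: the log scaling and the weights are assumptions, chosen only so that long task sequences and heavy computation do not swamp the constraint counts.

```python
import math

def task_complexity(seq_len: int, n_subtask_types: int,
                    compute_ops: float, sys_constraints: int,
                    env_constraints: int,
                    weights=(1.0, 1.0, 0.5, 0.7, 0.7)) -> float:
    """Weighted sum of the general discriminators (dimensionless index).

    seq_len          -- length of the task sequence
    n_subtask_types  -- variety of subtasks
    compute_ops      -- rough computational workload (operations)
    sys_constraints  -- number of constraints placed by the system
    env_constraints  -- number of constraints placed by the environment
    The log1p scaling and weights are illustrative assumptions.
    """
    terms = [math.log1p(seq_len), math.log1p(n_subtask_types),
             math.log1p(compute_ops), sys_constraints, env_constraints]
    return sum(w * t for w, t in zip(weights, terms))

# Example: a 40-step traverse with 5 subtask types, ~1e6 operations,
# 2 system constraints and 3 environmental constraints.
print(round(task_complexity(40, 5, 1e6, 2, 3), 2))
```

Any monotone combination would serve the same benchmarking purpose; what matters is that the same index is computed for the human, robotic, and combined versions of a task.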
Experiment Planning
• Thought experiments
  – Conceptual feasibility
  – Eliminate portions of trade space
• Constructed experiments
  – Natural analogs - need to document parameters
  – Controlled experiments
    • Isolate parameters
• Experiments must be chosen to reflect actual exploration tasks in relevant environments
• Questions are exceedingly complex due to their multidimensionality
• How to determine the optimum division of roles, which may change with task, environment, time frame, etc., is a difficult problem
Exploration Implementation Options

[Table: each implementation option was rated for Site Access, Hdw Repair, Safety Risk, Data Scope, and Rel Cost, with levels None/Partial/Full and Lowest through Highest; the cell-to-column alignment was lost in transcription. The recoverable Human Role / Robot Method pairings are:]
• Earth based control / Remote teleoperation
• Orbital habitat / Local teleoperation
• Suited humans on foot / Precursors only
• Lander habitat, no EVA (pressurized garage) / Variable autonomy
• Lander habitat, no EVA / Local teleoperation
• Suited transportable humans (w/ rovers) / Variable autonomy (total crew access)
• Canned mobility (no EVA capability) / Variable autonomy (dockable to habitat)
• Lander habitat, no EVA / Variable autonomy
• Earth based monitoring / Fully automated
Appendix III
Performance Assessment Methodology Workshop
Quantitative Assessment
Working Group II: Optimal Allocation of Human & Robot Roles
George Bekey, University of Southern California (USC)
Performance Assessment Methodology Workshop
Working Group II: Optimal Allocation of Human & Robot Roles
Quantitative Assessment
• Total Cost ($)
• Time to complete task
• Risk to mission
• Degree of uncertainty in environment
• Detection of unexpected events
• Task complexity (branching)
[Chart: robots (R) vs. humans (H) on manipulation planning/perception and "low level" tasks]

PF = f(performance, success, cost) = f(p, s, c)
ΔPF = (∂f/∂p)Δp + (∂f/∂s)Δs + (∂f/∂c)Δc
Quantitative Assessment - Standard Measures
Quantitative Assessment - Limitations of Humans
• Health and Safety
• Dexterity of Suited Human
• Strength
Quantitative Assessment - Limitations of Robots
• Task Specific
• Adaptability
• Situation Assessment/Interpretation
Quantitative Assessment - Performance Measurement
• Expected information gained / $ (utility theory)
• Probability of success

[Chart: expected cost E(Cost) vs. science return (performance) and probability of success; H = humans, R = robots]
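The expected-information-per-dollar measure above can be applied directly to rank candidate human/robot allocations. In the sketch below, the information, probability, and cost figures are invented placeholders, not workshop data; only the form of the utility measure comes from the slide.

```python
def utility(info_bits: float, p_success: float, cost_dollars: float) -> float:
    """Expected information gained per dollar spent (success-weighted)."""
    return info_bits * p_success / cost_dollars

# Placeholder numbers for three allocation options (assumed, for illustration):
options = {
    "robot-only":  utility(info_bits=100, p_success=0.9, cost_dollars=1e8),
    "human-only":  utility(info_bits=500, p_success=0.7, cost_dollars=1e9),
    "human+robot": utility(info_bits=600, p_success=0.8, cost_dollars=1e9),
}

# Rank allocations by the utility measure.
best = max(options, key=options.get)
print(best, options[best])
```

With these particular placeholder numbers the cheap robot-only option wins per dollar even though it returns the least information, which is exactly the kind of trade the criterion is meant to expose.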
Quantitative Assessment - Scenario I: Space Telescope
Cost
• Mass In Situ
• Time to recover
• Frequency of Occurrence
• Reliability of Tasks
• Launch Mass/Volume
[Charts: $ vs. mass; $ vs. technology change (TC)]
Quantitative Assessment - Performance
• Based on such quantities as:
  – Time to completion of task
  – Mass
  – Energy required
  – Information requirements
Quantitative Assessment - Issues
• Quantification is difficult
• Planetary geology requires humans
• Define realistic mission/benefits
• How to improve predictability
• Select a performance level and then study cost/risk trade-offs
• Trade studies are important for research also
• Consider E(Cost), E(Performance)
• Eventually, humans will be on Mars
• Difficult to use probabilities - but important
• A blend of humans and robots is bound to be better
• Assumptions need to be clear regarding capabilities of humans and robots
Quantitative Assessment - Can We Get the Data to Evaluate Performance?
• Task: Travel 10km and return
• Assume:
- Terrain complexity is bounded
• How to quantify:
- Time
- Energy
- Cost
- Probability of success?
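For the bounded-terrain traverse task above, these quantities can be estimated with a small Monte Carlo model. Everything numerical in the sketch below (the speed and energy distributions, the per-kilometer hazard rate) is an assumed placeholder; the point is only that time, energy, and probability of success become measurable once such bounds are stated.

```python
import random

def simulate(dist_km=20.0, trials=10_000, seed=1):
    """Monte Carlo estimate of the 10 km out-and-back traverse (20 km total)."""
    random.seed(seed)
    times, energies, successes = [], [], 0
    for _ in range(trials):
        speed = random.uniform(0.5, 2.0)        # km/h; bounded-terrain assumption
        energy_rate = random.uniform(50, 150)   # Wh/km; assumed range
        p_fail_per_km = 0.002                   # assumed per-km hazard rate
        # Traverse succeeds only if no failure occurs on any 1 km leg.
        ok = all(random.random() > p_fail_per_km for _ in range(int(dist_km)))
        times.append(dist_km / speed)
        energies.append(dist_km * energy_rate)
        successes += ok
    return (sum(times) / trials, sum(energies) / trials, successes / trials)

t, e, p = simulate()
print(f"mean time {t:.1f} h, mean energy {e:.0f} Wh, P(success) {p:.3f}")
```

The same harness could be re-run with human, robotic, or mixed parameter sets to feed the comparative performance measures discussed above.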
Quantitative Assessment - Standard Measures
• Mass
• Failure Probability
• Dexterity
• Robustness
• Cost
Quantitative Assessment - Non-Classical
• Detection of surprising events
• Branching of decision spaces
• Degree of uncertainty in environment
Quantitative Assessment - Summary
• Probability of success (risk⁻¹)
• Performance (science success)
• Cost
Quantitative Assessment

[Chart: humans (H) vs. robots (R) on cost, achievement (A) of objectives, and probability of mission success (PS)]
Quantitative Assessment - Overall Performance Criteria
PC = f(A, PS, C)

ΔPC = (∂f/∂A)(∂A/∂p_i)Δp_i + (∂f/∂PS)(∂PS/∂p_i)Δp_i + (∂f/∂C)(∂C/∂p_i)Δp_i + …
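The first-order sensitivity of the performance criterion PC to a parameter change can be checked numerically by finite differences. In the sketch below, the functional forms chosen for A, PS, C, and f are arbitrary placeholders; only the chain-rule structure mirrors the expression above.

```python
# Placeholder models of achievement, success probability, and cost as
# functions of a single capability parameter p (assumed forms, not data).
def A(p):  return 1.0 - 0.5 / p          # achievement of objectives
def PS(p): return 1.0 - 0.1 / p          # probability of mission success
def C(p):  return 100.0 * p              # cost grows with capability

def f(a, ps, c):
    """Overall performance criterion PC = f(A, PS, C) (assumed form)."""
    return a * ps / c

def dPC(p, dp=1e-6):
    """Finite-difference estimate of the change in PC for a small change dp.

    Approximates (df/dA)(dA/dp)dp + (df/dPS)(dPS/dp)dp + (df/dC)(dC/dp)dp.
    """
    pc0 = f(A(p), PS(p), C(p))
    pc1 = f(A(p + dp), PS(p + dp), C(p + dp))
    return pc1 - pc0

print(f"{dPC(2.0):.3e}")
```

At p = 2 the negative sign shows the cost growth outweighing the gains in achievement and success probability, illustrating how the sensitivity expression flags diminishing returns.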
[Chart: robots (R) vs. humans (H) on manipulation planning/perception and "low level" tasks]

PC = f(A, PS, C)
ΔPC = (∂f/∂A)(∂A/∂p_1)Δp_1 + (∂f/∂PS)(∂PS/∂p_1)Δp_1 + (∂f/∂C)(∂C/∂p_1)Δp_1

where p_1 is a parameter that changes by Δp_1
Quantitative Assessment - Critique from Afternoon Discussion Group
• Measures like cost, performance, and probability of success are too simple
• These measures are not orthogonal / not independent
• There are intangible factors that need to be considered
• Cannot do this without specifying the mission carefully and completely, but not narrowly
• Top-down analysis is desirable if possible, but dealing with primitives and trying to combine them is feasible