

Annual Report
Volume II

1. Core Project Summaries

1.1. Mobility & Manipulation Thrust

1.1.1. Mobile Manipulation Planning

The goal of this project is to develop the algorithms that underlie mobile manipulation in the Active Home and PerMMA systems. These algorithms must facilitate easy operation, safety, robustness, and high performance. One of the core technical research thrusts is to develop efficient motion planning algorithms for autonomous grasping and manipulation of household objects, and to integrate them into reliable robotic platforms.

Our primary research efforts over the past year have focused on developing general and fast algorithms for grasping and arm motion generation. The goal of a grasping algorithm is to find a pose and configuration of a robot end-effector that can grasp a target object in a manner suitable for performing a desired task. Grasp quality metrics, such as force closure or Q-distance, have been intensely studied in robotics for many years; however, these metrics generally neglect a key component of grasp selection: the object’s environment. Our research in grasping has focused on extracting information about an object’s surrounding environment in order to make informed decisions about how that object should be grasped. Considering the surrounding free space of an object situated in an environment allows the robot to quickly eliminate hand poses that place the hand in areas where there are obstacles, and to focus on areas where the hand is collision-free and the fingers make good contact with the object. Our grasping algorithms are able to find grasps in environments where state-of-the-art algorithms cannot, while requiring only a few seconds of computation. This year, we have extended these algorithms to consider two-handed grasps, with similar results.
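To make the free-space idea concrete, here is a minimal sketch (not the project’s actual implementation; the voxel grid, the sphere approximation of the hand, and all names are assumptions for illustration) of filtering candidate hand poses against an occupancy map of the object’s surroundings before any grasp-quality scoring:

```python
import numpy as np

# Illustrative sketch: reject candidate hand poses whose neighborhood in a
# 3-D occupancy grid contains obstacles, so only collision-free approach
# directions are passed on to grasp-quality evaluation.

def free_space_filter(candidates, occupancy, voxel_size, hand_radius):
    """Keep hand poses whose swept volume (approximated here by a sphere
    around the palm position) does not intersect occupied voxels."""
    keep = []
    r = int(np.ceil(hand_radius / voxel_size))
    for pose in candidates:                      # pose: 4x4 homogeneous transform
        i, j, k = (pose[:3, 3] / voxel_size).astype(int)
        window = occupancy[max(i-r, 0):i+r+1,
                           max(j-r, 0):j+r+1,
                           max(k-r, 0):k+r+1]
        if not window.any():                     # no obstacle voxels near the palm
            keep.append(pose)
    return keep

# toy usage: empty 1 m^3 workspace at 1 cm resolution, one candidate at center
occ = np.zeros((100, 100, 100), dtype=bool)
cand = [np.eye(4)]
cand[0][:3, 3] = [0.5, 0.5, 0.5]
print(len(free_space_filter(cand, occ, 0.01, 0.08)))  # -> 1
```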

We have also explored tasks where the robot’s motion is constrained. Everyday life is full of tasks that constrain our movement: carrying a coffee mug, lifting a heavy object, and sliding a milk jug out of a refrigerator are all examples of tasks that impose constraints on our bodies as well as on the manipulated objects. Creating algorithms for general-purpose robots to perform these kinds of tasks likewise involves computing motions that are subject to multiple simultaneous task constraints. For example, a robotic manipulator lifting a heavy milk jug while keeping it upright involves a constraint on the pose of the jug as well as constraints on the arm configuration due to the weight of the jug. In general, a robot cannot assume arbitrary joint configurations when performing constrained motions. Instead, the robot must move within some manifold embedded in its configuration space that satisfies both the constraints of the task and the limits of the mechanism. To create plans for such constrained tasks, we have developed the Constrained Bi-directional Rapidly-exploring Random Tree (CBiRRT) motion planning algorithm, which uses Jacobian-based projection methods as well as efficient constraint-checking to explore constraint manifolds in the robot’s configuration space. The CBiRRT can solve many problems that standard sampling-based planners, such as the generic RRT or Probabilistic Roadmaps (PRM), cannot. Our framework for handling constraints allows us to plan for manipulation tasks that were previously unachievable in the general case, such as solving complex puzzles and sliding and lifting heavy objects. This work has been accepted for publication at the upcoming IEEE International Conference on Robotics and Automation (ICRA2009).
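The core projection step of such constraint-manifold exploration can be sketched as follows. This is an illustrative stand-in, not CBiRRT itself: a toy 2-link planar arm replaces the real manipulator, and a single task constraint (keep the end effector at a fixed height) replaces the general task frames the planner supports.

```python
import numpy as np

# Sketch of the Jacobian-based projection used by CBiRRT-style planners: a
# sampled configuration is pulled onto the constraint manifold by iterating
# q <- q - pinv(J(q)) * e(q), where e is the task-constraint error.

L1, L2, Y0 = 1.0, 1.0, 1.2   # link lengths and constrained height (assumed)

def ee(q):                    # forward kinematics of the toy arm
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def constraint_error(q):      # scalar error: vertical deviation from Y0
    return np.array([ee(q)[1] - Y0])

def jacobian_row(q):          # d(error)/dq, the row of J for this constraint
    return np.array([[L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                      L2*np.cos(q[0]+q[1])]])

def project_to_manifold(q, tol=1e-6, max_iters=100):
    q = q.copy()
    for _ in range(max_iters):
        e = constraint_error(q)
        if np.linalg.norm(e) < tol:
            return q              # q now satisfies the task constraint
        J = jacobian_row(q)
        q -= np.linalg.pinv(J) @ e
    return None                   # projection failed; reject this sample

q_proj = project_to_manifold(np.array([0.3, 0.5]))
print(q_proj, constraint_error(q_proj))
```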

We have also made significant progress on the problem of goal specification for a robot motion planner. Standard motion planning algorithms require that the robot’s goal configuration be specified directly in the configuration space, which means determining what the robot’s joint angles should be at the goal before planning begins. This is typically done using Inverse Kinematics (IK), which finds a set of joint angles that place the robot’s hand (end-effector) in a desired pose. During Y1 and Y2 we explored novel algorithms that allow users to specify a goal in the robot’s workspace, the 6-dimensional space (3D translation and rotation) of the robot end-effector. However, even these algorithms require determining one or several goal poses for the hand before planning begins. For many tasks, such as placing an object on a table or reaching to grasp a simple object such as a cylindrical glass, there is a continuum of acceptable goal poses for the hand. We call such a continuum a Workspace Goal Region (WGR). A WGR defines a volume of goal poses in 6 dimensions that is intuitive to specify for many reaching and object-placement tasks. WGRs can be sampled very easily, and efficient approximate distance metrics can be evaluated very quickly, making them ideal for sampling-based planning. We have developed two new motion planning algorithms that work with WGRs as examples of how to incorporate WGRs into almost any manipulation planning framework. Our algorithms are able to solve fairly complex manipulation tasks on commodity PC hardware in about one second. Experimental results and an analysis of desirable completeness properties have been accepted for publication at the upcoming IEEE International Conference on Robotics and Automation (ICRA2009).
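A minimal sketch of what a WGR might look like in code, under the assumption that goals are parameterized as bounded displacements (x, y, z, roll, pitch, yaw) in a task frame; the class name and parameterization are illustrative, not the published formulation:

```python
import numpy as np

# Illustrative Workspace Goal Region: an interval of allowed end-effector
# displacements in a task frame. Sampling is uniform over the bounds, and the
# approximate distance metric clamps each coordinate into its interval,
# matching the "easy to sample, cheap approximate distance" properties
# described above.

class WorkspaceGoalRegion:
    def __init__(self, bounds_lo, bounds_hi):
        self.lo = np.asarray(bounds_lo, float)   # per-coordinate lower bounds
        self.hi = np.asarray(bounds_hi, float)   # per-coordinate upper bounds

    def sample(self, rng=np.random):
        """Draw one goal displacement uniformly from the 6-D region."""
        return rng.uniform(self.lo, self.hi)

    def distance(self, d):
        """Approximate distance from displacement d to the region: zero
        inside, distance to the nearest boundary point outside."""
        nearest = np.clip(d, self.lo, self.hi)
        return np.linalg.norm(d - nearest)

# e.g. placing a can anywhere on a shelf: free x/y over the shelf area and
# free yaw, with fixed height and upright orientation
wgr = WorkspaceGoalRegion([-0.2, -0.3, 0, 0, 0, -np.pi],
                          [ 0.2,  0.3, 0, 0, 0,  np.pi])
print(wgr.sample(), wgr.distance(np.array([0.0, 0.5, 0.1, 0, 0, 0])))
```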

One major effort in automating mobility and manipulation relevant to the goals of QoLT is opening doors and cabinets in household environments. During the past year we have developed new algorithms for performing constrained tasks by combining object “caging grasps” with efficient search techniques to produce motion plans that satisfy the task constraints. Previous work on constrained manipulation transfers the rigid constraints imposed by the target object’s motion directly into the robot configuration space. However, this often unnecessarily restricts the allowable robot motion, which can prevent the robot from performing even simple tasks, particularly if the robot has limited reachability. One of the major advantages of our technique is that it significantly increases the range of possible motions of the robot because rigid constraints between the end-effector and the target object need not be enforced.

Our long-term research goal is to develop algorithms for manipulation planning that are both general and efficient. Some of the key research thrusts will be aimed at improving the overall autonomy, efficiency, and robustness of robots performing household tasks:

1) Develop a fully integrated grasp-selection algorithm with our WGR planner, so that WGRs can be generated automatically from grasp sets.

2) For constrained planning, pursue tasks with multiple complex constraints such as balancing and closed-chain kinematics.

3) Explore additional robust grasping strategies and techniques that can guarantee a successful grasp despite significant inaccuracies in object pose. These grasping strategies will need to take into account the various ways in which an object slides when grasped as well as how to cage a given object.

4) Devise new algorithms for dual-arm manipulation of bulky or heavy objects and evaluate them on PerMMA, dual-arm mobile manipulators, or humanoid robot platforms.

ERC team members:
Project Leader: James Kuffner (CMU, RI)
Faculty: James Kuffner
Students: Dmitry Berenson (grad, CMU, RI), Rosen Diankov (grad, CMU, RI)

1.1.2. Soft Interaction

The goal of this project is to develop ways for robots to physically interact safely with humans. A key component of quality care is the ability to gently and safely manipulate humans: assisting with transfers between bed, wheelchair, toilet, and bathing area, and with physical interaction with the user, including feeding, dressing, grooming, and cleaning. The requirements for a manipulation system that touches people are quite different from those for an object manipulation system, which is why we distinguish these two systems. Furthermore, soft interaction should be available from a mobile robot, not just from a device rigidly bolted to the floor or a wall.

Soft manipulation is a transformative capability for safely manipulating people, and it is important in both the Active Home and the PerMMA project. Mobile soft physical interaction with humans is a relatively undeveloped area. No current humanoid robot is fully back-drivable from any contact point, for example, and those that implement soft physical interaction do so only at selected sites using localized force sensing. Soft manipulation of fragile humans is a challenging new area for robotics, and we intend to pioneer it. One outcome of our work will be a range of soft and safe human-robot physical interaction techniques. Transfers have been identified in our discussions with PST as an important area for improved assistance. Another outcome will be to demonstrate improved and increasingly autonomous capabilities to transfer humans to and from a wheelchair, bed, shower, bath, and toilet. We expect member companies to benefit in commercializing this technology.

A fundamental research barrier is current actuation technology: it is difficult to generate large forces with low impedance at high speed. Another fundamental barrier is the poor reliability of software. Hardware reliability is also an issue, but improving hardware reliability is much better understood than improving software reliability.

Addressing the first barrier, actuation, our initial strategy is to leverage existing equipment at several scales. We are using Phantom robots to explore small-displacement, low-force interactions; a Barrett Technology WAM arm to explore medium-displacement, medium-force interactions; and a hydraulic humanoid robot (Sarcos Primus System) to explore large-displacement, large-force interactions. Each of these devices provides low impedance for the level of interaction force it can generate. The electrically actuated Phantom and WAM have low effective “gear” ratios. The Sarcos robot is hydraulically actuated, and we are exploring how to make such a system compliant. We will also leverage our NSF-supported research on compliant and back-drivable humanoid robots to develop soft control suitable for physically interacting with a human.

In the reporting period we developed approaches to automatically designing force control algorithms for complex systems that must both balance and exert large forces. Optimal control is used to develop controllers for expected load conditions, and a load estimator has been developed to recognize loads and select the appropriate controller. We have implemented this system on our hydraulic test platform, a Sarcos humanoid robot. We have also refined the balance controller developed in the previous reporting period. In addition, we have explored robot designs that are fundamentally soft, such as inflatable robots: the first design replaced the tip of a robot arm with an inflatable link, and the second uses tensile elements to control the shape of a bendable inflatable link.
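The select-by-estimated-load logic can be sketched as follows; the controller bank, the residual model, and all numbers are placeholders for illustration, not the optimal controllers actually synthesized for the Sarcos platform:

```python
import numpy as np

# Hedged sketch of controller selection: a bank of controllers is precomputed
# for expected load conditions, a simple estimator attributes the unexplained
# vertical force to a payload mass, and the nearest-matching controller is
# activated.

CONTROLLER_BANK = {0.0: "unloaded", 5.0: "light load", 15.0: "heavy load"}  # kg

def estimate_load(measured_force_z, predicted_force_z, g=9.81):
    """Attribute the force residual to a payload mass (never negative)."""
    return max((measured_force_z - predicted_force_z) / g, 0.0)

def select_controller(load_kg):
    nearest = min(CONTROLLER_BANK, key=lambda m: abs(m - load_kg))
    return CONTROLLER_BANK[nearest]

# e.g. 52 N of unexplained downward force -> ~5.3 kg -> "light load" controller
print(select_controller(estimate_load(152.0, 100.0)))
```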

In a related project we are investigating intuitive user interfaces for devices like PerMMA, while ensuring safety for the user. We have created a skin for the robot that detects when any part of the robot contacts a person or an object, which enables us to avoid injury or property damage. This skin is compatible with any robotic arm, enabling powerful commercial robots to be utilized as assistive aids.

In addition to ensuring the safety of the user, the skin also enables a user to interact directly with the robot. The user is able to grasp the robot arm, placing it in a compliant mode. When in the compliant mode, the robot arm behaves as though light and low-friction. The user is able to position the robot arm for use as an assist, or use the robot to augment his or her strength or range of motion. An individual can grasp the robot and move the compliant arm into a desired position (Figure 1). Then, when the user releases the robot, it will remain fixed in position, allowing the individual to use it as an assist. For example, the individual might position the robot so that it is grasping a bowl; when the robot is released, it will remain in position, stabilizing the bowl while the user adds and mixes ingredients. This might be an appropriate method of assistance for an individual with hemiplegia. In addition, with this method of interaction, a powerful robot arm could enable an individual with weakened upper extremities to augment his or her strength. For instance, an individual might wish to retrieve a heavy pan from the oven. The robot could be used to grasp the pan. Then, as the user grasps and moves the robot arm, the compliant mode would compensate for the weight of both the robot and the pan, allowing the user to move a heavy object with little effort. This method of directly interacting with the robot is most appropriate for individuals who have a large range of motion in the upper extremities but whose upper extremity pain or weakness limits their ability to functionally manipulate objects; this may include users with paraplegia and multiple sclerosis. Because the user physically interacts directly with the robot, this interface will be very direct and intuitive. It will require little training and will enable individuals with cognitive impairment to interact successfully with an assistive robot.
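The mode-switching behavior described above might look like the following sketch; the PD gains, the gravity model, and the function names are assumptions, not the controller deployed on our hardware:

```python
import numpy as np

# Hedged sketch of the interaction logic: when the skin reports contact, the
# arm enters a compliant (gravity-compensated) mode so it feels light; when
# released, it holds its current pose with a stiff PD controller.

KP, KD = 80.0, 8.0            # assumed joint-space PD gains

def gravity_torque(q):
    """Placeholder for the arm's gravity model tau_g(q)."""
    return np.zeros_like(q)

def control_step(q, qdot, q_hold, skin_in_contact):
    if skin_in_contact:
        # compliant mode: cancel gravity only, offer no resistance to the user
        return gravity_torque(q), q            # the hold pose tracks the user
    # hold mode: stiff PD about the pose where the user released the arm
    tau = KP * (q_hold - q) - KD * qdot + gravity_torque(q)
    return tau, q_hold

q = np.array([0.1, 0.4, -0.2]); qdot = np.zeros(3)
tau, q_hold = control_step(q, qdot, q.copy(), skin_in_contact=True)
print(tau, q_hold)
```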


Figure 1.

We constructed a prototype of our design using force sensors and a PHANTOM™ robot from SensAble Technologies, Inc. We also developed various skin prototypes and a structured testing method to evaluate their performance. The robotic skin is based upon the concept of two conductive sheets separated by a thin mesh, as shown in Figure 2. Each conductor is covered by an insulator for protection, resulting in a five-layer skin. A voltage is applied to one conductive layer while the voltage across the other layer is read by a computer. When the skin is touched, the two conductors make contact through the mesh, effectively closing a switch. The characteristics of the mesh largely determine the threshold force (the smallest force that can be detected) as well as the areas of the skin at which a force can be detected. Materials being investigated for the conductive layers include copper foil and conductive cloth. The optimal size and distribution of mesh holes will also be determined. The primary design criterion for the robotic skin is that it must be able to sense a force of 1 N applied anywhere on its surface. It must be sufficiently flexible to conform to a cylinder 1” in diameter and must not require an excessive number of electrical leads. To test whether a given robotic skin satisfies the primary design criterion, we will use a 12” by 12” portion of skin with a 0.25” by 0.25” grid drawn on its surface. Using a 6-axis force/torque sensor (NANO-17, ATI Industrial Automation), we measure the three-dimensional force exerted on the skin while the output voltage of the skin is simultaneously measured. We attach a 0.25” by 0.25” probe to the endpoint of the force sensor, place the probe in each square of the grid, and record the threshold force at that point on the skin. The threshold force is the force at which the output voltage of the skin changes from 0 to 5 V. A given skin will be considered to meet the primary design criterion when the threshold force of each square in the grid is less than 1 N.
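The acceptance test lends itself to a simple script. The sketch below assumes hypothetical I/O helpers for the skin output and the force/torque sensor (read_skin_voltage and read_force_z are stand-ins, not real drivers) and simply sweeps the grid, recording the threshold force per square:

```python
import numpy as np

# Sketch of the acceptance test described above: the skin passes when every
# 0.25" grid square switches (0 -> 5 V) at less than 1 N of probe force.

GRID = 48                      # 12" side / 0.25" squares = 48 squares per side
THRESHOLD_N = 1.0

def read_skin_voltage():       # placeholder: 0 V open, 5 V when switch closes
    return 5.0

def read_force_z():            # placeholder: probe force from the NANO-17 (N)
    return 0.6

def measure_threshold(ramp):
    """Slowly increase probe force; return the force at which the skin fires."""
    for force in ramp:
        # (in the real rig the probe would be commanded to this force here)
        if read_skin_voltage() >= 5.0:
            return read_force_z()
    return np.inf              # never fired within the ramp

thresholds = np.array([[measure_threshold(np.linspace(0, 2, 50))
                        for _ in range(GRID)] for _ in range(GRID)])
print("pass" if (thresholds < THRESHOLD_N).all() else "fail",
      "worst-case threshold: %.2f N" % thresholds.max())
```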


Figure 2.

While testing of skin prototypes is in progress, we are preparing to perform preliminary user testing to evaluate our concept of intuitive interaction. We will use a commercially available robotic arm, the Assistive Robotic Manipulator (ARM) from Exact Dynamics, covered with our robotic skin. Initial tests will be conducted with individuals without neurological disease or injury in order to ensure that all software is safe and performs as intended. Ten individuals without neurological impairment will complete three tasks using direct interaction with the robot: retrieving an object from a low shelf, stabilizing a cup while filling it with water, and placing an object on a high shelf. Though the ARM can be mounted to a wheelchair, we will not do so for this experiment. Instead, the ARM will be mounted to a small fixture that will be used to position it near the user’s chair. The user will be instructed in how to use the system and given time to become familiar with it, but will not be explicitly told how to accomplish the three target tasks. Experimenters will monitor task performance to detect any software errors that may impede system performance. In addition to completing the target tasks using direct interaction, each user will also attempt the three target tasks using the standard interface for the ARM, a 4x4 keypad. We will compare the time required to accomplish each task using the standard interface and using direct interaction. Users will also be interviewed to determine which interface they preferred and how they feel the system could be improved.

After completing the safety testing for the software for direct interaction, we will perform preliminary tests with end users. Five manual wheelchair users will complete the protocol described above. We will monitor the time required for a user to complete each target task using both direct interaction and the standard interface. We will also complete a structured interview with each individual. Each user will be asked how they might use an assistive robot in daily life and ways in which they would feel comfortable interacting with such a robot. We will also assess each individual’s general attitude toward technology. Each individual will be asked to rate direct interaction and the standard interface along a number of dimensions, including ease of use and how difficult it was to learn. Users will be asked to elaborate on what they did and did not like about each interface and how they feel the system could be improved.

After completing this testing, we will use the feedback from potential users to revise our system. We will then integrate our system with PerMMA and conduct more extensive testing with users in the home environment.

Direct positioning of the robot arm is perhaps the most intuitive method of interaction, but it is only possible for individuals with a substantial range of motion in at least one upper extremity. To address the needs of individuals with more limited upper extremity mobility, we plan to investigate teleoperation. Though some researchers have used the word “teleoperation” to refer to two-dimensional interfaces (e.g. joysticks) used to control a robot arm, we use this term to refer to the control of the assistive robot (the slave) via interaction with a small three-dimensional haptic robot (the master). Because three-dimensional movement of the master will correspond to three-dimensional movement of the slave, teleoperation will be a more intuitive method of interaction with the robot than is possible with a two-dimensional interface such as a joystick. In addition, the use of a haptic robot enables us to provide force feedback to the user. For instance, if the slave robot stops because it contacts an external object, we can relay this information to the user through the master robot. Teleoperation has been widely used in the field of robotics, and haptic feedback has been shown to improve the performance of individuals within a teleoperation paradigm. In addition, research in rehabilitation and motor control, including our own, indicates that individuals with disabilities can interact haptically with a robot to accomplish a task. We plan to combine these concepts in the domain of assistive technology. We plan to develop a prototype system to investigate the feasibility of teleoperation for effective interaction with an assistive robot. We will perform preliminary user testing of this concept. After revising our system based on user testing, we will integrate our system with PerMMA in order to perform more extensive testing with users in the home environment.
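A minimal sketch of the master-slave coupling, with assumed motion and force scaling factors (the real system’s mapping and safety logic would be more involved):

```python
import numpy as np

# Illustrative teleoperation cycle: the slave tracks a scaled copy of the
# master's displacement, and contact forces sensed at the slave are reflected
# back to the user through the haptic master.

MOTION_SCALE = 2.0       # 1 cm at the master moves the slave 2 cm (assumed)
FORCE_SCALE = 0.5        # fraction of slave contact force replayed to the user

def teleop_step(master_pos, master_home, slave_home, slave_contact_force):
    """One control cycle: returns the slave position command and the
    feedback force to render on the haptic master."""
    slave_cmd = slave_home + MOTION_SCALE * (master_pos - master_home)
    feedback = -FORCE_SCALE * slave_contact_force   # push back against the user
    return slave_cmd, feedback

cmd, fb = teleop_step(np.array([0.02, 0.0, 0.01]), np.zeros(3),
                      np.array([0.5, 0.0, 0.3]), np.array([0.0, 0.0, 4.0]))
print(cmd, fb)
```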

ERC team members:
Project Leader: Chris Atkeson (CMU, RI)
Faculty: Chris Atkeson, Bambi Brewer (Pitt, RST)
Students: Ben Stephens (grad, CMU, RI), Siddarth Sanan (grad, CMU, RI), Heather Markham (grad, Pitt, RST)

1.1.3. Modeling of Trips

The goal of our effort in personal safety systems is to reduce the occurrence and impact of accidents. Our current focus is on an important issue for older adults: falling. Falls threaten the quality of life of older adults and can lead not only to physical injury and possibly death, but also to fear of falling, decreased activity, functional deterioration, social isolation, depression, and institutionalization. Falls are also associated with a tremendous economic cost, and the aging of the population is expected to aggravate these fall-related concerns. To date, fall prevention programs have had limited success in minimizing the risk of falls in the elderly. Research into the basic underlying reasons for the aging-related worsening of postural control can provide new directions for clinicians and therapists in their attempts to minimize their patients’ risk of falling.

This project focuses on tripping: what happens when a person trips, and what recovery strategies are used? Our specific aim in this project is to understand the postural strategies used in trips and the biomechanical factors that cause trip-precipitated falls. In the long term, we think that such knowledge can be used to develop training and technology to reduce the risk of falls and their severity. To gain this knowledge, we have subjects actually trip in the lab. Fundamental research barriers include 1) not doing harm to the subjects, 2) perturbing subjects in a natural and useful way, and 3) making sense of the complex responses.

We are using the long-standing safety practices from our previous studies of slipping to address barrier 1, subject safety. To address barrier 2, we have built a device that allows us to control obstacles placed in the path of the subject’s feet; at random intervals this device is used to trip the subject. As in our previous work on slipping, the first tripping trial is the best approximation to an unexpected trip. We will reproduce the first trip in simulation and vary the temporal and magnitude characteristics of the postural strategy to understand their impact on falls. One hypothesis to test, for example, is that reaction time determines whether a recovery is successful; others are that the magnitude or the direction of the initial response determines success.


We have run more than 12 subjects to collect data for developing analysis algorithms. We have constructed a simulation that allows us to explore a number of different reactive responses to a trip that have been proposed in the literature. The simulation is initialized just before the trip occurs, using the joint angles and velocities recorded in the experimental setup. We have constructed control systems that create reactive responses using the supporting leg and the tripping leg, and we are exploring which responses produce simulated motion and forces that most closely match the data recorded in the laboratory. One strategy often seen is elevating the tripped foot; another is planting the tripped foot (the lowering strategy). The elevating strategy is seen for trips early in the swing, and the lowering strategy for trips late in the swing.
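In outline, the simulation setup looks like the following sketch; the data fields and the 50% swing-phase split between strategies are assumptions for illustration, loosely consistent with the early-swing/late-swing observation above:

```python
# Sketch: seed the dynamic model with the mocap state at trip onset and pick
# a reactive strategy from the phase of swing at which the trip occurred
# (elevating early in swing, lowering late). model.set_state and the mocap
# field names are hypothetical.

def select_strategy(swing_phase, split=0.5):
    """swing_phase in [0, 1): fraction of the swing completed at trip onset.
    The 0.5 split point is an assumption for illustration."""
    return "elevating" if swing_phase < split else "lowering"

def init_simulation(model, mocap_frame):
    # initialize the simulated skeleton from recorded angles and velocities
    model.set_state(q=mocap_frame["joint_angles"],
                    qdot=mocap_frame["joint_velocities"])
    return select_strategy(mocap_frame["swing_phase"])

# e.g. a trip detected 20% of the way through swing triggers the elevating
# controller for the tripped leg; one at 80% triggers the lowering controller
print(select_strategy(0.2), select_strategy(0.8))
```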

Figure 3: Result of simulating the elevating strategy.


Figure 4: Result of simulating the lowering strategy.

The simulated results of the elevating and lowering strategies are shown in Figures 3 and 4, respectively. The simulated motions were similar to the actual motions of the subjects. We also verified the quality of the match by examining the ground reaction force in the elevating strategy, shown in Figure 5, and the trajectories of the tripped foot in the lowering strategy, shown in Figure 6. Figure 7 compares the trajectories of the tripped foot in the elevating strategy, and likewise shows a good match between the actual human reaction and our simulated reaction after tripping.

Figure 5: Comparison of actual and simulated ground reaction forces in the elevating strategy. Top: measured ground reaction force described by Pijnappels et al.; bottom: simulated ground reaction force (red line).


Figure 6: Comparison of actual and simulated foot trajectories in the lowering strategy. Left: measured leading-foot trajectory (light gray curves) described by Troy and Grabiner; right: simulated foot trajectory (green curve).

Figure 7: Comparison of actual and simulated trajectories of the tripped foot in the elevating strategy. Blue: real trajectory obtained from motion capture data; green: simulated trajectory. The vertical broken line marks the moment of collision.

Anticipated outcomes: Our emphasis on Personal Safety Systems stems from an opportunity identified in our discussions with the Person and Society Thrust and prospective end-users: monitoring for risky behavior that leads to accidents and falls in home manipulation as well as unassisted and assisted locomotion, and acting to change behavior to reduce risk. Our work to develop a basic understanding of tripping will help guide a companion project, in collaboration with Bosch, to develop a wearable fall-risk-reduction system. Our work will be transformative in that it uses deep knowledge of biomechanics and neural control, as well as statistical algorithms, to recognize human behaviors and assess risk. With reliable assessments of risk, it becomes possible to explore how to change human behavior on several time scales to reduce risk. We will bring together expertise in human behavior as well as a robotics perspective to analyze accidents. We will also develop simulation and robotic models of how accidents occur and of what can be done to reduce their occurrence and their effects. Personal safety systems will play an important role in the Active Home, as warning and supervisory systems in Personal Mobility and Manipulation Appliances, and as part of Virtual Coaches that reason and interact about safety. We also expect to develop much better robot control algorithms to avoid loss of balance for both legged and wheeled robots.

Future Plans: This project will continue as an Associated Project under NIH support. We are very happy to take advantage of additional funding from non-ERC sources, and to be able to redirect core funding to new initiatives.

Member company benefits: The results of this project will be relevant to the efforts of companies like Bosch who are developing monitoring and assistance systems. Reducing risk from falling and other accidents will be useful to companies providing assisted living and nursing home facilities.

ERC team members:
Project Leader: Rakie Cham (Pitt, BioE)
Faculty: Rakie Cham, Jessica Hodgins (CMU, RI)
Postdocs: Takaaki Shiratori (CMU, RI)

1.2. Active Home


Project Goals

Our primary goal is to explore ways to provide physical assistance in the home. We are currently developing a mobile manipulation system as one way to deliver such assistance. The key research challenge is building a safe, easy-to-operate system that can reliably perform common household tasks in environments with the uncertainty typical of homes. Currently, we are conducting experiments on a WAM arm and Barrett hand mounted on a Segway mobile base located at Intel Research Pittsburgh (one of the QoLT industry partners). We call this version of the household robot prototype “H.E.R.B.”, for Home Exploring Robot Butler.

Progress to date


Fig Z: The test platform for the Active Home project.

To evaluate and further develop algorithms created by the Mobile Manipulation and Perception thrusts, we developed a test platform that includes a mobile base (Segway RMP), a 7-DOF arm (WAM), and a simple hand (4-DOF Barrett Hand) (Fig Z). We have made extensive progress in implementing and evaluating the algorithms. Our grasping algorithms (evaluated on the test system and on humanoid robots available to us) are able to find grasps in environments where state-of-the-art algorithms cannot, while requiring only a few seconds of computation (Figure 1).

Figure 1: Some of the robot hardware used in our ActiveHome mobile manipulation experiments: the HRP2D humanoid robot at DHRC AIST (left); WAM arm and Barrett hand at Intel Research Pittsburgh (right)

We have implemented the Constrained Bi-directional Rapidly-exploring Random Tree (CBiRRT) motion planning algorithm on our Active Home test system. Our framework for handling constraints allows us to plan for manipulation tasks that were previously unachievable in the general case, such as solving complex puzzles and sliding and lifting heavy objects (Figure 2).


Figure 2: Snapshots from a trajectory executed by the WAM arm. The robot slides the heavy dumbbell closer to its body so it can lift it. After lifting the dumbbell, the robot places it down and slides it into its goal position. This trajectory was planned using the newly developed CBiRRT algorithm (ICRA2009, to appear).

To simplify user interfaces, we have also made significant progress in addressing the problem of goal specification for a robot motion planner. We have developed easier ways to specify the set of robot configurations that make up the goal. We call such a set a Workspace Goal Region (WGR). A WGR defines a volume of goal poses in 6 dimensions that is intuitive to specify for many reaching and object-placement tasks (Figure 3).

Figure 3: The WAM arm reaching and placing objects using novel planning algorithms that utilize active workspace goal regions (WGRs). The red line is the trajectory of the wrist. (a) Reaching to grasp a pitcher. (b) Reaching for a soda can. (c) Placing a soda can into a recycling bin. (d) Placing a bottle onto a cluttered refrigerator shelf.

Both the Active Home and PerMMA systems need to be able to open doors and cabinets. During the past year we have developed new algorithms for performing constrained tasks by combining object “caging grasps” and efficient search techniques to produce motion plans that satisfy the task constraints.

Figure 4: Planning autonomous motions for opening doors and cabinets using “caging grasps” (Humanoids2008).

We have evaluated the effectiveness of our approach with numerous simulated examples (Figure 4), as well as real-world experiments using the WAM arm to open refrigerator doors and cabinets (Figure 5) and the PerMMA system to open doors autonomously (Figure 6). Particularly in the case of PerMMA with the Manus arms, we have found that using a caging-grasp framework within the planning process greatly improves the efficiency, solvability, and execution robustness of the door-opening task. The research has been published at the IEEE International Conference on Humanoid Robots (Humanoids2008), and an expanded journal version is now in preparation.

Figure 5: The WAM arm autonomously opening a cupboard, putting in a cup, and closing it using “caging grasps”. Wall clock times from start of planning are shown in each frame (Humanoids2008).

Figure 6: Caging grasp set generated for the Manus hand, and the PerMMA with Manus arm opening a door.

To facilitate planning for autonomous tasks, we are continuing to develop and support OpenRAVE, a planning architecture for autonomous robotics. We released OpenRAVE as open-source in March 2008, and since then it has been quickly gaining popularity in the robotics community. Its plugin-based architecture lets other researchers easily integrate it with their existing software codebases and quickly take advantage of the planning and scripting features provided with the standard distribution. One of the main advantages of OpenRAVE is that it provides a rapid prototyping environment for testing control, motion planning, and manipulation algorithms in an integrated development environment. Our primary goal with OpenRAVE is to create an open-source repository of important robot algorithms that researchers throughout the world can share, advancing the state of the art in the emerging field of household robotics.
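As a flavor of the scripting layer, the following short openravepy session loads an example scene and asks a built-in planner interface to move the arm. The scene file and joint goal are illustrative, and call signatures may differ between releases; consult the OpenRAVE wiki listed under Project Related Websites for the current API:

```python
# Illustrative OpenRAVE scripting session (openravepy).

from openravepy import Environment
from openravepy.interfaces import BaseManipulation

env = Environment()
env.Load('data/lab1.env.xml')          # an example scene shipped with OpenRAVE
robot = env.GetRobots()[0]
robot.SetActiveManipulator(robot.GetManipulators()[0].GetName())

# BaseManipulation is one of the plugin interfaces mentioned above; it wraps
# sampling-based arm planners behind a single scripting call.
basemanip = BaseManipulation(robot)
goal = [0.5, 0.2, -0.3, 1.0, 0.0, 0.4, 0.0]   # illustrative 7-DOF joint goal
basemanip.MoveManipulator(goal=goal)          # plan and execute a trajectory
robot.WaitForController(0)                    # block until execution finishes

env.Destroy()
```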

On the perception side of Active Home, an important capability for robots is mapping the home environment for later navigation and localization tasks. In our initial set of experiments, we used cameras mounted on the Active Home test system to track visual features derived from patterns affixed to walls and planar surfaces in a kitchen environment. We are able to accurately map and keep track of the state of important objects, such as multiple cabinet doors. The advantage of patterns on walls is that people can generally go about their daily tasks without worrying about interfering with or accidentally displacing them and confusing the robot. Recently, we have developed new algorithms that produce high-accuracy maps of the environment compared with existing state-of-the-art SLAM (Simultaneous Localization and Mapping) techniques. We are currently investigating the feasibility of setting up a prototype system in a real home environment in order to measure various statistics, such as the placement of patterns that achieves high localization accuracy and the minimal overall time to complete a full map of a typical home. The experiments we have conducted to date are very encouraging, and we aim to publish this work in the coming year, after additional experimental results and analysis are completed.
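The geometric core of pattern-based localization is a single pose composition: given a mapped marker’s pose in the world and its detected pose in the camera frame, the camera’s world pose follows directly. A sketch (matrix names are assumptions):

```python
import numpy as np

# If T_world_marker (from the map) and T_cam_marker (from detection) are both
# known, the camera's world pose is T_world_cam = T_world_marker @ inv(T_cam_marker).

def camera_pose_from_marker(T_world_marker, T_cam_marker):
    return T_world_marker @ np.linalg.inv(T_cam_marker)

# toy example: marker 2 m ahead of the world origin, seen 1 m ahead of camera
T_wm = np.eye(4); T_wm[0, 3] = 2.0
T_cm = np.eye(4); T_cm[2, 3] = 1.0
print(camera_pose_from_marker(T_wm, T_cm))
```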

Figure 7: Precision camera-based localization and mapping using visual landmarks (publication in preparation).

We are developing perceptual algorithms that are appropriate for mobile manipulation in unstructured environments. Toward that end, we developed an approach for building metric 3D models of objects using local descriptors from several images (Fig Y). Each model is optimized to fit a set of calibrated training images, thus obtaining the best possible alignment between the 3D model and the real object. Given a new test image, we match the local descriptors to our stored models online, using a novel combination of the RANSAC and Mean Shift algorithms to register multiple instances of each object. A robust initialization step allows for arbitrary rotation, translation, and scaling of objects in the test images. The resulting system provides markerless 6-DOF pose estimation for complex objects in cluttered scenes (Fig X). We generated experimental results demonstrating orientation and translation accuracy, as well as a physical implementation in which the pose output is used by an autonomous robot to perform grasping in highly cluttered scenes.
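A minimal stand-in for the registration step, using OpenCV’s RANSAC-robustified PnP solver on synthetic correspondences; the published system additionally clusters descriptor matches with Mean Shift to register multiple instances per object, which this sketch omits:

```python
import numpy as np
import cv2

# Given 3-D model points matched to 2-D image keypoints via local
# descriptors, estimate the object's 6-DOF pose with RANSAC + PnP.
# All point values and intrinsics below are synthetic.

object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1],
                       [0.1, 0.1, 0], [0.1, 0, 0.1]], dtype=np.float64)
K = np.array([[525.0, 0, 320], [0, 525.0, 240], [0, 0, 1]])  # assumed intrinsics

# synthesize image points from a known pose so the example is self-checking
rvec_true = np.array([[0.1], [0.2], [0.0]])
tvec_true = np.array([[0.05], [0.0], [0.5]])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, None)
print(ok, tvec.ravel())   # should recover approximately tvec_true
```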

Our design goal for this algorithm is a combination of robustness, speed, and accuracy. The algorithm has a very low false positive rate, so the robot does not imagine objects that are not there. It detects all instances of each model, so it can handle several coke cans in view, for example. It is fast enough for interactive and other real-time applications. Future versions of the algorithm will be specialized for particular tasks.


Fig. X. Object grasping in a cluttered scene through pose estimation performed with a single image. (top left) Scene observed by the robot’s camera, used for object recognition/pose estimation. Coordinate frames show the pose of each object. (top right) Virtual environment reconstructed after running a pose estimation algorithm. Each object is represented using a simple geometry. (bottom) Our robot platform in the process of grasping an object, using only the pose information from this algorithm.


Fig. Y. Learning sparse 3D models. (a-b) Training images for a soda can and learned 3D model and camera positions. (c-f) Learned models for soda can (c), rice box (d), juice bottle (e), and notebook (f). Each box (c-f) contains an image of the object, a projection of its corresponding sparse 3D model, and its virtual representation in the robot’s workspace.

Future Plans

The Active Home project is a focus for integrative activities. It is a driver for better manipulation, perception, and interface technologies. An emphasis for new manipulation algorithms is knowing less in advance (less prior knowledge) and knowing less currently (less exact sensing and less knowledge of world geometry, such as the hinge axes of cabinet doors). An emphasis for perception is greater robustness, also with less prior knowledge required. We will develop algorithms that handle both known and unknown objects and people simultaneously. PST stakeholder meetings have revealed that user interfaces are a major concern, so an emphasis of the next few years is exploring and developing a number of alternative user interfaces in collaboration with the HIS thrust. Direct manipulation (as used in the Soft Interaction project) seems very promising, as does teleoperation. A pointing interface (pointing with gestures or laser pointers, for example) is also useful, and a speech interface is frequently requested by potential users. We will also assess the utility of our systems with real users.

ERC team members:
Project Leader: James Kuffner (CMU, RI)
Faculty: James Kuffner
Students: Dmitry Berenson (grad, CMU, RI), Rosen Diankov (grad, CMU, RI), Alvaro Collet (grad, CMU, RI), Mehmet Dogar (grad, CMU, RI)
Industrial Participants: Siddhartha Srinivasa, Dave Ferguson, Casey Helfrich (all Intel Research Pittsburgh)

PUBLICATIONS AND PRODUCTS

Papers published


D. Berenson, J. Kuffner, and H. Choset, “An Optimization Approach to Planning for Mobile Manipulation,” in IEEE International Conference on Robotics and Automation (ICRA) 2008.

M. Zucker, J. Kuffner, and J. Andrew Bagnell, “Adaptive Workspace Biasing for Sampling-Based Planners,” in IEEE International Conference on Robotics and Automation (ICRA) 2008.

R. Diankov, N. Ratliff, D. Ferguson, S. Srinivasa, and J. Kuffner, “Bi-Space Planning: Concurrent Multi-Space Exploration,” in Robotics: Science and Systems (RSS), June 2008.

D. Berenson and S. Srinivasa, “Grasp synthesis in cluttered environments for dexterous hands,” in Robotics: Science and Systems (RSS) Workshop on Robot Manipulation: Intelligence in Human Environments, June 2008.

D. Berenson and S. Srinivasa, “Grasp synthesis in cluttered environments for dexterous hands,” in IEEE-RAS International Conference on Humanoid Robots, 2008.

R. Diankov, S. Srinivasa, D. Ferguson, and J. Kuffner, “Manipulation Planning with Caging Grasps,” in IEEE-RAS International Conference on Humanoid Robots, 2008.

C. G. Atkeson and B. Stephens, “Random Sampling of States in Dynamic Programming,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, 38(4):924-929, 2008.

FUTURE PUBLICATIONS

D. Berenson, S. Srinivasa, D. Ferguson, and J. Kuffner, “Manipulator path planning on constraint manifolds,” in IEEE International Conference on Robotics and Automation (ICRA) 2009. (to appear)

D. Berenson, S. Srinivasa, D. Ferguson, A. Collet, and J. Kuffner, “Manipulator path planning with workspace goal regions,” in IEEE International Conference on Robotics and Automation (ICRA) 2009. (to appear)

A. Collet, D. Berenson, S. Srinivasa, D. Ferguson, “Object Recognition and Full Pose Registration from a Single Image for Robotic Manipulation,” in IEEE International Conference on Robotics and Automation (ICRA) 2009. (to appear)

Project Related Websites

ActiveHome Personal Robotics:
http://www.ri.cmu.edu/research_project_detail.html?project_id=639&menu_id=261

QoLT PerMMA:
http://www.qolt.org/Research/Project.jsp?projectID=33

CMU Robotics Institute: Planning and Autonomy Lab
http://www.ri.cmu.edu/labs/lab_76.html

Intel Research Pittsburgh: Personal Robotics
http://blogs.intel.com/research/2008/11/personal_robotics_in_pittsburg.php

Planning for Manipulation:
http://www.ri.cmu.edu/research_project_detail.html?project_id=638&menu_id=261

OpenRAVE software:
http://openrave.programmingvision.com/ (wiki and information page)
http://sourceforge.net/projects/openrave (SourceForge main page)

Software

OpenRAVE: a powerful open-source planning software architecture for autonomous robotics.


As described in the Active Home section above, we released OpenRAVE as open-source in March 2008, and it has been quickly gaining popularity in the robotics community. Its plugin-based architecture lets other researchers integrate it with their existing codebases and take advantage of the planning and scripting features provided with the standard distribution, making it a rapid prototyping environment for testing control, motion planning, and manipulation algorithms.

Our primary goal with OpenRAVE is to create an open-source repository of important robot algorithms that researchers throughout the world can share, advancing the state of the art in the emerging field of household robotics. There are currently hundreds of active OpenRAVE users around the world, and it was recently selected as the main programming interface and environment for an upcoming European joint project on grasping with human-size robotic hands.

Additional details and information are available at the main OpenRAVE wiki:
http://openrave.programmingvision.com/
