AIA – Vision Online
TRANSCRIPT
Advanced Vision Guided Robotics
Steven Prehn Robotic Guidance, LLC
Traditional Vision vs. Vision based Robot Guidance
• Traditional Machine Vision – Determine if a product passes or fails
  – Assembly verification
  – Find defects
  – Gauging/metrology
• Vision Guided Robots – It's all about location
  – Locate and pick parts
  – Locate and move relative to parts for assembly
  – Locate parts and remove flash or apply epoxy
Robotic System Adaptability
• Basic Premise: Vision Guidance is needed
when the part is not always in the same position
[Diagram: six-axis robot showing joints J1–J6, the TCP, and a user frame (Uframe)]
Understanding Cameras
[Diagram: digital imager chip with horizontal and vertical pixel arrays; array sizes are quoted in megapixels, chip sizes in fractions of an inch, e.g. 1/3"]
CCD – Charge-Coupled Device
• A color image is converted to gray scale
• The continuous image is converted to "picture elements," or pixels
• Each CCD cell outputs a voltage proportional to the light it receives (e.g. 0 = dark, 0.5 = mid, 1.0 = brightest)
• The voltages are digitized to gray levels: Black = 0, Mid Gray = 128, Brightest = 255
Gray Scale
Gray Scale = Measured Light Level
Picture Elements – Pixels

Sample 8×8 image – gray scale value of each pixel:
255 255 255 255 255 255 255 255
255 255 255 255 255 255 128 128
255 255 128 255 255 255 128 128
255 255 128 128 255 255 255 128
255 255 128 128 128 255 255 128
255 255 128 255 255 255 255 255
255 255 128 255 255 255 255 255
255 255 255 255 255 255 255 255

Corresponding grid locations (row, column):
1,1 1,2 1,3 1,4 1,5 1,6 1,7 1,8
2,1 2,2 2,3 2,4 2,5 2,6 2,7 2,8
3,1 3,2 3,3 3,4 3,5 3,6 3,7 3,8
4,1 4,2 4,3 4,4 4,5 4,6 4,7 4,8
5,1 5,2 5,3 5,4 5,5 5,6 5,7 5,8
6,1 6,2 6,3 6,4 6,5 6,6 6,7 6,8
7,1 7,2 7,3 7,4 7,5 7,6 7,7 7,8
8,1 8,2 8,3 8,4 8,5 8,6 8,7 8,8
Pixels have two properties: grid location (row, column) and gray scale value
An imager chip is essentially an array of tiny light meters
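The two pixel properties above can be sketched in code. A minimal, illustrative example (the luminance weights used here are a common convention, not something specified in the slides):

```python
# Minimal sketch: a grayscale image as a grid of measured light levels.
# Pixel values are 8-bit: 0 = black, 128 = mid gray, 255 = brightest.
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB array (H x W x 3) to 8-bit grayscale using
    common luminance weights (an assumption; real cameras may differ)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)

# A 2x2 "image": white, black, mid-gray, and pure red pixels
rgb = np.array([[[255, 255, 255], [0, 0, 0]],
                [[128, 128, 128], [255, 0, 0]]], dtype=np.float64)
gray = to_grayscale(rgb)
print(gray[0, 0])  # 255 -> brightest
print(gray[0, 1])  # 0   -> black
print(gray[1, 0])  # 128 -> mid gray
```

Each pixel of `gray` carries exactly the two properties from the slide: its (row, column) location in the array and its measured gray scale value.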
Lens Mathematics
• Focal Length is the distance in millimeters from the optical center of a lens to the imaging sensor (when the lens is focused at infinity).
• Calculate Viewing Area
  – Working Distance from Camera to Object
  – Desired Field of View: Width or Height
  – Image Array Size: 1/3", 1/2", 2/3", etc.
  – Lens Focal Length
[Images: example lenses, 6 mm and 25 mm focal length]
Lens Calculations
You can solve for any one of the parameters if you know the other three
C-mount provides a fixed standard distance to imager
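The lens relationship above can be sketched as a thin-lens approximation; solve for whichever parameter is missing. The 1/3" sensor width and the example numbers are assumptions for illustration:

```python
# Thin-lens approximation relating focal length, sensor size,
# working distance, and field of view; a sketch, not optics-grade math.
def field_of_view(sensor_mm, working_distance_mm, focal_length_mm):
    """Approximate FOV (on the same axis as sensor_mm) at the working distance."""
    return sensor_mm * working_distance_mm / focal_length_mm

def focal_length(sensor_mm, working_distance_mm, fov_mm):
    """Solve the same relation for the lens focal length."""
    return sensor_mm * working_distance_mm / fov_mm

# A 1/3" imager is roughly 4.8 mm wide (a nominal value; actual chips vary).
fov = field_of_view(sensor_mm=4.8, working_distance_mm=500, focal_length_mm=6)
print(round(fov, 1))  # 400.0 mm horizontal field of view
```

Swapping which function you call is the "solve for any one parameter" idea from the slide.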
Joint to Cartesian Space
• Robot kinematics is the study of the motion of robots. In a kinematic analysis, the position, velocity, and acceleration of all the links are calculated without considering the forces that cause the motion.¹
• Joint angles (provided by encoders) and arm segment lengths are combined to render position and orientation
¹ Reference: Wikipedia
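As an illustration of the joint-to-Cartesian idea, here is forward kinematics for a hypothetical 2-link planar arm (a deliberate simplification of the 6-axis case):

```python
import math

def planar_fk(theta1, theta2, l1, l2):
    """Forward kinematics for a 2-link planar arm: joint angles (rad)
    plus link lengths render the end-effector position and orientation."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y, theta1 + theta2

# Both joints at 0: the arm is stretched out along +X
x, y, phi = planar_fk(0.0, 0.0, l1=0.4, l2=0.3)
print(round(x, 6), round(y, 6))  # 0.7 0.0
```

A 6-axis robot does the same thing with six angles and a chain of 3D transforms, but the principle is identical: encoders give angles, geometry gives position.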
Understanding Cartesian Coordinates
• Consider the robot's faceplate position with respect to a part (X, Y, and Z)
• Now consider a plane pivoting around this point, rotating around X, Y, and Z
• This is part positioning with 6 degrees of freedom.
• Our ultimate goal: how can vision be used to guide the robot to the position of a part?
Position Relationships
• Start with a Cartesian coordinate system for rendering position (X,Y,Z)
• R is the position of the platter relative to the room.
• F is the position of the furniture relative to the room (the world coordinate system).
• P is the position of the platter in the furniture frame, or furniture coordinate system.
• Now consider a table where adjacent legs are shorter and a platter where one side is significantly tilted
[Diagram: room, furniture, and platter with frames R, F, and P drawn on X, Y, Z axes]
World and Tool Frames
[Diagram: world frame and tool frame axes (+X, +Y, +Z), with a top view of the tool frame]
Two Key Contributors to the Robot's Position: User Frames and Tool Frames
The primary robot position coordinate system is referred to as the World Frame
Other frames can be created that are positioned relative to the world
Using kinematics, the World Frame, and the Tool Frame, the robot can be used to create positions and establish planes
• World frame – default frame of the robot
• User frame – user-defined frame
• Tool frame – user-defined frame
[Diagram: a user frame positioned relative to the robot's world coordinate system]
FANUC system variables: $MNUFRAME[1,frame_num] (user frame), $MNUTOOL[1,tool_num] (tool frame)
• Tool Center Point: TCP, TOOL, UTOOL, UT
• User Frame: USER, UFRAME, UF
Frames Important to Vision
• When a point is recorded, it references both the Tool Frame and the User Frame
[Diagram: recorded positional data referencing both the user frame and the tool frame relative to the robot's world coordinate system]
Tool Frame and Programming
• The Tool Frame is what the robot "looks at" when you ask it to do a linear or circular movement.
• User Frames are based on the World Frame
• They can be taught at any location and any orientation
• Positions are taught with respect to a User Frame (as well as, a Tool Frame)
Robot and Tool Positions Relative to the Part
• The robot's position and orientation are reported with six degrees of freedom – X, Y, Z, Pitch, Yaw, Roll
• The tool has a position and orientation, too.
• Robot and tool frames are combined to find a position relative to the part.
  – This relationship is resolved before moving the robot to the part.
[Diagram: robot frame R and tool frame T relative to part P]
Simple Camera Calibration
• Place a grid with known spacing at the same height as the top of the parts to translate pixels to units of measure (grid to camera)
• Tie in the orientation of the camera to the grid – which directions represent X and Y
• Tie in the relationship of the grid to the robot (grid to robot)
• Now we can determine the part's position relative to the robot
[Diagram: grid and robot coordinate axes (X, Z)]
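The calibration steps above can be sketched as follows; the grid spacing, taught origin, and angle are illustrative assumptions, and a real calibration would also correct for lens distortion:

```python
import math

# Sketch of a simple 2D calibration: a grid of known spacing ties pixel
# coordinates to millimeters, and a taught frame ties the grid to the
# robot. All numeric values here are illustrative assumptions.
GRID_SPACING_MM = 10.0
GRID_SPACING_PX = 40.0            # measured spacing of grid dots in the image
MM_PER_PIXEL = GRID_SPACING_MM / GRID_SPACING_PX

def pixel_to_robot(px, py, origin_xy_mm=(100.0, 50.0), theta_rad=0.0):
    """Map an image point to robot X, Y: scale pixels to mm, rotate by
    the camera-to-grid angle, then translate by the grid origin taught
    in the robot's user frame."""
    x_mm = px * MM_PER_PIXEL
    y_mm = py * MM_PER_PIXEL
    xr = origin_xy_mm[0] + x_mm * math.cos(theta_rad) - y_mm * math.sin(theta_rad)
    yr = origin_xy_mm[1] + x_mm * math.sin(theta_rad) + y_mm * math.cos(theta_rad)
    return xr, yr

print(pixel_to_robot(80, 40))  # (120.0, 60.0)
```

The scale handles "grid to camera," while the rotation and translation handle "grid to robot," matching the three steps on the slide.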
Part Height Variation Problem
[Diagram: a change in part height produces an apparent shift ΔX in the image]
Vision Calibration and the User Frame
In order for visual offset data to be useful to the robot, both vision and the robot must recognize the same coordinate system.
This involves establishing a correspondence between the camera's coordinate system and a robot user frame.
To accomplish this, a robot frame is taught that corresponds to the frame used to calibrate vision.
Basic Cameras are 2D
• What aspects of a part's position relative to the robot can we determine solely from a 2D image?
• Variables that you may need to know:
  – Distance from the camera
  – Magnification of the lens
  – Size of the part
  – Calibration (e.g. pixels/mm)
  – Orientation of the camera in space
  – Orientation of the robot in space
• How does the distance from the camera affect the part position calculation?
Image to Robot Relationship
In two-dimensional applications, the XY plane of the user frame specified here should be parallel to the target work plane. How do you compensate when this is not the case?
Vision-to-Robot Transformation Considerations
• Camera mounting style – fixed position or robot-mounted camera
• Cycle time
• Size of part (FOV) vs. accuracy needed
• Part presentation issues
  – In which axes is the part likely to move? X, Y, Rotation, Z, Pitch, and Yaw
  – Is the part consistent, and is its presentation consistent?
  – Is it possible to correlate position from different perspectives?
  – Can structured light be used to help identify location?
Tool Frame
• The origin is called the Tool Center Point, or TCP – it defines the location where work is done
• The default location is the center of the faceplate
• The origin of the tool frame must be offset to a fixed point on the physical tool
• All offset measurements for the tool frame are relative to the origin of the faceplate
Matrix Multiplication
If A is an n × m matrix and B is an m × p matrix, the matrix product AB (denoted without multiplication signs or dots) is defined to be the n × p matrix
* Reference Source – Wikipedia Matrix multiplication
Basic Matrix Math Continued • Inverse Matrix
– If A is a square matrix, there may be an inverse matrix A⁻¹ of A such that A A⁻¹ = A⁻¹ A = I,
– where I is the identity matrix of the same order.
* Reference Source – Wikipedia Matrix multiplication
• To isolate R (the robot's position), we perform algebraic manipulation of the frame matrices, where T⁻¹ is the inverse of the tool frame.
Example of Basic Robot Math
[Diagram: robot R with tool T working relative to fixture F and part P]
Standard Robot Equation: R : T = F : P
R : T : T⁻¹ = F : P : T⁻¹
Since T : T⁻¹ = I,
R : I = F : P : T⁻¹, so R = F : P : T⁻¹
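The derivation above can be checked numerically with 4×4 homogeneous transforms, writing the ":" composition as matrix multiplication. The frame values below are invented for illustration:

```python
import numpy as np

def transform(x, y, z, yaw_deg=0.0):
    """Build a 4x4 homogeneous transform: translation plus a rotation
    about Z (a simplification; full frames also have pitch and roll)."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    m = np.eye(4)
    m[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    m[:3, 3] = [x, y, z]
    return m

# Illustrative frames (assumed values): tool offset T, fixture frame F,
# and part-in-fixture P. From R @ T = F @ P, solve R = F @ P @ inv(T).
T = transform(0, 0, 150)              # TCP 150 mm out along the faceplate Z
F = transform(800, 0, 0, yaw_deg=90)  # fixture frame in the world
P = transform(50, 25, 0)              # part within the fixture

R = F @ P @ np.linalg.inv(T)

# Verify the standard robot equation still holds: R @ T == F @ P
assert np.allclose(R @ T, F @ P)
print(np.round(R[:3, 3], 1))
```

Multiplying both sides on the right by inv(T) is exactly the T⁻¹ step in the slide's algebra; the assertion confirms the manipulation is consistent.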
Guidance Summary
1. Robot's knowledge of the calibration frame (used by the camera)
2. Camera calibration assigns the translation to be used
3. A vision process is used to locate the part position
   – Requires a Z height of the part relative to the frame of reference
   – Record a part reference position
4. Move the robot to pick the part and record this position
The camera is used to find the part and calculate its position. The robot is moved to a location relative to the part with offsets applied.
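The offset step can be sketched in 2D (X, Y, rotation): the offset frame is the found position composed with the inverse of the reference position, and it is then applied to the taught pick pose. All poses below are illustrative:

```python
import math

def compose(a, b):
    """Compose two 2D poses (x, y, theta_rad): apply b within frame a."""
    x = a[0] + b[0] * math.cos(a[2]) - b[1] * math.sin(a[2])
    y = a[1] + b[0] * math.sin(a[2]) + b[1] * math.cos(a[2])
    return (x, y, a[2] + b[2])

def invert(a):
    """Invert a 2D pose so that compose(a, invert(a)) is the identity."""
    c, s = math.cos(a[2]), math.sin(a[2])
    return (-a[0] * c - a[1] * s, a[0] * s - a[1] * c, -a[2])

# Offset frame = found composed with the inverse of the reference;
# the pick pose is the offset applied to the taught pick position.
reference = (200.0, 100.0, 0.0)            # part position recorded at setup
found = (210.0, 95.0, math.radians(15))    # part position this cycle
taught_pick = (200.0, 100.0, 0.0)          # pick pose recorded at setup

offset = compose(found, invert(reference))
pick = compose(offset, taught_pick)
print(round(pick[0], 1), round(pick[1], 1), round(math.degrees(pick[2]), 1))
```

Because the taught pick here coincides with the reference, the offset carries the pick pose exactly onto the found part, which is the behavior the summary describes.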
Visual Tracking
• 2D Line Tracking – X, Y location and angle orientation PLUS conveyor position
  – Queue management
[Diagram: randomly fed parts on a conveyor, tracked by a camera and a pulsecoder feeding an offset calculation]
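A minimal sketch of the line-tracking arithmetic, assuming a linear conveyor and an illustrative encoder scale:

```python
# Line-tracking sketch: vision finds the part once; a pulsecoder on the
# conveyor then updates its position downstream. The scale (mm per
# count) and all positions are illustrative assumptions.
MM_PER_COUNT = 0.05   # conveyor travel per encoder count

def tracked_x(vision_x_mm, snapshot_count, current_count):
    """Part X in the tracking frame: the position found by vision plus
    the conveyor travel accumulated since the image was taken."""
    return vision_x_mm + (current_count - snapshot_count) * MM_PER_COUNT

# Part found at x = 120 mm; conveyor has moved 4000 counts since the snapshot
print(tracked_x(120.0, snapshot_count=10_000, current_count=14_000))  # 320.0
```

Queue management then amounts to keeping a list of such (position, snapshot count) records and handing each part to the robot when it enters the pick window.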
Basic 3D Robot Guidance Methods
• 2.5D – for rough Z approximation
• 2.5D – with structured light reference
• Single-view 3D using geometric relationships
• Multiple-laser or structured-light-pattern triangulation methods
• Advanced 3D (depth analysis)
2D Guidance with a Change in Z
• The image created by the camera is like looking through a cone
• The ratio of pixels to units of measure changes as you move within the cone
• If the part's distance from the camera is not identical to when the camera was calibrated, finding the part's position accurately requires adjusting the transformation
• How do you know the part height? What can be leveraged?
  – Part scale, height sensors
  – Lens mathematics
2D Single Camera – 2.5D
Height change creates a subtle apparent size change. Are you sure the part size is not different, creating the same effect?
• Apparent Part Size can be used to calculate relative height
• The height designates where in the calibration cone it is
• The transformation is adjusted relative to the frame of reference
[Image: camera view of a top part and a bottom part at different heights]
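The apparent-size idea can be sketched with a pinhole model; the calibration numbers are assumptions, and the method only works if the part's true size is constant (the caveat raised above):

```python
# 2.5D sketch: under a pinhole model, apparent size scales inversely
# with distance from the camera, so a part's measured size in pixels
# gives its relative height, assuming its true size never changes.
CALIB_DISTANCE_MM = 1000.0   # camera-to-part distance at calibration
CALIB_WIDTH_PX = 200.0       # part width in pixels at calibration

def part_distance(measured_width_px):
    """Estimate camera-to-part distance from apparent size."""
    return CALIB_DISTANCE_MM * CALIB_WIDTH_PX / measured_width_px

def mm_per_pixel(measured_width_px, calib_mm_per_px=0.5):
    """Rescale the pixel-to-mm factor for the estimated distance,
    i.e. move to the right place in the calibration cone."""
    return calib_mm_per_px * part_distance(measured_width_px) / CALIB_DISTANCE_MM

print(part_distance(250.0))   # 800.0 -> the part is 200 mm closer (higher)
```

The rescaled mm-per-pixel factor is the "transformation adjusted relative to the frame of reference" from the slide.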
Depalletizing
The world is not flat…
Traditional cameras see a flat world – 2D and flat
Robots work in the real world and must compensate for a part's position with 6 degrees of freedom
World Frame
• World frame has its origin at the intersection of axes 1 and 2
• Aid to Remember Coordinate Relationships: the Right Hand Rule
[Diagram: right-hand-rule axes ±X, ±Y, ±Z about the origin]
2D Robotic Assumptions
• 2D imaging systems can be used if:
  – The part always sits flat on a surface or fixture (no pitch or yaw changes)
  – The part is consistent in its size and shape
  – The tool is designed to compensate for any variation in height (and subsequent X, Y error)
• 2D is not a good solution when:
  – Parts are stacked and may be subject to tipping
  – Parts are randomly placed in a bin for picking
  – Parts enter the robot cell on a pallet that is damaged, or on a conveyor that wobbles
  – The application is a high-accuracy assembly process, like hanging a door on an automobile
Example 3D Robot Applications
• Racking and deracking
• Palletizing and depalletizing
• Welding uneven surfaces
• Grinding and flash removal
• Machine load
• High-accuracy assembly
• Parts on hangers
• Picking stacked parts
• Picking parts randomly located in bins
Laser Line / Structured Light
• A laser projects a light plane that appears as a light stripe on the work piece
• The camera is at 90° and the laser at 45°
• In the view the camera sees, the stripe's vertical position determines Z
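The 90°/45° geometry can be sketched as simple triangulation; the pixel scale is an illustrative assumption:

```python
import math

# Triangulation sketch: the camera looks straight down (90°) while the
# laser projects its plane at 45°. A surface raised by dz shifts the
# stripe sideways in the image by dz * tan(laser angle); with a 45°
# laser, the shift in mm equals the height change. Numbers illustrative.
LASER_ANGLE_DEG = 45.0
MM_PER_PIXEL = 0.2

def height_from_stripe_shift(shift_px):
    """Height change implied by the stripe's lateral shift in the image."""
    shift_mm = shift_px * MM_PER_PIXEL
    return shift_mm / math.tan(math.radians(LASER_ANGLE_DEG))

print(round(height_from_stripe_shift(50), 3))  # 10.0 mm above the reference plane
```

Scanning the laser (or moving the part) and repeating this per image column builds the height profile seen in the hex wrench and curved surface examples.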
Structured Light in The Real World
[Images: hex wrenches with a laser line; a curved surface with a laser line]
A 2D Change of Perspective
• As part orientation changes in pitch and yaw, surface points converge or diverge.
[Diagram: camera imaging a calibration grid at two planes separated by a known distance moved]
Calibration – Two Plane Method
• Requires either the robot to move the camera or the robot to move the grid
• Renders greater vector accuracy
• Helps improve lens math
Applying Geometric Relationships
• Identify fixed and reliable geometric features (corners or holes)
• Apply Geometric Position Relationships between features
• Compensate for Perspective
Geometric Relationships
• Start with a known shape
• Extract feature point positions with respect to calibrated cameras
• The part shape is assumed to be constant although its position is not
• Combine the camera position relationship with the found features to extract the new position
Advanced 3D
• Several Companies are advancing 3D point cloud image generation
• Technologies include:
  – Time-of-Flight Sensors
  – Moiré Interferometry
  – Structured Light
  – Stereo Projection
  – others…
• Processing renders part position by matching features in 3D Space
• Translation of the found position with respect to original position remains consistent to previous robot math examples
3D Sensors
[Images: FANUC 3D Area Sensor; time-of-flight depth image ("TOF Kamera 3D Gesicht" by Captaindistance, own work, CC BY 3.0, via Wikimedia Commons)]
Break Down of Vision Tasks
Bin Avoidance
• Bin – define the size and location of the bin
• Robot – model the EOAT and set up checks to keep the robot and EOAT from contacting the bin
Bin Picking
• Locate the bin
• Locate candidate parts
• Move to pick candidates without collision
• Remove parts without collision
Summary
• Machine vision has progressed significantly in the last 10 to 15 years
• Advances in technology continue to provide new capabilities
• Pay close attention to 3D enhancements for Robotic Guidance Applications