Robot - Vision (Industrial and Service Robots) - Handouts


Page 1: Robot - Vision (Industrial and Service Robots) - Handouts

23‐09‐2011 

VISION FOR INDUSTRIAL AND SERVICE ROBOTS AN INTRODUCTION

R.SENTHILNATHAN RESEARCH SCHOLAR

DEPARTMENT OF PRODUCTION TECHNOLOGY MIT CAMPUS, ANNA UNIVERSITY CHENNAI

Revolution

2

Agricultural Revolution
Industrial Revolution
Electrification
Transportation
Communication
Computers
Industrial Robots
Service Robots

Analogies

3

Profound, Widespread and Global

Mental to Physical Leverage

Mainframes to Industrial Robots

PCs to Service Robots

The Price and Volume curves

Software and Applications

Third Party Applications

Mobile, Personal and Household

Industrial Robot

4

From ISO 8373: An automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications.

Page 2: Robot - Vision (Industrial and Service Robots) - Handouts


Service Robot

5

Provisional definition from the Working Group: A service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations.

Perception to Physical Access

How do humans do it?

Locate with eyes

Calculate target with brain

Guide with arm and fingers

How do robots do it?

Locate with camera

Calculate target with software

Guide with robot and grippers

6

Role of Perception in Robot Manipulation

7

Where am I relative to the world?
Sensors: vision, stereo, range sensors, acoustics
Problems: scene modeling/classification/recognition
Integration: localization/mapping algorithms (e.g., SLAM)

What is around me?
Sensors: vision, stereo, range sensors, acoustics, sounds, smell
Problems: object recognition, qualitative modeling
Integration: collision avoidance/navigation, learning

Role of Perception in Robot Manipulation

8

How can I safely interact with the environment?
Sensors: vision, range, haptics (force + tactile)
Problems: structure/range estimation, modeling, tracking, materials, size, weight, inference
Integration: navigation, manipulation, control, learning

How can I solve "new" problems (generalization)?
Sensors: vision, range, haptics
Problems: categorization by function/shape/context
Integration: inference, navigation, manipulation, control, learning

Page 3: Robot - Vision (Industrial and Service Robots) - Handouts


Vision for Robots

9

Vision for Industrial Robots: GIGI

Gauging

Inspection

Guidance

Identification

Vision for Service Robots

Preprocess Environment

Sensor Fusion

Industry vs. Service Sector

10

Strengths of the Industry Sector

Many Years of Experience

Motors, Speed, Precision

Vision, Force, Torque, Encoders

Vision Frontiers in Service Robots

Uncontrolled Environment and Safety

Sensor Fusion and Reliability

Building Blocks in Design of Vision Guided Robots

Lighting
Technique (Frontlight, Backlight)
Source (Fluorescent tubes, Halogen and xenon lamps, LED, Laser)

Optics

Vision Cameras

Type of sensor (CCD, CMOS etc)

Spec. of Camera (Resolution, Frame rate, etc)

Type of Camera (Line Scan, Area Scan, Structured Light, Time of Flight)

Interface (Standalone, Computer Interface)

Software

Robot Types

Camera Mounting (Eye in Hand, Eye to Hand)

11

Scene Parts
Discrete parts or endless material (e.g., paper)
Minimum and maximum dimensions
Changes in shape
Description of the features that have to be extracted
Changes of these features concerning error parts and common product variation
Surface finish
Color
Corrosion, oil films, or adhesives
Changes due to part handling

12

Page 4: Robot - Vision (Industrial and Service Robots) - Handouts


Contd.

Part Presentation
indexed positioning
continuous movement

If there is more than one part in view, the following topics are important:
number of parts in view
overlapping parts
touching parts

13

Industrial Robots - Applications

14

Robot Configuration - Industrial

1/2/3 Axis Cartesian

4-Axis SCARA

6-Axis Articulated

Gantry Type

15

Camera Mounting

Eye to Hand Eye in Hand

16

Page 5: Robot - Vision (Industrial and Service Robots) - Handouts


Applications

2D
• Indexed Conveyor
• Flexible Feeding
• Autoracking
• Packaging

2.5D
• Stacked Objects
• Geometry for depth perception

3D
• Auto racking
• Discrete Bag Handling
• Palletizing
• Bin Picking

17

Advantages of Vision for Industrial Robots

Labor Savings: often alone justifies the investment

Throughput Gains in Production

Quality Improvements

Safety and Medical Cost Savings

Flexible Change to Multiple Products

Floor Footprint Reduction

Reutilize Conveyors, Racks, Bins

18

Building Blocks in Service Robots

19

3D Vision

3D Vision with Real Time Motion

3D Vision with GPS navigation

3D Vision with SLAM

Simultaneous Localization and Mapping

Sensing the Environment

Modeling the Environment

3D Vision with Sonar Navigation

3D Visualization with Haptic Controls

Classes of Service Robots

20

Aerospace

Spacecraft, Satellites, Aircraft

Land

Defense and security, Farming, Wildlife, Food, Transportation, Outdoor Logistics, Office and Warehouse, Health (Care, Rehabilitation, Surgical), Entertainment

Water

Defense and security, Research and Exploration, Preventive Maintenance, Rescue and Recovery.

Page 6: Robot - Vision (Industrial and Service Robots) - Handouts


Software

2D Object Recognition
Edge Detection
Boundary analysis
Geometric Pattern Matching

3D Object Recognition
Mono camera if geometry is consistent
Stereo matching: redundant reliability
Laser, range, or time-of-flight methods

Additional Techniques
Projected points/lines of light
3D volume scans
Scene-specific heuristics

21

22

Mobile Robots for Defence

Rehabilitation

Surgical

Innovation

Human Like

Swimming

Flying

IMAGING FUNDAMENTALS

23

Any digital image, irrespective of its type, is a 2D array of numbers.

24

Page 7: Robot - Vision (Industrial and Service Robots) - Handouts


Types

Intensity images

Range images

25

Intensity Images

Optical parameters: lens type, focal length, FOV

Photometric parameters: intensity, direction of illumination, reflectance properties, sensor's structure

Geometric parameters: types of projections, pose of the camera

26

Basic Optics

The Thin Lens model - fundamental equation:

1/Z + 1/z = 1/f

where Z is the distance from the object to the lens, z is the distance from the lens to the image, and f is the focal length.

27
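As a quick numeric check of the thin-lens equation, here is a minimal Python sketch (the function name and values are illustrative, not from the handout):

def thin_lens_image_distance(f_mm, object_mm):
    """Solve 1/f = 1/Z + 1/z for the image distance z."""
    if object_mm == f_mm:
        raise ValueError("Object at the focal plane: image forms at infinity.")
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

# A 25 mm lens imaging an object 500 mm away focuses ~26.3 mm
# behind the lens, slightly more than one focal length.
print(thin_lens_image_distance(25.0, 500.0))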

Perspective Camera Model

28

Page 8: Robot - Vision (Industrial and Service Robots) - Handouts


29

Pin Hole Model

30

31

Mapping Point to Camera

32

Page 9: Robot - Vision (Industrial and Service Robots) - Handouts


33

34

35

36

Page 10: Robot - Vision (Industrial and Service Robots) - Handouts


Camera Parameters and Calibration

Extrinsic parameters

Intrinsic parameters

37

World Coordinate

38

World Coordinate

39

40

Page 11: Robot - Vision (Industrial and Service Robots) - Handouts


Perspective Projection

A point p[x,y] in the image plane is given by

x = f [X / Z]
y = f [Y / Z]

where p is the image of the point P[X,Y,Z] in world space.

41
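A minimal Python sketch of this projection (function and sample values are illustrative, not from the handout):

def project(P, f):
    """Perspective projection: x = f*X/Z, y = f*Y/Z for P = (X, Y, Z)."""
    X, Y, Z = P
    if Z <= 0:
        raise ValueError("Point must lie in front of the camera (Z > 0).")
    return (f * X / Z, f * Y / Z)

# A point 0.5 m right of the axis and 2 m in front of an 8 mm lens
# lands 2 mm from the image centre: 8 * 500/2000 = 2.
print(project((500.0, 0.0, 2000.0), 8.0))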

Range Images

Reconstructing a 3D shape from a single intensity image is DIFFICULT.

Range images are also called depth images, depth maps, xyz maps, surface profiles, and 2.5D images.

Each pixel of a range image expresses the distance between a known reference frame and a visible point in the scene.

Forms of Range Images
Cloud of points (xyz form)
Spatial form

42

Range Images – Display types

43

Agenda

44

Physics of Light

Optics

Camera Sensors

Camera Interface

Camera Calibration

Software

Applications and Case Study

Page 12: Robot - Vision (Industrial and Service Robots) - Handouts


45

PHYSICS OF LIGHT

Why Physics of Light ?

46

The laws of physics govern the properties of vision systems.

Understanding the physics will allow you to predict the behavior and understand the limitations of the performance.

Vision Starts with Light

47

Light has a dual nature, obeying the laws of physics as both a transverse wave (electromagnetic radiation) and as a particle of energy (photon).

Properties of Light

48

Electromagnetic Radiation
Used to explain the propagation of light through various substances.

Particle
Used to explain the interaction of light and matter that results in a change in energy, such as in a video sensor.

Page 13: Robot - Vision (Industrial and Service Robots) - Handouts


Light as Electromagnetic Radiation

Light is a transverse wave: points oscillate in the same plane on an axis perpendicular to the direction of motion.

The electrical wave oscillates perpendicular to the magnetic wave.

49

Electromagnetic Wave Characteristics

Frequency (f) is the number of oscillations per second.

Wavelength (λ) is the distance between two successive points in the same phase on the wave, measured in nm.

f = c / λ, where c is the speed of light. For example, green light with λ = 550 nm has f = (3 × 10^8 m/s) / (550 × 10^-9 m) ≈ 5.5 × 10^14 Hz.

50

Energy vs Intensity

Energy is determined by the frequency of oscillation: higher frequency means shorter wavelength and higher energy.

Intensity is determined by the amount of radiation.

51

Electromagnetic Spectrum

52

Page 14: Robot - Vision (Industrial and Service Robots) - Handouts


Relation between Color of Light and its Wavelength

Visible light contains a continuum of frequencies.

We perceive color as a result of the predominance of certain wavelengths of light.

The eye responds to visible light with varying efficiencies across the visible spectrum; cameras have a very different response.

We are concerned with a narrow region of the spectrum: Ultraviolet - Visible - Infrared.

While the eye can see only in the visible spectrum, the energy above and below visible light is also important to machine vision.

53

What happens to Light when it hits an Object ?

54

In Vision We are Concerned with Reflected Light

Reflected light is controlled by engineering the lighting.

The reflected light (and therefore the digital image) is impacted by:
Geometry of everything
Color of the light
Color of the part

55

How do Objects of Different Color Respond to Light ?

56

Page 15: Robot - Vision (Industrial and Service Robots) - Handouts


How and Why Objects Have Color?

Red light gets reflected from red objects. Your eyes see the reflected light; the camera also sees the reflected light.

All other colors get absorbed by the material. This radiation gets turned into heat.

57

Additive Color

Demonstrates what happens when colored lights are mixed together.

Additive primaries are red, green, and blue, which altogether make white.

RGB is used for color TV and cameras.

58

Subtractive Color

Used to describe why objects appear the color they do.

Pigments added to paint absorb light of particular wavelengths; the color you see is what is not absorbed.

CMYK is used for printing ink (K is for carbon black, a less expensive pigment than the other colors).

59

Maxwell’s Triangle – A demonstration of Additive Color Mixing

CIE Chromaticity Diagram

60

Page 16: Robot - Vision (Industrial and Service Robots) - Handouts


White Light is Actually Very Colorful

The rainbow exiting a prism or seen in the sky is the inverse of the additive color wheel.

Both demonstrate that "white light" is actually a very complex function which needs precise definition.

61

62

OPTICS

Optical Filter

An optical device which selectively transmits light of certain wavelengths and absorbs or reflects all other wavelengths.

63

Using Filters to Highlight Objects of Different Colors

Red light reflects off the red background but is absorbed by the blue circle.

Blue light reflects off the blue circle but is absorbed by the red background.

64

Page 17: Robot - Vision (Industrial and Service Robots) - Handouts


Placement of Filters: Incident or Reflected Light

65

The resulting images would be the same: no available red light to be reflected, so red appears dark; light is reflected from blue, which appears light.

An example

66

Spectral Response

67

How efficiently light is emitted or received as the wavelength (or color) of the light changes.

Filters can be described by a spectral response plot.

Spectral Reflectivity for Al and Ag

68

Page 18: Robot - Vision (Industrial and Service Robots) - Handouts


Spectral Response of Filters

69

Ideal Filter Real Filter

Spectral Response of CCD

70

Interaction of Light with Surfaces

71

Reflection Refraction

Why Reflection is Important ?

72

The majority of vision systems record the reflected light.

A well designed lighting system provides high contrast between the features of interest and the background (noise): regions of high reflectivity versus regions of minimal reflected light.

Spectral properties of light sources, combined with the spectral properties of the surface, can be used to provide high contrast.

Geometrical considerations are important for understanding reflected light.

Page 19: Robot - Vision (Industrial and Service Robots) - Handouts


Interaction of Light with transparent surfaces

73

Why Refraction is Important ?

74

Refraction is the basic principle behind many optical elements:
Lenses
Filters
Mirrors and Prisms

Optical elements are not perfect:
They do not transmit 100% of the light.
Chromatic aberrations.

Surface Finish

75

Complex Geometries

76

Page 20: Robot - Vision (Industrial and Service Robots) - Handouts


How is Light measured ?

77

Lumen and lux are photometric quantities that represent the amount of light falling upon a surface (lux is lumens per square metre).
Bright sunlight: 100,000 lux
Cloudy day: 10,000 lux
Full moon night: 0.05 lux
Overcast night: 0.00005 lux

The human eye is sensitive to this full range (10 orders of magnitude!), but cameras are only sensitive to about 3 orders of magnitude.

Lens

78

The lens uses refraction to bend light as it passes through, generating an image on the other side.

Lens and the Camera sensor

79

Specifications used to select Lens for a Machine Vision Application

80

Focal Length

Angular field of view or magnification

Working Distance or Field of view minimum at focus

Depth of Focus

Aperture

Resolution

Camera Sensor Size

Camera mounting configuration

Page 21: Robot - Vision (Industrial and Service Robots) - Handouts


Focal Length

81

The focal length f is the distance between the optical centre and the image plane when the lens is focused at infinity.

82

Field Of View

83

The area imaged, or the FOV, is determined by the stand-off distance and the angle of viewing.

Field Of View

84

Page 22: Robot - Vision (Industrial and Service Robots) - Handouts


Moving closer

85

Focal Length and Stand off Distance

86

A shorter focal length lens can image the same field of view as a longer focal length lens by decreasing the stand-off distance.

A shorter focal length lens will have more parallax distortion (fish-eye effect).

Stand-off distance has a larger effect on magnification for short focal length (wide angle) lenses.

What to do if you need to change the image size?

87

To increase magnification (smaller FOV):
Use a lens with a longer focal length
Move the camera closer to the part (be cautious about distortion and the ability to focus)

To decrease magnification (larger FOV):
Use a shorter focal length lens
Move the camera further from the part

Focus

88

Lenses are designed for specific imaging characteristics. Using the lens outside of the design region impacts image quality.

For example, the stand-off distance for focusing can be reduced using spacer rings.

Depth of focus is also dependent upon the aperture or f/stop setting:
Wide open aperture: small depth of focus
Small aperture: large depth of focus

Page 23: Robot - Vision (Industrial and Service Robots) - Handouts


Extension Rings

89

Extension rings are used to alter the focusing distance of a lens.

The rings increase the image distance, and allow the lens to focus at shorter distances.

(Figure: lens adaptor.)

Aperture and F/stop (F#)

90

Aperture is the clear opening of the lens

F/stop = Focal Length / Aperture Diameter (e.g., a 50 mm lens with a 12.5 mm aperture opening is at F/4)

An Example

91

Captured with a 100-mm lens with F/4 Captured with a 28-mm lens with F/4

Captured with a 100-mm lens with F/22 Captured with a 28-mm lens with F/22

92

LIGHTING

Page 24: Robot - Vision (Industrial and Service Robots) - Handouts


Lighting Concerns

93

Stability of the light source

Flicker rate

Change in spectral properties

Need to control diffusion of light (bright spots are bad)

Ambient lighting needs to be blocked off

Ambient temperature has very large effect on lighting

Depends on lighting and camera

Relations are non-linear

Illumination affects the color of the material

94

Sources have different spectral properties, which cause objects to look different under different sources.

One More Example..

95

Characteristics of Light Sources

96

Thermal (Incandescent): 1000 lumens for 75 W; 5% efficiency (12-15 lumens/Watt); 1000 hours; Rs 50/klumen

Gas Discharge (Fluorescent): 10,000 lumens; 25% efficiency (50 lumens/Watt); 10,000 hours (output degrades, then fails); Rs 25/klumen

LEDs (Solid state lighting): 30-35 lumens/Watt; 100,000 hours (output degrades over time, not hard failure); Rs 2500/klumen

Page 25: Robot - Vision (Industrial and Service Robots) - Handouts


50 Hz Noise Variation in Light Output

97

Use a high frequency ballast for fluorescent lights (10 kHz)

Use DC sources for LEDs

Shroud your cell from ambient light if it is bright.

The ambient light source is most likely AC

Effect of Operating Temperature

98

Use Geometry to Meet the Objectives for Lighting Design

99

Optical Filtering

Highlight Features of Interest

Reduce Extraneous information

Use natural features of the part for contrast

Shadows

Specular Reflections

Design a system that is compatible with process constraints

Angle of Lighting depends on the part features

100

Page 26: Robot - Vision (Industrial and Service Robots) - Handouts


Lighting Techniques

101

Back Light

Diffuse

Collimated

Front Light

Diffused

Directed

Structured

Back Light

102

Placing a light behind the part such that the part is between the light and the camera, providing a silhouette of the part

Back Lighting Provides the Highest Contrast. But….

103

It's not always practical to implement: parts on a conveyor or in a fixture are often difficult to back-light.

It provides information about the part's silhouette only; sometimes surface features are the ones we're interested in.

Types

104

Page 27: Robot - Vision (Industrial and Service Robots) - Handouts


Front Light

105

Placing the light in front of the part, on the same side as the camera.

Provides an image with surface features and shading.

Dark Field Illumination

106

Dark field illumination is used to subdue the background and highlight pin-stamped characters.

The light is positioned at an oblique angle to the part, with the angle of incidence set up such that the angle of reflection is away from the camera lens.

Any perturbations can reflect light into the camera lens.

107

Bright Field Illumination | Dark Field Illumination

Lighting Component

108

Page 28: Robot - Vision (Industrial and Service Robots) - Handouts


109

110

111

112

CAMERA SENSORS

Page 29: Robot - Vision (Industrial and Service Robots) - Handouts


Digital Image

113

A digital image is a numerical representation of a real physical object.

The objective is to obtain an accurate spatial (geometric) and spectral (light) representation with sufficient detail (resolution).

Image sensors generate images by measuring and recording the light that strikes the sensor surface.

Any digital image, irrespective of its type, is a 2D array of numbers.

Types

Intensity images

Range images

114

Recording the Field of View

115

Area Scan

Line Scan

Intensity Images – CV Terminologies

Optical parameters

Lens type

Focal length

FOV

Photometric parameters

Intensity

Direction of illumination

Reflectance properties

Sensor’s structure

Geometric parameters

Types of Projections

Pose of the camera

116

Page 30: Robot - Vision (Industrial and Service Robots) - Handouts


Getting a good image

117

What is a good image?
Features of interest are well defined, with high contrast and enough detail.
Images are repeatable: features in the image exist in the physical world; no noise or artifacts.
Changes in the environment should have minimal impact on the image.

How to achieve this?
Good lighting and optics.
Understanding the requirements.
Choosing the right camera for the application.

Properties of Sensors

118

Some materials will generate electrical charge proportional to the number of photons striking them. These materials are used for image sensors.

Sensor Types based on the Sensing Element Used

119

Vacuum Diode
Vidicon
Plumbicon
Photo Multiplier Tube (PMT)

Solid State (silicon)
Silicon Photo Diode
Position Sensitive Detector (PSD)

Solid State Camera Sensors
CCD
CMOS

Sensor analogy to film

120

In some ways, a sensor in a video camera is a lot like photographic film.

An image is focused on the sensor for a preset exposure time. The light pattern is captured and transformed into a new medium.

There is an integral relationship between the amount of light measured and the exposure time.

Page 31: Robot - Vision (Industrial and Service Robots) - Handouts


Comparison of a sensor to film

121

Film has a continuous surface, down to the grain of the film. Video sensors have a discrete imaging surface.

Sizes of solid state sensors:
2/3 inch: 6.6 x 8.8 mm
1/2 inch: 6.4 x 4.8 mm
1/3 inch: 3.6 x 4.8 mm
1/4 inch: 2.4 x 3.2 mm

How sensors work ?

122

We can think of the camera as imposing a grid over the object being imaged, and sampling the light.

The individual square is called a photo site, and is similar to a light meter. The camera sensor is made up of an array of these photo sites.

The individual photo sites in a video sensor are called picture elements - PIXELs.

How Sensors Measure Light?

123

Each photo site can be modeled by a bucket that collects the charges generated by photons. As photons strike the sensor, charge is developed and the bucket begins to fill.

How full the bucket gets is determined by:
How much light (intensity)
How long you collect charge (exposure time or shutter speed)
How efficiently the photons get converted to charge (spectral response)

124

The amount of light in each photo site is sampled and converted into a number. This number, or gray scale value, is an indicator of brightness.

Page 32: Robot - Vision (Industrial and Service Robots) - Handouts


Exposure Time Analogy

125

How full the bucket gets is dependent upon:
How fast the faucet is running - light intensity
How long you keep the bucket under the running water - exposure time

ANALOGY: photons (water) → electrons (charge in the bucket)

Images Taken With Different Exposure Time

126

As you increase the exposure time, you allow more time for photons to be converted into electrons in the sensor; more charge accumulates, giving a brighter image.

INCREASING EXPOSURE TIME

How Many Pixels Are Required To Find An Object ?

127

2 x 2 grid: 4 photo sites (4 pixels)

When the blue object fills more than 50% of a photo site, the site is turned black; otherwise the site is considered to be white.

Double the Resolution…..

128

4 x 4 grid: 16 photo sites (16 pixels)

When the blue object fills more than 50% of a photo site, the site is turned black; otherwise the site is considered to be white.

Page 33: Robot - Vision (Industrial and Service Robots) - Handouts


Double it again....

129

8 x 8 grid: 64 photo sites (64 pixels)

When the blue object fills more than 50% of a photo site, the site is turned black; otherwise the site is considered to be white.

Attributes of Sampling

130

You might not even detect the object if the sampling resolution is too low

If you sample at two times the resolution, the total number of sample sites is increased by a factor of 4

Other Attributes…

131

The new digitized information contains much less information:
A three dimensional scene is reduced to a 2D representation
No color information
Size and location are now estimates whose precision and accuracy depend on the sampling resolution

A close up look at pixels !

132

(INCREASING ZOOM LEVEL)

Page 34: Robot - Vision (Industrial and Service Robots) - Handouts


Sensor Array Configuration

133

The sensor consists of an array of individual photo cells.

Typical array sizes in pixels are 640 x 480 or 768 x 480, 1280 x 760, 1600 x 1200, and larger. For reference, human vision is >100 million pixels.

The array size is called PIXEL RESOLUTION.

How big is a pixel ? - Resolution

134

When it comes to resolution, the following distinction is necessary:

Number of pixels in the image - Camera Sensor Resolution
(individual pixels are usually between 5 and 10 microns; impacts sensor noise and dynamic range)

Number of pixels covering a feature - Spatial Resolution
(impacts robustness of the vision algorithm)

Smallest detail captured in the image - Measurement Accuracy

Spatial Resolution

135

Spatial Resolution = FOV / No. of Pixels
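A one-line calculation makes the formula concrete (a sketch; the numbers are illustrative, not from the handout):

def spatial_resolution(fov_mm, pixels):
    """Spatial resolution = FOV / number of pixels (mm per pixel)."""
    return fov_mm / pixels

# A 100 mm field of view across 640 pixels gives ~0.156 mm/pixel,
# so a 1 mm feature spans only about 6 pixels.
print(spatial_resolution(100.0, 640))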

How many pixels should cover the Features of Interest ?

136

Depends on the application, but in general, more is better

Trade off is that you image less of the scene.

Field of view should be large enough to accommodate variations in position.

Might require more than one camera

Page 35: Robot - Vision (Industrial and Service Robots) - Handouts


What if your camera does not have enough pixels ?

137

Use sub-pixels: interpolate between pixel boundaries for sizing or identifying location.

Sub-pixels are only applicable to measurement, not detection.

What If Pixel Arrays Are Not Big Enough And Sub-pixels Won’t Work?

138

Use Line Scan cameras: A digital camera with pixels arranged in a single line.

Can generate extremely large contiguous images not possible with area scan cameras

1K, 2K, 4K, 8K, 10K are some available sizes

Cost of the line scan sensor is low relative to large format array cameras (2000 x 2000)

Motion of the camera or part is required for the 2nd axis

Similar to scanners, copiers and fax machines

Can obtain images of continuously moving line (web inspection)

Line Scan Image Example

139

Field of View is 30” x 200”

100 dpi

3,000 x 20,000 pixel image

60 Mbyte image data

Some More Camera Sensor Parameters

140

Saturation

Blooming

Dynamic Range

Grayscale Resolution

Dark Current Noise

Fill Factor

Page 36: Robot - Vision (Industrial and Service Robots) - Handouts


Saturation

141

At certain light levels and exposure times, the bucket (photo site) gets filled with charge and can hold no more. The photo cell is now saturated.

Any additional charge generated by the sensor has to go somewhere. Where does it go?

Blooming

142

When light saturates in a pixel area, it spills over into adjacent pixels.

Spill-over occurs into adjacent pixels; in a CCD, spill-over also occurs in the pixel columns.

Prevent blooming by avoiding saturation, or by using cameras with anti-blooming circuitry.

Blooming causes loss of image data.

Dynamic Range

143

The ratio of the amount of light it takes to saturate the sensor to the least amount of light detectable above the background noise.

A good dynamic range allows very bright and very dim areas to be viewed simultaneously.

Examples

144

Low Dynamic Range High Dynamic Range

Page 37: Robot - Vision (Industrial and Service Robots) - Handouts


Grayscale Resolution

145

The number of bits used to represent the amount of light in the pixel.

Digitizing to 8 bits gives 256 gray shades (2^8 = 256).

Dark Current Noise

146

If no photons strike the sensor during the exposure time, no charge is created and the bucket should remain empty.

However, stray charge gets generated in the silicon from thermal energy, causing low-level noise. This charge is called dark current. The result is that black is not 0.0 volts.

Dark current noise increases with temperature, doubling with every 6 degree rise above room temperature.

Photosensitive Area of the Sensor

147

Photons which fall outside the photosensitive area do not get converted into electric charge and are not detected by the sensor.

This impacts sensitivity to light and the ability to accurately measure between pixels for sub-pixel tolerance.

Fill Factor

148

Percentage of the pixel area sensitive to light.

Circuitry required to read out the voltage obscures the silicon beneath the traces; coverage can be as low as 30%.

Fill factor is shown in some camera specifications.

Impacts image quality and sensitivity to light.

Page 38: Robot - Vision (Industrial and Service Robots) - Handouts


Fill Factor Considerations with CCD vs. CMOS

149

CCD Sensor: reads out a single row of pixels at a time, after the charge is moved down the sensor "lock step" by rows.

CMOS Active Pixel Sensor: has amplifier circuitry on each and every pixel in the array; pixel values may be read out somewhat randomly.

Advantages and Disadvantages of each Technology

150

CCD
High quality, low noise images
Good pixel-to-pixel uniformity
Electronic shutter without artifacts
100% fill factor
Highest sensitivity
High power consumption
Multiple voltages required
Increased system integration complexity and cost

CMOS
Low power consumption
Camera functions and additional control circuitry can be implemented in the CMOS sensor chip itself
Random pixel read-out capability (windowing)
Fixed pattern noise
Higher dark current noise
Lower light sensitivity

Color vs. Monochrome

151

You would have 3x the amount of data to process, or 1/3 the spatial resolution, with color imaging.

Need to evaluate the benefit of color information relative to the increased complexity and reduced resolution.

Most machine vision applications use monochrome cameras.

Machine vision implemented with a color camera is suitable for sorting, not colorimetry.

For robustness, the colors being differentiated need to be widely spaced.

Watch for uniform spectral output of your light source for color applications (remember that the camera measures the reflected light).

152

CAMERA INTERFACE

Page 39: Robot - Vision (Industrial and Service Robots) - Handouts


How Vision Works

153

Take a picture

Process the image data

Make a decision or measurement

Do something useful with the results

“Standard” Vision Components

154

Not everything enclosed in the box is required

Frame Grabber

Custom Image Processor

Computer

Hardware Common to Most Vision Systems

155

Camera

Sensor, format, interfaces

Processor

Frame Grabber, I/O, interfaces, packaging

Optics

Lenses and Accessories

Lighting

Source and Technique

Other Accessories

Enclosures, cables, power supplies.

Camera Types based on the Hardware Architecture

156

PC Based

Smart Camera

Embedded Vision

Camera

Hardware Architecture

Page 40: Robot - Vision (Industrial and Service Robots) - Handouts


In detail

157

PC-based vision is usually more effective for larger systems:
Additional cameras come at very low incremental cost
The PC is available for complex image processing or post-processing tasks
The PC can be used for storing images, collecting process data, and programming system updates

Smart cameras can be cost effective where:
A small number of cameras is required
Operation of each smart camera is independent of the others in the cell
Minimal post-processing of data is required
There is no logic between cameras
Lower-end vision algorithms are sufficient

An Embedded Vision System provides a complete hardware packaging and software integration solution.

Signal Flow of Image from Camera to Computer (Analog)

158

Vision Camera → Frame Grabber

Digital Image Sensor → DAC → Analog Signal (RS-170 / CCIR) → ADC → Image Buffer

Signal Flow of Image from Camera to Computer (Digital)

159

Vision Camera → Frame Grabber

Digital Image Sensor → Digital Signal (digital serial or parallel interface) → Image Buffer

The frame grabber MAY BE, or becomes, a part of the camera; the DAC/analog-signal/ADC stage of the analog path is eliminated.

Bandwidth, Resolution and Frame Rate

160

The bandwidth of an interface protocol is shared by the resolution of the image and the frame rate.

The frame rate of a camera depends upon the camera interface and also the camera electronics. Frame rates can go up to a million frames per second.
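To see how resolution and frame rate share the interface bandwidth, a small sketch (raw payload only; protocol overhead is ignored, and the numbers are illustrative):

def required_bandwidth_mbps(width, height, bits_per_pixel, fps):
    """Raw video data rate = resolution x bit depth x frame rate."""
    return width * height * bits_per_pixel * fps / 1e6

# A 640 x 480, 8-bit monochrome camera at 30 frames/s produces
# ~74 Mbps, already a large fraction of an IEEE 1394a link.
print(required_bandwidth_mbps(640, 480, 8, 30))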

Page 41: Robot - Vision (Industrial and Service Robots) - Handouts


Standards for Digital Interface

161

Interface   | Bandwidth                  | Cost Effective | Cable Length | Power over Cable | Camera Availability | CPU Usage | Standard Interface
Camera Link | 250 MBps                   | 1              | 3            | No               | 4                   | Low       | 3
IEEE 1394   | 400 Mbps (a), 800 Mbps (b) | 4              | 1            | Yes              | 3                   | Moderate  | 5
USB         | 500 Mbps                   | 5              | 1            | Yes              | 1                   | Extensive | 2
GigE        | 1 Gbps                     | 4              | 5            | No               | 2                   | Moderate  | 5

(Ratings: 5 = Excellent; 1 = Poor)

162

CAMERA CALIBRATION

Camera Parameters and Calibration

Extrinsic parameters

Intrinsic parameters

The process of finding the intrinsic and the extrinsic parameters of a camera is called camera calibration, and it depends on the model chosen for the camera.

163
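As one common way to do this in practice, here is a minimal sketch using OpenCV's checkerboard calibration (the 9 x 6 board size and the file pattern are assumptions, not from the handout):

import glob
import cv2
import numpy as np

# One 3D grid of the 9 x 6 inner corners of a checkerboard (Z = 0 plane).
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

obj_points, img_points = [], []           # 3D world points, 2D image points
for path in glob.glob("calib_*.png"):     # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix K and lens distortion, plus one
# extrinsic (R, t) pair per calibration view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)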

World Coordinate

164

Page 42: Robot - Vision (Industrial and Service Robots) - Handouts


World Coordinate

165

Camera Coordinates

166

Ideal Model of a Camera – The Perspective Projection

167

A point p[x,y] in the image plane is given by

x = f [X / Z]
y = f [Y / Z]

where p is the image of the point P[X,Y,Z] in world space.

An Approximate Model – Scaled Orthographic Model

168

An approximate linear model: x = sX, y = sY, where s = f/Z_avg is a constant scale factor (Z_avg is the average depth of the scene).

Validity depends on the working distance and the relative depths of objects in the scene.

Page 43: Robot - Vision (Industrial and Service Robots) - Handouts


Failure of Orthographic Projection – An Example

169

170

MACHINE VISION SOFTWARE

Software is (just) a TOOL

171

Remember: if you think that the only tool you have in your hand is a hammer,

"Everything around you tends to look like a Nail."

Lets First Look At How Humans Process Image Data

172

Shape

Color

Spatial Relationship

Context

Page 44: Robot - Vision (Industrial and Service Robots) - Handouts


Human Recognition By Shape and Color

173

Color aids in Recognition, But is not Necessary

174

Spatial Relationship & Context: Can You Read This?

175

Cna yuo raed this? It dseno’t mtaetr in

wtah oerdr the ltteres in the wrod are,

the olny iproamtnt tihng is taht the frsit

and lsat ltteer be in the rghit pclae.

Limitations of Vision Computers

176

Just as the camera is no match for human vision, we'll see that the computer cannot even begin to duplicate how the human brain processes the image data.

A small subset of processing algorithms is generally used for industrial vision; almost all are based on "a priori information".

Vision is not up to the "anything, anywhere" problem.

Page 45: Robot - Vision (Industrial and Service Robots) - Handouts


Scene Constraints in Vision - Parts
Discrete parts or endless material (e.g., paper)
Minimum and maximum dimensions
Changes in shape
Description of the features that have to be extracted
Changes of these features concerning error parts and common product variation
Surface finish
Color
Corrosion, oil films, or adhesives
Changes due to part handling

177

Contd.

Part Presentation (on a conveyor)
indexed positioning
continuous movement

If there is more than one part in view, the following topics are important:
number of parts in view
overlapping parts
touching parts

178

What Vision Computers Do with Images?

179

Image Processing (Image Enhancement)
Perform mathematical or logical calculations on an image and convert the image into another image where the pixels have different values.

Image Analysis
Perform mathematical or logical calculations on an image to extract features which describe the image content in numerical terms.

When and Why We Process and Analyze Images

180

IMAGE ENHANCEMENT
Reduce or eliminate noise
Enhance information
Subdue unnecessary or confusing background information
Make decision analysis easier

IMAGE ANALYSIS
To generate quantitative information about the complex image data for
Accept/Reject decisions
Identification
Sorting
Counting
To make decisions

Page 46: Robot - Vision (Industrial and Service Robots) - Handouts


Image Processing (Enhancement)

181

EDGE DETECTOR

Some Image Enhancement Techniques

182

Point Transformations

Threshold (Binarization)

Histograms (Equalization)

Neighborhood Processing Techniques

Spatial filtering (see the sketch below)
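A minimal OpenCV sketch of these enhancement operations (the file name and threshold value are placeholders, not from the handout):

import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # placeholder image

# Point transformation: global threshold (binarization).
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Point transformation: histogram equalization stretches the contrast.
equalized = cv2.equalizeHist(img)

# Neighborhood processing: spatial (Gaussian) filtering suppresses noise.
smoothed = cv2.GaussianBlur(img, (5, 5), 0)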

Image Analysis

183

Output feature vectors: Centroid Location, Area, Perimeter, Bounding Box, Compactness, % Match

(Diagram: image → Algorithm → feature vectors)

How Vision Systems Extract and Use Features from the Image

184

Despite the wide range of feature vectors that can be extracted from the image, what you do with the values is quite consistent:
Compare to a known good part
Calculate the distance from one feature to another
Calculate the size of the feature
Locate the feature in the field of view

Vision systems do not process all the pixels in the image: from a priori information, you know where the important features are, and process pixels only in that region.

Page 47: Robot - Vision (Industrial and Service Robots) - Handouts


Region of Interests (ROI)

185

Set up a window, or Region of Interest, and process only those pixels in that region.

Removes background or extraneous information that will not have to be processed; reduces a big image to a small subset.

Encompasses only the area where the feature appears; allow enough extra coverage for part and fixture tolerances.

Or use a tool to 'find the part', then automatically adjust the window location for the new part location. This is called fixturing (see the sketch below).
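In code, an ROI is just a window into the image array; a minimal NumPy/OpenCV sketch (coordinates and offsets are illustrative, not from the handout):

import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)    # placeholder image

# Fixed window: rows 100-299, columns 250-549; only these pixels are processed.
roi = img[100:300, 250:550]

# "Fixturing": a locating tool reports the part offset (dx, dy) and the
# same window is shifted to follow the part.
dx, dy = 12, -5                                        # hypothetical offsets
roi = img[100 + dy:300 + dy, 250 + dx:550 + dx]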

Typical Geometries for ROI

186

Information Content in Images

187

Spectral

Color or Brightness of pixel data

Spatial

Relationship of pixel information in space

Temporal

Changes in pixel values with time

Spectral Information for Image Analysis

188

Spectral Analysis
Can be used for presence or absence
No location information is available in the feature vector

Page 48: Robot - Vision (Industrial and Service Robots) - Handouts


Algorithms For Extracting Spectral Information

189

Binary Pixel Count
Grayscale Average Intensity
Histogram Analysis

These algorithms measure how light or dark the image is, and make decisions based on that measured value.

Setting Up a Binary Pixel Count Application

190

Define the region of interest
You can have more than one
They can touch or overlap

Threshold the grayscale image to binary

Count the pixels in each ROI (white or black); the computer returns the number of pixels

Then compare the measured number of pixels to some standard value to make a decision (see the sketch below)
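A minimal sketch of the whole sequence with OpenCV (the threshold, expected count, and tolerance are process-specific placeholders, not from the handout):

import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)    # placeholder image
roi = img[100:300, 250:550]                             # region of interest

# Threshold the grayscale ROI to binary, then count the white pixels.
_, binary = cv2.threshold(roi, 128, 255, cv2.THRESH_BINARY)
white_pixels = cv2.countNonZero(binary)

# Accept/reject by comparing against a standard value with a tolerance band.
EXPECTED, TOLERANCE = 15000, 1500
print("count:", white_pixels, "accept:", abs(white_pixels - EXPECTED) <= TOLERANCE)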

Counting Pixels is Not Precise

191

When you count the number of pixels in the ROI, it may change from image to image for the same part, even if the part and its location are kept the same, due to camera noise and lighting. If the part is moved slightly you get more variation; if you measure different parts, the variability increases further.

If you make multiple measurements you can plot the distribution of pixel counts in a histogram to study how much variation you have in the process.

Setting Accept/Reject Threshold

192

Plot the histogram distribution for both good and bad parts.

Verify wide separation in the feature vector values.

Set an accept/reject threshold somewhere in between the two.

(Figure: red is the bad-part distribution, green is the good-part distribution; a pixel count somewhere in between is set as the threshold for "part OK".)

Page 49: Robot - Vision (Industrial and Service Robots) - Handouts


Problem of False Accept and False Reject

193

All feature vectors measured by vision systems have normal process variations. During setup you need to verify that there is sufficient separation between the measurements the vision system makes on good and bad parts.

False Rejects

194

Thresholds set incorrectly at this level guarantee that only good parts are accepted by the machine. Many specifications read "SHALL ACCEPT NO BAD PARTS." The result is falsely rejecting good parts, which interferes with production efficiency.

False Accepts

195

Thresholds set incorrectly at this level relieve production concerns about rejecting too many parts that the operator would call OK. The result is accepting bad parts.

Grayscale Average Analysis

196

The system calculates the average of the grayscale values of the pixels in the ROI.

The measured value is compared with the values for good and bad parts in order to make an accept/reject decision.

Can be used for presence or absence.

Page 50: Robot - Vision (Industrial and Service Robots) - Handouts


Histogram Analysis

197

The system calculates the histogram of the grayscale values of the pixels in an ROI.

Features of the histogram are compared to values for good and bad parts to make an accept/reject decision.

Good for texture analysis, or for dynamically adjusting the binary threshold.

Spatial Analysis

198

Relationship of pixel information in space.

Types of spatial analysis

Connectivity

Edge analysis

Measurement

Location

Correlation

Geometric vector matching

Can be used for finding location, size

Connectivity Analysis

199

Set up an ROI and threshold the image (binary process only).

Also known as blob analysis.

Initiate the algorithm: the system returns a list of geometric features about each blob in the image (see the sketch below).
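A minimal sketch of blob analysis using OpenCV's connected-components routine (file name and threshold are placeholders, not from the handout):

import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)     # placeholder image
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Label the connected white regions; stats holds the bounding box and area,
# centroids holds the centre of gravity of each blob.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, n):                                   # label 0 is the background
    x, y, w, h, area = stats[i]
    cx, cy = centroids[i]
    print(f"blob {i}: area={area}, bbox=({x},{y},{w},{h}), centroid=({cx:.1f},{cy:.1f})")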

Some Geometric Features from Connectivity Analysis

200

Area (number of white pixels)
Perimeter (blue + red)
Convex perimeter (blue + green)
Compactness: ratio of perimeter to area
Roughness: ratio of convex perimeter to area

Page 51: Robot - Vision (Industrial and Service Robots) - Handouts


More Geometric Features

201

Centre of gravity (average in x and y)
Bounding box (red)
Minimum x (or y) coordinate
Number of holes
Aspect ratio (ratio of span in x to span in y)
Number of runs

How Geometric Features are Used

202

Location: the centre of gravity, or minimum (maximum) pixel locations, can be used to identify where the object is in the image.

Identification: a family of geometric features can be used to differentiate objects.

Verification: similar to presence/absence evaluation with spectral analysis, except that more information is present, providing a more robust decision.

Repeatability and Accuracy are Dependent on the Blob Feature

203

Centre of gravity (RED - average in x and y): averages the centre position of each line of pixels in the rows and columns; provides sub-pixel accuracy.

Bounding box (BLUE): each coordinate is determined by the location of one pixel.

Power of Blob Analysis

204

Provides information on the object location and geometry.

Better than pixel counting because you count only contiguous pixels:
Eliminate unwanted features or noise, such as specular reflections
You can size the object
Geometric verification of blob features provides an additional check that you are counting the right pixels

The downside is that it is a binary process.

Page 52: Robot - Vision (Industrial and Service Robots) - Handouts


Edge Analysis

205

Identify edge pixels.

Measurement tools available would give distances in pixels:
Measure from line to line (caliper tool)
Measure the angle between two lines
Measure from point to line (perpendicular to the line)

Sub-pixel accuracy can be achieved if contiguous pixels along an edge are combined into a line used for measurement.

Edge Pixels

206

Vertical edges are identified where the grayscale changes as you scan along the horizontal direction; horizontal edges are identified by scanning in the vertical direction.

Oblique edges are calculated from a combination of the horizontal and vertical edge strengths (see the sketch below).
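A minimal sketch of these directional edge strengths using Sobel gradients (file name and threshold are placeholders, not from the handout):

import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)      # placeholder image

# Gradient along x responds to vertical edges; gradient along y to horizontal edges.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)

# Oblique edges: combine both into the gradient magnitude, then keep strong pixels.
magnitude = np.sqrt(gx ** 2 + gy ** 2)
edges = magnitude > 100                                  # hypothetical strength threshold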

Template Matching

207

Matching a trained model to the image.

Does not require the user to know much about the features or grayscale values, but you must understand features versus noise or background clutter; good image contrast is important.

A powerful technique used extensively in vision for electronics and printing.

Two main approaches: normalized correlation and geometric vector matching.

Normalized Grayscale Correlation

208

A model of a golden part is taught.

The trained template is moved over the image; the system records the percentage match between the template and the image.

The template is scanned over the entire search region; the location of the best fit and the % match are returned.

(Figure: the trained model and its best-match location in the image.)

Page 53: Robot - Vision (Industrial and Service Robots) - Handouts


The Math Behind….

209

This comparison is done by multiplying the grayscale values of the pixels in the model by the grayscale values of the pixels in the search area, and summing all the results.

Two values are returned: the location of the best match, and the % match value (how close the match is).

For "normalized" grayscale correlation, the average grayscale intensities of the model and the search area are made equal.
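A minimal sketch with OpenCV, whose TM_CCOEFF_NORMED mode is one standard form of normalized correlation (file names are placeholders, not from the handout):

import cv2

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)       # placeholder images
model = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)

# Slide the trained model over the search region and score every position.
scores = cv2.matchTemplate(image, model, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

print("best match at", best_loc, "with", round(100 * best_score), "% match")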

Potential Problems with Correlation based

210

Presence of similar features

Change in Scale

Angular Rotation

Change in Color

What's the Next Better Solution ? – Geometric Vector Match

211

Less sensitive to scale, rotation, and color variation than normalized grayscale correlation.

(Pipeline: Model → Edge Image → List of Vectors; Pixels in search region → Edge Image → List of Vectors.)

Issues Encountered with Geometric Pattern Matching

212

The system erroneously matches regions of the image to the template when:
Edge strength is too low
The % acceptable match is set too low
The search region is too large, so it includes background noise that could be misclassified

Background clutter not in the trained image causes a "no match found" condition; shadows can create additional features.

It can be slow for large, complex regions.

Page 54: Robot - Vision (Industrial and Service Robots) - Handouts


An Application Example

213

Badge Identification: Verify Correct Badge present V10

Failed Images: % Match Below Acceptable Threshold

214

(Failure examples: badge too dark; shadow. Compared against the trained model.)

Results

215

How to Get Good Results With Geometric Vector Matching

216

Ensure high contrast, consistent images.

Use an ROI which minimizes background noise in the search area; use software fixtures for the ROI.

Rather than one large ROI for the template, use multiple smaller ROIs which include unique features not seen elsewhere in the image.

Increase the signal to noise ratio.

Page 55: Robot - Vision (Industrial and Service Robots) - Handouts


Summary

Lighting flexibility and agility
Camera resolution and speed
Vision recognition tools
Computational processing power
Mathematical algorithms
Robot work volume
Gripper design and versatility
Part and material handling

217