
Philips J. Res. 41, 232-246, 1986

VISION AND THE ROBOT

by A. BROWNE
Philips Research Laboratories, Redhill, Surrey RH1 5HA, England

Abstract

This paper is a history of research at PRL on techniques for advanced factory automation; research which evolved from research on optical character recognition and which itself evolved into research on expert and knowledge based systems. Activities in several areas - computer vision, other sensory systems, robot systems and robot operating languages - and the development of equipment based on specific factory tasks, are described. The paper is also intended to highlight important aspects in the design of advanced automation equipment.
ES: 3.3.

R1128

1. A history of the research at PRL

By the early 1970s our research into automated picture processing concerned the field of optical character recognition (OCR), and included the reading of printed characters and handprinted numerals. This was in support of the production of OCR machines by Philips. This led, in the mid-1970s, to the study of picture processing in the realm of factory automation. How could the addition of vision to automatic machinery improve its performance, and what were the solutions to the problems encountered in achieving this aim? There were problems in scanning the scenes, both in the scanning technique (camera) and the illumination, in the processing of the pictures to extract the relevant information, in the control of the mechanics, and in the control of and communication between the parts of a system. Work was done in all of these areas, and many papers were published, until 1984 when the research was centred in Eindhoven and PRL was asked to expand the research into expert systems.

Whilst a historical structure has been used for this article, the intention is to highlight the techniques which were found to be important in vision and robot systems.



2. Simple scenes

The first research vehicle was a machine built to drill holes in a printed-circuit board without prior knowledge of the hole positions and using only simple definitions of what constituted a valid hole 1,2). A vidicon television camera scanned the board, carried on a standard NC (numerically controlled) circuit board drilling machine. The picture was processed as a black/white scene, with a computer searching for round areas of bare board, of the correct size, surrounded by copper. The positional data found was used to control the drilling machine. This study revealed problems which are common to many vision systems. The scan pattern of a vidicon camera is not perfect, in that the X-Y scans are neither straight nor linear with time over the whole scan area, nor is the pattern stable. Also it is necessary to interrelate the coordinate frames of the scanner and the mechanics.

The interrelation of the frames can be achieved by using the mechanical system to move a suitable pattern in the view of the scanner. By using the scanning system to locate a pattern on a board, in the scanner's coordinate frame, for several positions of the pattern in the machine's frame, it was possible to determine positional, scale and linearity differences between the frames. This included the corrections for scanning imperfections. Although this calibration was a special process which could not be used during a drilling operation, it was possible to check the relationship of the frames by observing the movements of the circuit board being drilled. This is an example of the vision system being used to check that an instruction to a mechanical system has been executed correctly and, if not, to modify the system data.
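The paper gives no mathematics for this calibration, but its essence is a fit between corresponding points measured in the two frames. A minimal sketch in Python (the function names and coordinates are illustrative, not from the paper; a least-squares affine fit recovers position, scale and shear, while the scan non-linearities mentioned above would need a higher-order model):

```python
import numpy as np

def fit_affine(scanner_pts, machine_pts):
    """Fit an affine map (rotation/scale/shear plus offset) taking scanner
    coordinates to machine coordinates, by linear least squares.
    Each input is an (N, 2) array of corresponding points, N >= 3."""
    s = np.asarray(scanner_pts, dtype=float)
    m = np.asarray(machine_pts, dtype=float)
    # Design matrix: [x, y, 1] for each scanner point.
    A = np.hstack([s, np.ones((len(s), 1))])
    # Solve A @ P ~= m for the 3x2 parameter matrix P.
    P, *_ = np.linalg.lstsq(A, m, rcond=None)
    return P

def scanner_to_machine(P, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ P

# Example: the machine moves a target to known positions; the scanner
# reports where it sees the target in its own (distorted) frame.
machine = [(0, 0), (10, 0), (0, 10), (10, 10)]
scanner = [(5.1, 4.9), (25.0, 5.3), (5.4, 24.8), (25.2, 25.1)]
P = fit_affine(scanner, machine)
print(scanner_to_machine(P, [(15.0, 15.0)]))  # scanner point -> machine coords
```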

In this application the position of the drill, hidden beneath the board, is vitally important. During the initial calibration phase the machine was instructed to drill a hole in a plain area of board and the board was scanned to locate the hole. In this way the position coordinates of the hole in the coordinate frame of the scanner were established. Again this could be checked during normal operation by observing the position of a drilled hole and relating it to the intended position.

Another system was developed for the alignment of wafers of transistor chips for probe testing. The slice was scanned by a vidicon camera and the scribing lanes between the transistors were found by comparing the grey values between pairs of points in a test pattern of pixels. The brighter lanes were detected with an accuracy of 10 µm even when the slice was rotated by 15°. In a system for the location of pads on transistor chips for wire bonding, the lighter pads were found by a similar process, the positions of the 60 µm pads being found to within 5 µm.

Another three projects concerned the orientation of components. In automatic assembly the feeding of components is an important aspect.



The system is simplified significantly and can operate much faster if the components to be handled are fed in fixed positions and orientations. Although some components can be manufactured and supplied in a way that retains the orientation, this is not always possible. A machine to reorientate components can be placed next to the assembly machine, or may be used to load magazines for the assembly machine; the latter may be appropriate if the orientation system operates much faster than the assembly machine. Many conventional orientating feeders exist, e.g. vibratory bowl feeders, but they are not suitable for all parts. The components are passed through a 'mechanical filter' which reacts against features of the component and moves them into the required orientation or rejects them. Making a mechanical filter is a difficult art: some components do not have suitable features, they may jam, or they may interlock one with another. For these, visual methods may provide a solution.

In each of the three systems back lighting was used to obtain a high-contrast picture. In the 'Pick and Place' machine the components, having two, or perhaps more, stable rest states, were fed randomly onto a short translucent belt 3,4). The belt was carried on an X, Y, θ table so that the components could be moved and orientated. They were scanned by a vidicon camera, with the binary picture being processed in a computer. Various feature extraction procedures were used to determine on which face the component lay and the orientation of the component. The table was moved to bring the component to a standard position and orientation and, after a check on the position by the vision system, the component was removed by a simple pick and place unit and placed in one of two jigs, depending upon the inversion state of the component. The standard position for removal was defined in the vision system by reversing the action of the pick and place unit and placing a component on the table, which was then scanned by the vision system. The relative calibration of the coordinate systems of the camera and the table was performed by using the table to move components in the field of view of the camera.

The system was also used with three-dimensional objects. From the vertical view the orientation was determined and the component rotated to a predefined orientation so that a second camera could take a view from the side. From these pictures the orientation could, for many components, be determined completely.

The second machine was designed to demonstrate that a simple electronic system, incorporating a microprocessor, could be used in the orientation of components having two rest states 5). A translucent belt carried components past a linear diode-array camera, fig. 1. The binary output was processed to find the coordinates of four points: those nearest to the two edges of the belt and the first and last to be seen. These coordinates were compared with coordinates in a table corresponding to components in 128 different orientations.



Fig. 1. A component orientation machine, showing the conveyor, camera and pick-up mechanism.

From the table, stored in a read-only memory (ROM), the orientation angle and the position of the pick-up point were read and passed to a simple robot mechanism. This snatched the component from the belt, rotated it to a standard orientation, and placed it in a jig. Unrecognized and inverted components were ignored and fell from the end of the conveyor, from where they could be returned to the start of the belt.
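A sketch of the four-point lookup idea follows. The paper specifies only the four points and the 128-entry ROM table; referring the points to the silhouette centroid for position independence, and the distance threshold for rejecting unrecognized parts, are assumptions made here for illustration:

```python
import numpy as np

def extreme_points(mask):
    """Four characteristic points of a binary silhouette on the belt:
    nearest to each belt edge (min/max row) and first/last seen
    (min/max column)."""
    ys, xs = np.nonzero(mask)
    pts = [(xs[np.argmin(ys)], ys.min()), (xs[np.argmax(ys)], ys.max()),
           (xs.min(), ys[np.argmin(xs)]), (xs.max(), ys[np.argmax(xs)])]
    # Express the points relative to the silhouette centroid so the
    # signature depends on orientation only, not position on the belt.
    cx, cy = xs.mean(), ys.mean()
    return np.array([(x - cx, y - cy) for x, y in pts]).ravel()

def classify(mask, table):
    """table: (128, 8) array of reference signatures, one per stored
    orientation (the ROM of the original machine). Returns the index of
    the closest entry, or None if nothing matches well enough."""
    sig = extreme_points(mask)
    d = np.linalg.norm(table - sig, axis=1)
    best = int(np.argmin(d))
    return best if d[best] < 5.0 else None  # reject unrecognized parts
```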

A recent development was a self-learning vision system that could be used in place of the tooling of a component feeder 6,7). When most components are fed along a track against a fence, they take one of a limited number of orientations. The vision system can distinguish between these orientations and pass signals to a mechanism that can reject those having unwanted orientations or reorientate the component. The components are illuminated to produce a high contrast and scanned by a linear array camera whose binary output is processed by a single-board computer, see fig. 2. By pushing a button the machine is put into the learning phase and asks to see components in one orientation. From several components it finds the most common black/white line scan patterns. This is repeated for other orientations. Again using components in the first orientation, the sequence of these line patterns is recorded. The sequence is condensed to a standard length, to compensate for speed variations of 30 to 1. The signatures from several components are combined to produce an average signature, the master signature. In the sorting mode the set of common line patterns that have been learned are used to derive signatures which are compared with this master signature. If the match is sufficiently close the component is deemed to be in the specified orientation. An extension of the process can detect other orientations.
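The condensation and matching steps might look as follows; the standard length, the majority-vote combination (the paper says only 'average') and the mismatch threshold are assumptions for illustration, not values from the paper. Pattern codes are taken to be small non-negative integers indexing the learned set of common line patterns:

```python
import numpy as np

def condense(seq, length=64):
    """Resample a sequence of line-pattern codes to a standard length,
    compensating for belt-speed variation (the paper quotes 30 to 1)."""
    idx = np.linspace(0, len(seq) - 1, length).round().astype(int)
    return np.asarray(seq)[idx]

def learn_master(example_sequences):
    """Combine the condensed signatures of several components into the
    'master signature': at each position take the most common code."""
    stack = np.stack([condense(s) for s in example_sequences])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stack)

def in_orientation(master, seq, max_mismatch=0.15):
    """Accept a component if its condensed signature disagrees with the
    master at no more than a fraction of positions (threshold assumed)."""
    return np.mean(condense(seq) != master) <= max_mismatch
```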



[Fig. 2 diagram labels: camera, feeder, valve, microprocessor and control unit.]
Fig. 2. A trainable component selection system.

The advantages of the system are that there is no tooling in which the components can jam, no tooling to design and get into operation, the system learns a new component with the minimum of assistance, the feed velocity does not have to be accurately controlled, and the cost is little more than that of producing the tooling for a difficult component. A prototype is in use at CFT, the Philips centre for (manufacturing) technology, Eindhoven.

3. Vision equipment

Equipment was built to interface the high-speed data rate of the cameras to the low-speed data rate of the computers. From simple systems building up pictures slowly from multiple scans, taking a single sample from each scan line, the designs progressed to systems sampling the picture at 15 Mpixels/s with a 64-level grey scale. The resulting byte stream was stored in a random access memory (RAM) buffer store, from where it could be transferred at lower speeds to the computer memory. The later work was done in conjunction with our colleagues at CFT and the research laboratory in Eindhoven. This was the development of the Philips PAPS, picture acquisition and processing system 8).



This system now consists of many types of standard Eurocards for connection to various scanning systems, buffer storage, full screen storage and various types of picture processing. It is backed by a library of control and processing software. The PAPS provides a solution to the time-consuming problem of processing large amounts of data, representing a picture, in 'real time'. The speed is provided by the special hardware processing boards, and flexibility by the modular arrangement of the system. Flexibility may also be obtained through the programmability of hardware of a more general nature. Derived from the OCR work but connected with PAPS, a high speed processing system for black/white pictures was developed at PRL 9). The system operated serially and achieved its speed, about 200 times that of a normal computer when used for picture processing, by using wide instruction words and look-up tables, fig. 3. The system was programmed in a high level language from which the machine code was compiled.
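The paper does not give the instruction set, but the look-up-table principle can be illustrated: any function of a small pixel neighbourhood is precomputed into a table, so the inner loop reduces to building an index and reading the answer. A sketch under that assumption (the 3x3 erosion tabulated here is an arbitrary example, not the processor's actual operation):

```python
import numpy as np

# Build a 512-entry look-up table once: each 3x3 binary neighbourhood is
# packed into a 9-bit index, and the table stores the output pixel.
# This table encodes a simple erosion (output 1 only if all nine
# neighbours are 1); any 3x3 function could be tabulated the same way.
lut = np.array([int(i == 0b111111111) for i in range(512)], dtype=np.uint8)

def apply_lut(img, lut):
    """Serial table-driven pass over a binary image, in the spirit of the
    PRL processor: the inner loop is a table index and a lookup, not a
    per-pixel computation."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            idx = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    idx = (idx << 1) | int(img[y + dy, x + dx])
            out[y, x] = lut[idx]
    return out
```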

[Fig. 3 block diagram: a picture store (RAM) and data selector feeding look-up tables (RAM), with a state register, a FIFO result queue and a picture accessing controller.]
Fig. 3. The architecture of a high speed picture processor.

4. Robot and vision systems

The use of vision to improve the operation of a robot system during assembly was studied in several ways. An early task was the placing of rings of different diameters onto a tower having a corresponding set of diameters, but only 0.1 mm less and not coaxial 10,11). A parallel projection optical system had been developed which produced an image in the camera whose size was independent of the distance of the object from the camera 12,13). The system had the aperture of the camera lens at the focal point of a large Fresnel lens and, to obtain a large depth of field, the aperture of the camera lens was small (fig. 4). These optical systems were used in scanning the rings, which were placed on an illuminated table, and the tower, which stood in the view of two systems set at an angle, fig. 5.



Fig. 4. An optical system using parallel light to eliminate the variation of apparent object size with distance.

Thus the positions of the rings and the tower were known to the accuracy of the scanning systems. The coordinate frames of the three cameras were calibrated with respect to that of the robot by using the robot to move a pointer in their fields of view.

Fig. 5. The assembly of rings onto an eccentric tower. The positions are scanned by the cameras A, B and C.



The resulting accuracy was adequate for the robot to pick up the rings and carry them to the tower, but not to place them onto the tower. The two tower-viewing cameras were used to determine the relative positions of each ring and the tower, and the robot's position was accordingly corrected. In this way alignment was achieved and the use of vision feedback to control a robot demonstrated. The PAPS was used for the vision processing.
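In outline, such a feedback loop measures the residual ring-to-tower offset and commands a correction until it is within tolerance. A sketch with hypothetical robot and camera interfaces (the tolerance, iteration limit and calibration matrix are illustrative, not values from the paper):

```python
import numpy as np

def place_with_feedback(robot, cameras, tol=0.02, max_iter=10):
    """Iteratively null the measured ring-to-tower offset before insertion.
    `robot` and `cameras` stand in for the real interfaces; `tol` is in
    the robot's units and must be well inside the 0.1 mm clearance."""
    R = cameras.camera_to_robot  # 2x2 calibration matrix, found beforehand
    for _ in range(max_iter):
        offset = cameras.ring_position() - cameras.tower_position()
        if np.linalg.norm(offset) < tol:
            return True                   # aligned: safe to lower the ring
        robot.move_relative(-R @ offset)  # correct, then re-measure
    return False                          # failed to converge: abort
```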

The lessons learned from this work were applied to a typical production problem, that of placing a deflection unit onto the neck of a television tube, fig. 6 14). The deflection unit was fed by a conveyor to a stop where it could be backlit and its position determined by viewing the central hole. The position of the neck of the tube, the tube being screen downwards, was determined by two cameras with parallel optical systems set at an angle (fig. 4).

Fig. 6. A vision-aided robot placing a deflection unit onto the neck of a television tube.



The system was calibrated by a procedure similar to that of the ring and tower system. The difference between the hole and the neck was about 1 mm and a check of their positions just before assembly was not required. The orientation of the deflection unit is important and was found, before pick up, by scanning the windings which lie radially within the unit, fig. 7. This left a 180° ambiguity which was resolved by having special holes designed into the plastic of the unit.

Fig. 7. A view of a deflection unit showing the centre hole, the windings and the three special marker holes.

The holes were detected by the vision system and also gave a more accurate measure of the orientation. This is a simple example of easing the automation task by a minor modification to a component. To obtain the exact positioning of the unit on the neck, it was rotated slowly through the nominal position whilst being pushed down. A limited locking action which occurs at the exact position was detected by a simple sensor, and then the retaining clamp was tightened by an automatic screwdriver.

An associated task studied was the removal of the deflection units from their packing case 14). They were packed, in alternating orientations, in boxes of four, these small boxes forming several layers in each packing case. The cardboard provided only an approximate positioning of the units. A special gantry robot was made to remove the units. This carried lighting and a camera with a parallel optics viewing system.



This was necessary as the distance from the camera to the scene increased as the case was emptied. The robot was provided with a special gripper to hold the unit and to hold down the small box as the unit was withdrawn. A sequential system, useful in many applications using low-quality pictures, was used to determine the position of each deflection unit. First, a clearly identifiable feature was found in the picture, but this could not be used for location, either because it could not be delineated precisely or because it was not accurately positioned with respect to the body of the unit. From its position the approximate position of another feature was deduced and the feature located from a scan, and so on, until a feature was scanned which defined the unit's position to the required accuracy, although this feature would have been difficult to isolate at a first attempt.
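A sketch of this sequential, coarse-to-fine strategy; the detectors, offsets and window sizes stand for whatever features a particular unit offers, and all names are purely illustrative:

```python
def locate_by_chain(image, chain):
    """Follow a chain of features: each stage searches a small window
    predicted from the previous fix, so late features that would be hard
    to find in the full image become easy to find locally.
    `chain` is a list of (detector, predicted_offset, window_size)
    tuples; the first detector is run over the whole image."""
    detector, _, _ = chain[0]
    pos = detector(image, window=None)                 # coarse, easy feature
    for detector, offset, window in chain[1:]:
        if pos is None:
            return None                                # chain broken: reject
        guess = (pos[0] + offset[0], pos[1] + offset[1])
        pos = detector(image, window=(guess, window))  # refine locally
    return pos                                         # final, accurate fix
```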

5. Languages and operating systems

The special features of a system incorporating a robot and sensors result in conventional computer languages and operating systems being inconvenient to use. The system has to operate in real time and usually will have several computers or processors, each controlling a robot or sensor. Often these are of different types, as they will have been supplied as a package with the robot or sensor. The language used for much of this work was RTL/2, supplemented by Assembler when necessary. With SRI International (Stanford Research Institute), an interpreted subset of this language was developed to provide additional appropriate facilities, and was called INDA 15). This enabled programs to be modified without the need to recompile the whole program, and therefore much time could be saved during system development. Commands could be given in a condensed form and this simplified the programming task.

It was found that to construct a new system most of the software had to be totally rewritten, although there were many similarities between the new system and preceding systems. Differences existed in the control systems of the sensors and robots and, as a result, the software was not truly flexible or transportable. To avoid this time-wasting disadvantage a robot operating system, ROBOS, was developed 16). The principal components are a supervisory system and a data structure, fig. 8. A number of facilities that are generally required in the development of sensor-controlled robot systems are provided, analogous to the facilities of a computer operating system. The main control program operates through the supervisory system and passes the necessary commands on to the other processors. It checks that the set of commands is meaningful and that, in the existing state of the system, they can be obeyed, i.e. that the program is runnable. Also it performs all of the coordinate transformations so that each unit has data only in its own coordinates.



[Fig. 8 diagram: a task description program and a generated data structure feed the system supervisor, which provides runnability checking and coordinate transformation.]
Fig. 8. An overview of the robot operating system, ROBOS.

It is responsible for recovery procedures used in cases of error. The programmer sets up the data structure to be used, with the task description program, by the supervisor. The data structure contains a model of the robot, sensor system and components to be used in the task. Communication between units of the system is through standard ROBOS communication channels, which may be physical connections or implemented by shifting data in memory.
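The paper describes ROBOS only at block-diagram level; the following toy sketch shows the division of labour it implies, with invented class and method names: the supervisor owns the frame transformations and checks runnability before any command is dispatched.

```python
class Supervisor:
    """Minimal sketch of the ROBOS idea: one supervisor owns the world
    model and the frame transformations, so units (robots, sensors) only
    ever see data in their own coordinates. All names are illustrative."""

    def __init__(self, units, transforms):
        self.units = units            # name -> unit with ready()/send()
        self.transforms = transforms  # (frame, unit_name) -> function

    def run(self, program):
        """program: list of (unit_name, command, point, frame) steps."""
        # Runnability check first: every step must address a known unit
        # that is ready, before anything at all is executed.
        for unit_name, command, point, frame in program:
            if unit_name not in self.units or not self.units[unit_name].ready():
                raise RuntimeError(f"program not runnable at {unit_name}")
        for unit_name, command, point, frame in program:
            # Transform into the target unit's own frame before dispatch.
            local = self.transforms[(frame, unit_name)](point)
            self.units[unit_name].send(command, local)
```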

6. Picture processing

The image of a scene received from a camera by a computer is the view of the objects in the scene modified by the lighting of the scene, light scattering and shadowing between the objects, and the effects of the scanner and the video channel. The task of the computer is to isolate objects in the scene and to determine their positions. The basis of many systems is to delineate the boundaries of areas in each of which some property is constant, or whose boundaries are marked by a rapid change of the property. The property may be greyness or colour, and work was done on both. If there is colour in the scene, or it can be provided in the component design, and the extra complexity of the scanning and processing is acceptable, then the delineation task is simplified 17).



After all, areas of different colours may produce the same grey values in a monochrome system, and only changing the colour of the lighting will separate them. Initially, processes were used which attempted to classify the parts of each object in a scene and, when the parts had been found, to identify the complete object. A better approach was to separate the areas as before and to relate them, where possible, to the a priori information about the object. Having made some tentative matches, the scene could be explored in the relevant areas to find other expected features and thereby to confirm the presence of the object. When parts of the object could not be found, it was possible that the object was partially obscured by another object. The a priori information would have to include knowledge about the variation in the apparent shapes of objects with angle of view, e.g. an ellipse may be a circle seen at an angle.
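For the grey-level case, the area-delineation step can be sketched as a band threshold followed by connected-component labelling, each region then being summarised for matching against the a priori model. The thresholds and the choice of summary (area and centroid) are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def delineate(image, low, high):
    """Delineate the areas in which a property (here, grey level) is
    roughly constant: threshold into a band, then label the connected
    regions. Threshold values are scene-dependent placeholders."""
    mask = (image >= low) & (image <= high)
    labels, n = ndimage.label(mask)
    # One (area, centroid) summary per region, for matching against the
    # a priori object model.
    return [(int(np.sum(labels == i)), ndimage.center_of_mass(labels == i))
            for i in range(1, n + 1)]
```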

Information about distances to parts of a scene can be helpful in interpreting its image. The view from a single camera does not provide distance information, although the range may be deduced using a priori information such as the size of the object. An alternative is to use a rangefinder, and one was developed using a laser beam reflected from a double galvanometer mirror system 18). The beam could be steered onto most parts of the scene which were visible to the camera. From the known deflection angle and the position of the spot as seen by the camera, the range to that part of the scene could be calculated.
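The range calculation is simple triangulation: the beam ray and the camera's viewing ray to the spot must intersect. A planar sketch (the real system used a double-mirror scanner and full three-dimensional geometry; the parameter names and values are illustrative):

```python
import math

def spot_range(beam_angle_deg, spot_x_px, focal_px, baseline_mm):
    """Triangulate the depth to a projected laser spot.
    The projector steers the beam at beam_angle_deg from the depth axis;
    the camera, baseline_mm away along x, sees the spot spot_x_px pixels
    off-axis with a focal length of focal_px pixels."""
    tan_beam = math.tan(math.radians(beam_angle_deg))  # laser ray slope x/z
    tan_view = spot_x_px / focal_px                    # camera ray slope x/z
    # Laser ray: x = z * tan_beam.  Camera ray: x = baseline + z * tan_view.
    return baseline_mm / (tan_beam - tan_view)         # z where the rays meet

# e.g. spot_range(20.0, -40.0, 800.0, 300.0) -> depth in mm
```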

7. Factory assistance

Although not a research activity, it was considered important to establish contacts with the factories, to learn what types of automation problem existed and to stimulate interest at the factories in the possibilities offered by advanced automation. Several studies arose from this in which solutions were found for specific tasks, usually an application for vision and most frequently in the realm of inspection.

Typical of these was the location of U-shaped glass tubes on a mesh conveyor belt 19). The continuous nature of the process suggested the use of a linear array camera scanning across the belt. The discrimination between the glass and shiny areas on the belt was aided by the use of lamps placed close to the plane of the belt, with the light reflected specularly from the sides of the tubes, fig. 9. This resulted in each leg of the tube being seen as two bright lines. To allow the processing to be performed in a small computer, the processing was taken out of the picture domain as early as possible. The a priori information was the diameter, length and separation of the legs, these data being different for each batch of tubes. In each line scan, bright points separated by a specific distance were possibly from a leg. The results from a scan were compared with past history and the existence of each leg, with its start and finish points, established.



Fig. 9. Tubes on a mesh conveyor revealed by an appropriate lighting system. One tube is broken. The direction of movement in the picture is vertical.


Simple logic allowed legs to be combined into tubes, and the position of each tube on the belt was passed to the controller of a robot which would remove the tubes. The system found legs or parts of legs, and the logic processes to find the U-tubes or broken glass were appropriate to a conventional computer.
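The leg-finding and pairing logic might be sketched as follows, with the known diameter and separation as the a priori data; the tolerances, and the reduction of each tracked leg to its centre position across the belt, are assumptions made for illustration:

```python
def leg_candidates(bright_points, leg_width, tol=2):
    """One line scan across the belt: pairs of bright points separated by
    roughly the known leg diameter are candidate leg crossings.
    `bright_points` is a sorted list of positions across the belt."""
    return [(a, b) for i, a in enumerate(bright_points)
            for b in bright_points[i + 1:]
            if abs((b - a) - leg_width) <= tol]

def pair_legs(leg_centres, separation, tol=3):
    """Combine confirmed legs into U-tubes using the known leg
    separation; each leg is summarised by its centre position."""
    tubes, used = [], set()
    for i, a in enumerate(leg_centres):
        for j, b in enumerate(leg_centres[i + 1:], i + 1):
            if i not in used and j not in used and abs((b - a) - separation) <= tol:
                tubes.append((a, b))
                used.update((i, j))
    return tubes
```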

In several of the inspection tasks a very large number of pixels were scanned from each object. The task was to find faults, and these would involve only a few of these pixels. Consequently the approach used was to reject, as soon as possible, all data related to good areas of the object. For example, in looking for defects in glass sheets only the positions of those pixels whose grey level departed by more than a specified amount from that of the surrounding area were passed to the computer. Of these, many were isolated pixels due to defects below the specification limit or to noise, and were eliminated in the computer. The computer then had time to examine those clusters of pixels which might represent faults.
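A sketch of this reject-early principle: flag only pixels that depart from their local surround (a hardware stage in the original system), discard isolated flags, and hand the surviving clusters to the computer. The filter size, threshold and minimum cluster size are placeholders:

```python
import numpy as np
from scipy import ndimage

def defect_clusters(image, threshold, min_pixels=3):
    """Return pixel clusters that may represent faults, after rejecting
    all data from good areas and all isolated outlier pixels."""
    # Departure of each pixel from its local surround.
    local_mean = ndimage.uniform_filter(image.astype(float), size=15)
    outliers = np.abs(image - local_mean) > threshold
    # Group the surviving pixels and drop clusters too small to matter
    # (sub-specification defects and noise).
    labels, n = ndimage.label(outliers)
    sizes = ndimage.sum(outliers, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return [np.argwhere(labels == i) for i in keep]
```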

8. Conclusions

Only a few examples can be given here from the many lessons learned during this work. The task of processing a picture is simplified when the picture is of good quality. The correct arrangement and lighting of a scene can help to produce a good picture in which the features of interest are easier to separate, and some thought should be given to this aspect as it is probably the most important single factor in designing a successful system. Related to this is the choice of scanning system. Picture processing involves the handling of large quantities of data, and preprocessing, such as conversion to a black/white picture rather than working with the multiple levels of a grey image, will increase the processing rate.


Hardware to perform special processing functions can produce a significant increase in speed, although some loss of flexibility in processing may result. The flexibility can be retained by the use of special programmable hardware, but the drawback is then cost. The robot and the sensory systems will have their own frames of reference, and these frames may be distorted by non-linearities. Means must be provided for correcting the distortions and relating the frames. In some cases precalibration will be sufficient, but in others, where drifting may occur during operation, provision must be made for calibration during the operating cycle. Rescanning after a movement can reveal any difference between the intention and the result.

Conventional computer languages and operating systems are not ideal for robot and sensory systems. The languages require adaptation for ease of use and for writing real-time programs, and operating systems are required to simplify the complex task of controlling multi-computer, multi-machine systems. The modification of a system on the shop floor, to adapt to changes in the task or workpieces, can be simplified if self-learning features are incorporated into the programs. There is a vast range of vision processing techniques, so it can be difficult to select the most appropriate for a task and to implement it in the best way. An answer to this problem may be provided by expert systems which will guide the system designer.

It may be obvious to state, but before embarking on the design of a robot/sensor system it is important to assess the task and to determine in what way, if any, the use of the system will improve production. Can advantages be gained from the flexibility to adapt to design changes and component variation, the longer working periods, and the improved quality and consistent performance that such systems can provide? Can the system be simplified by the redesign of the components, or even by small modifications? How can the known information about the components and the environment, i.e. the a priori information, be used to aid the sensory processes? The correct answers to these questions can have a significant effect upon the viability of a system. Although more application has been made so far of vision systems alone, for inspection, and of robots without complex sensors, the combination of robots with sensors, particularly visual and tactile, is feasible within the factory. There is still much to be learned and many design aids to be developed; in the meantime the production of such systems requires specialists and will be expensive, but the opportunity to improve production must be taken.

Acknowledgements

This paper would have been much shorter but for the work in automation performed by many people, some of whom are named in the references, under the direction of their group leader, Tony Weaver.



REFERENCES

1) J. A. G. Hale and P. Saraga, Proc. 4th Int. Joint Conf. on Artificial Intelligence, Tbilisi, 775 (1975). Also in Pattern Recognition: Ideas in Practice, B. Batchelor (ed.), 231, Plenum (1975).
2) D. Paterson, A. R. Turner-Smith, J. A. G. Hale and P. Saraga, Proc. 16th Int. Machine Tool Design and Research Conf., Manchester, 137 (1975).
3) P. Saraga and D. R. Skoyles, Proc. 3rd Int. Joint Conf. on Pattern Recognition, Coronado, 17 (1976).
4) P. Saraga and J. A. Weaver, Philips Technical Review 38, 329 (1979).
5) A. Browne, Assembly Automation 1, 30 (1980).
6) A. Browne, Proc. 5th Int. Conf. on Assembly Automation, Paris (1984).
7) A. Browne, Sensor Review 4, 143 (1984).
8) F. L. Thissen, Journal A (Belgium) 26, 33 (1985).
9) P. R. Wavish, Proc. 5th Int. Conf. on Pattern Recognition, Miami (1980).
10) B. M. Jones and P. Saraga, Pattern Recognition 14, 163 (1981).
11) B. M. Jones and P. Saraga, Digital Systems for Industrial Automation 1, 79 (1981). Also in Robot Vision, A. Pugh (ed.), 209, IFS (Publications) Ltd. (1983).
12) B. M. Jones and P. Saraga, Proc. 5th Int. Conf. on Pattern Recognition, Miami, 1094 (1980).
13) P. Saraga and B. M. Jones, Proc. 1st RoViSeC, Stratford-upon-Avon, 99 (1981).
14) P. Saraga, C. V. Newcomb, P. R. Lloyd, D. R. Humphreys and D. J. Burnett, Proc. 3rd RoViSeC, Boston, 488 (1983). Also in Optical Engineering 23, 512 (1984) and in Sensor Review 4, 78 (1984).
15) W. T. Park and D. J. Burnett, Proc. 9th Int. Symp. on Industrial Robots, Washington, 281 (1979).
16) P. Saraga, D. J. Burnett and B. M. Jones, IEE Coll. on Robot Operating Systems, London (1983).
17) D. M. Connah and C. A. Fishbourne, Proc. 1st RoViSeC, Stratford-upon-Avon, 340 (1981).
18) D. M. Connah and C. A. Fishbourne, Proc. 2nd RoViSeC, Stuttgart, 223 (1982).
19) A. Browne, Sensor Review 4, 11 (1984).
