

    Meas. Sci. Technol. 8 (1997) 1399–1405. Printed in the UK PII: S0957-0233(97)84508-0

    The use of neural techniques in PIV and PTV

    I Grant and X Pan

    Fluid Loading and Instrumentation Centre, Heriot-Watt University, Edinburgh EH14 4AS, Scotland, UK

    Received 29 May 1997, accepted for publication 6 October 1997

    Abstract. The neural network method uses ideas developed from the physiological modelling of the human brain in computational mechanics. The technique provides mechanisms analogous to biological processes such as learning from experience, generalizing from learning data to a wider set of stimuli and extraction of key attributes from excessively noisy data sets. It has found frequent application in optimization, image enhancement and pattern recognition, key problems in particle image velocimetry (PIV). The development of the method and its principal categories and features are described, with special emphasis on its application to PIV and particle tracking velocimetry (PTV). The application of the neural network method to important categories of the PIV image analysis procedure is described in the present paper. These are image enhancement, fringe analysis, PTV and stereo view reconciliation. The applications of common generic net types, feed-forwards and recurrent, are discussed and illustrated by example. The key strength of the neural technique, its ability to respond to changing circumstances by self-modification or regulation of its processing parameters, is illustrated by example and compared with conventional processing strategies adopted in PIV.

    1. Introduction

    1.1. Particle image velocimetry

    Particle image velocimetry (PIV) is a technique which allows the velocity of a fluid to be simultaneously measured throughout an illuminated region, which is commonly planar. 'Seeding', flow-following particles are introduced into the flow and their motion is used to estimate the kinematics of the local fluid. The velocity of the particles is recorded using multiple exposure, quantitative, image capture methods.

    The extraction of fluid velocity information from multiple-exposure images on film, holographic plate or CCD sensor array is the key process in PIV data analysis. The methodologies adopted to solve the problem have been defined largely by the historical perspective of the investigators, drawing on techniques and hardware developed in parallel studies using image capture and analysis in other regimes.

    For low-density images, for which the individual particle images are readily distinguishable, particle tracking may be used. When the seeding density is high, image correlation methods are often adopted, allowing a local average to be obtained. Early attempts at high-density analysis also used image-transparency interrogation by laser, producing a fringe pattern from which flow parameters were derived. A full description of the method has been given (Grant 1994, 1997).

    Emphasis has recently been placed on obtaining the third velocity component, normal to the illuminated plane, in PIV studies. No universal scheme for three velocity component measurements has so far been adopted. The methods used all have considerable experimental and processing difficulties. Most approaches may be classified as stereographic (Chang et al 1984), holographic (Thompson 1989) or multiple plane (light sheet) in methodology (Utami and Ueno 1984), with some studies combining the techniques (Grant et al 1991). Variants of the stereogrammetry method making the method more robust and simple to apply (Grant and Pan 1995) have been reported.

    1.2. The neural network

    1.2.1. An introduction to the concept. The human brain is thought to consist of a three-dimensional matrix of interconnected processing units, neurons, and has the capacity for implementing simultaneous, non-linear, processing strategies. Typically the brain consists of 10^10 neurons, each of which is interconnected to 10^4 other neurons. The input signals from other neurons are modified by the interconnection efficiencies or weights. Each neuron sums, or integrates, the net inputs and then acts as a processing unit in that it either triggers, giving an electrical output, or remains dormant. The synaptic connections between the neurons are used to hold memories which can be modified and updated both during the learning (initialization) stage and during processing of data sets. The distributed nature of the processing elements is used in many neural networks.

    0957-0233/97/121399+07$19.50 © 1997 IOP Publishing Ltd


    An important characteristic of neural networks which lends them a degree of superiority over other systems is their ability to learn by example, adapting their weights in a manner determined by the processed data. A further advantage of the neural network is its ability to tolerate noise in an input pattern. If a network has been trained sufficiently, it is capable of performing well even when input patterns are noisy or incomplete.

    1.2.2. The historical development of the neural network. The first form of the neural network was devised by McCulloch and Pitts (1943). They defined the adaptive stimulus–response neuron model. Two early rules for training adaptive neuron elements were the perceptron rule and the LMS algorithm (Widrow and Hoff 1960).

    In the 1970s the adaptive resonance theory (ART) was developed, based on a number of novel hypotheses about the underlying principles governing biological neural systems (Grossberg 1976). These ideas served as the bases for later work involving three classes of ART architectures: ART1, ART2 and ART3 (Carpenter and Grossberg 1983, 1987, 1990). These are self-organizing neural implementations of pattern-clustering algorithms. Another important approach to self-organizing systems was pioneered by Kohonen (1982) with his work on feature maps.

    In the early 1980s, the outer product rules and equivalent approaches based on the early works of Hebb for training a class of recurrent (signal feedback) networks, now called the Hopfield model, were introduced (Hopfield 1982, 1984). More recently, Kosko (1987) extended some of the ideas of Hopfield and Grossberg to develop his adaptive bi-directional associative memory (BAM), a network model employing differential as well as Hebbian competitive learning laws.

    1.2.3. The neuron model. A representation of the basic features of a neuron is shown in figure 1. The inputs to the neuron arrive along the dendrites, which are connected to the outputs from other neurons by specialized junctions called synapses. These junctions alter the effectiveness with which the signal is transmitted. The axon serves as the output channel of the neuron. It is also a non-linear threshold device, producing a voltage pulse when the activation level within the cell body rises above a certain critical value.

    The neuron model may be considered as a multiple-input, single-output operator (figure 2). The Xi are the input signals from other neurons, the Wi are the interconnecting weights and Y is the output signal.

    In computational implementation each input is multiplied by a corresponding weight, analogous to a synaptic inter-connective strength, and all of the weighted inputs are then summed to determine the activation level of the neuron, denoted NET. The NET signal is further processed by an activation function f(·) to produce the output signal of the neural computational element. The activation function f(·) may be, for instance, binary, sigmoid or threshold linear (figure 3).
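
The computational element just described can be sketched in a few lines of code (a generic illustration, not an implementation from the paper; the weights and activation choices below are arbitrary):

```python
import math

def neuron(inputs, weights, activation):
    """A single artificial neuron: NET is the weighted sum of the
    inputs; the activation function f maps NET to the output Y."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return activation(net)

# The three representative activation functions of figure 3
def binary(net):            # hard threshold: fires or stays dormant
    return 1.0 if net >= 0.0 else 0.0

def sigmoid(net):           # smooth, bounded output in (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

def threshold_linear(net):  # linear above zero, clipped below
    return max(0.0, net)

y = neuron([1.0, 1.0], [0.5, -0.25], binary)  # NET = 0.25, so y = 1.0
```

Replacing `binary` with `sigmoid` gives the graded response that is needed when weights are to be trained by gradient methods.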

    Figure 1. The basic features of a biological neuron.

    Figure 2. The artificial neuron.

    Figure 3. The three representative non-linear activation functions.

    Figure 4. The feed-forwards network structure.

    1.2.4. The network structure. The advantages of neural computing are realized when the elements are connected into networks. A group of neurons may be arranged in a layer. Several layers may be interconnected. There are two basic network communication structures used within the layers, namely the feed-forwards network and the feed-back network.



    Figure 5. The feed-back network.

    (i) In the feed-forwards network the output from a node, or neuron, is passed on as input to other nodes in the following layer without any feedback to nodes in the same layer or the preceding layer (figure 4). The layers other than the input and output layers are called hidden layers. They act as filters. For example, the input signal could be the pixel pattern for the letter A. The network could then generate an output pattern, for example 01000001 (the ASCII code for the input letter A). Even an imperfect or partial letter could be recognized, depending on the pre-learning examples and activation functions used.

    (ii) Figure 5 shows a typical feed-back or recurrent network. Each node receives the input signal from, and sends an output signal to, every other node. The Hopfield neural network is an example of this type of representation.

    Neural networks developed from the two structures have different properties and, therefore, different applications. Some neural networks use both of the structures. The characteristics of these structures are illustrated in the neural network models described in the following sections.
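
The feed-forwards communication of structure (i) can be sketched as repeated application of the neuron operation, layer by layer (a minimal illustration with arbitrary hand-picked weights and sigmoid nodes; not code from the paper):

```python
import math

def forward(layers, x):
    """Forward pass through a feed-forwards net: each layer's outputs
    feed only the next layer, with no feedback. `layers` is a list of
    weight matrices, one row per node; every node applies a sigmoid."""
    for weights in layers:
        x = [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
             for row in weights]
    return x

# A tiny 2-input, 2-hidden-node, 1-output net
y = forward([[[4.0, -4.0], [-4.0, 4.0]],   # hidden layer weights
             [[3.0, 3.0]]],                # output layer weights
            [1.0, 0.0])
```

The hidden layer here plays the filtering role described above, reshaping the input pattern before the output node sees it.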

    1.2.5. Learning. The most important property of a neural network is the ability to learn. The process of learning consists of the neural network modifying its weights in response to external inputs. The learning rule determines the algorithm by which the neuron inter-connective weights are modified. There are two types of learning: supervised and unsupervised. Learning must take place before the net becomes operational.

    (i) In supervised learning a training set consists of both inputs and target outputs. A training input is presented to the net and the output of the network is calculated and compared with the desired output. The difference is fed back through the network in such a way that the weights are modified to minimize the error between the desired output and the current output. The same procedure is repeated until the error has been reduced to an acceptably low level. A typical supervised learning rule is back propagation. Using this technique, a network can be made to memorize information from the training set and perform decision making based on these rules. However, a large training set may be needed in order to produce satisfactory decision making and the training phase can be very time consuming.

    (ii) Unsupervised learning requires no target outputs and consequently no comparisons with predetermined ideal responses. Only the inputs are used to train the network. The learning algorithm modifies the network weights to produce outputs that are consistent, meaning that the application of similar inputs will produce the same output pattern. Therefore the network looks for regularities or trends in the input signals. The learning process extracts the statistical properties of the inputs and groups similar input patterns into classes. Unsupervised learning is used, for example, in the Kohonen self-organizing network and in adaptive resonance theory (ART).
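
The two learning modes can be contrasted in miniature. Both sketches below are illustrative single-step rules (the Widrow-Hoff LMS rule standing in for supervised learning, a Kohonen-style competitive update for unsupervised learning); the training data are arbitrary and none of this is code from the PIV studies:

```python
def lms_step(x, target, w, rate=0.25):
    """Supervised step (Widrow-Hoff LMS): compare the output with the
    desired target and adjust the weights to shrink the error."""
    y = sum(xi * wi for xi, wi in zip(x, w))
    return [wi + rate * (target - y) * xi for xi, wi in zip(x, w)]

def competitive_step(x, prototypes, rate=0.5):
    """Unsupervised step: no target is given; the prototype nearest the
    input (the 'winning' neuron) simply moves toward that input."""
    win = min(range(len(prototypes)),
              key=lambda k: sum((xi - pi) ** 2
                                for xi, pi in zip(x, prototypes[k])))
    prototypes[win] = [pi + rate * (xi - pi)
                       for xi, pi in zip(x, prototypes[win])]
    return prototypes

# Supervised: repeated LMS steps recover t = 2*x + 1 (bias as second input)
w = [0.0, 0.0]
for _ in range(50):
    for x, t in [((1.0, 1.0), 3.0), ((-1.0, 1.0), -1.0)]:
        w = lms_step(x, t, w)

# Unsupervised: two well separated inputs pull the prototypes apart
p = [[1.0, 1.0], [4.0, 4.0]]
for _ in range(20):
    for x in [(0.0, 0.0), (5.0, 5.0)]:
        p = competitive_step(x, p)
```

The supervised loop needs the targets 3.0 and -1.0; the unsupervised loop is given nothing but the inputs, yet the prototypes settle onto the two input clusters.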

    1.2.6. Neural network models. Many neural network models have been developed for specific applications. The following are the most widely used models.

    (i) The back propagation (perceptron) network is used mainly to recognize input patterns with pre-defined and learnt classes.

    (ii) The Kohonen self-organizing network is capable of constructing clusters of input patterns without knowledge of their distribution. This property was used by Grant and Pan (1993, 1995) in their multi-layer neural networks designed for the analysis of PIV images.

    (iii) The ART network is also used to classify input patterns. Its advantage is that it can generate new classes to cope with a continually varying environment without destroying or damaging previously learned information.

    (iv) The Hopfield neural network works differently from the other network models. The algorithm of a Hopfield neural network is equivalent to an energy optimization function derived for a specific case in a manner defined by the mathematical model of the system.

    2. Neural network applications in PIV

    The analysis of PIV images typically involves pattern recognition, classification and feature extraction. A neural network used in pattern recognition will, in general, have recognition rules established through the use of training data sets or by self-learning. From this initialization the network is able to proceed with independent identification and decision making. The consequences of its operation may be used in some cases to alter its rules or update its memory. This adaptive approach is used in most neural net models.

    2.1. The neural net applied to tracking mode PIV (PTV)

    2.1.1. Introduction. Particle image tracking is a key procedure in the analysis of low-density PIV images. The nearest neighbour and statistical analysis methods have been used to group the particle images and quantify the local displacement (Grant and Liu 1989). However, these methods are unsatisfactory for flows having rapid direction changes. The neural network method has been used for image tracking recognition to improve the efficiency of analysis.

    Work on the use of the neural network in tracking PIV was reported by Teo et al (1991). A fuzzy ART (figure 6) was used to match the particle images from the second



    Figure 6. Adaptive resonance theory (ART).

    frame to the images in the first frame. It consisted of two layers of interconnected neurons. The bottom-up weights determined the class's fuzzy maximum reference point and the top-down weights determined the class's fuzzy minimum reference point. During processing, one frame was used to create matching classes, then particle images in another frame were matched to the most similar class in an optimal way controlled by a vigilance parameter. The method was reported to work well for particles that were well dispersed (low image density) and for images that were taken a short time apart. Cenedese et al (1992) studied a multi-layer feed-forwards neural net and demonstrated the potential for neural net methods applied to PTV.

    The back-propagation network is a feed-forwards network using a back-propagation learning algorithm. This type of network has been used to distinguish all the image pairs in a PIV image and provide different labels for each pair (Yen Jia-Yush, Chen Ping-Hie and Chen Jian-Liang, private communication). The velocities of all the identified pairs are calculated and averaged to produce an average velocity. The experimental results show that the results from the proposed neural network match well those of the auto-correlation process in the uniform flow region and a 78.1% success ratio in the stagnation flow region has been claimed. This method is more suitable for uniform flow than it is for turbulent conditions.

    An approach analogous to the net optimization-function approach has been adopted by Okamoto et al (1996), who used a mechanical analogy involving spring interconnecting constraints between particle images to define an energy function which was minimized to obtain the best match between particle images in two dimensions.

    2.1.2. A case study: multi-layer, feed-forwards network for time-coded, PTV image analysis. Grant and Pan (1993, 1995) demonstrated the use of three- and four-layer, feed-forwards neural net models (figure 4). They were used successfully as competitive, adaptive filters in feature-recognition tasks typical of PTV. The four-layer net was able to distinguish directions in flow images which carry a time signature. The models exhibited a substantial improvement in performance compared with

    Figure 7. The input and output of the three-layer neural network.

    earlier statistical, non-neural, windowing methods (Grant and Liu 1989, 1990).

    Every layer consisted of the same number of nodes distributed on a two-dimensional plane. The first layer acted as an input buffer, whereas the following layers acted as filters or selection modules. The filtering characteristics of the layers were stored in the interconnecting weights and were acquired during the processing stage, following a self-learning algorithm approach inspired by the Kohonen, self-organizing, feature map.

    The analysis procedure commenced with a double-exposure, low-image-density, PIV image being pre-processed to extract particle image centre pixels. The image was then segmented into sub-images, each of which held a representative particle image centre at its origin. The sub-image dimensions were chosen to allow all possible image partners, for the particle image at the origin, to fall within the segment. A two-dimensional array was used to represent the segment pixel by pixel (figure 7).

    The segment was passed to the input layer of the neural network ready for processing. The spatially adjacent segments were consecutively processed. This was an important consideration since it allowed the memory of the flow displacements to be updated in a systematic and meaningful fashion.

    In the three-layer net the element of the array was set to 1 if a particle image centre appeared at that pixel; otherwise it was set to 0. A competition took place in a winner-takes-all fashion on the output layer and gave the best matched partner. If an element of the output array was identified by the net as a partner particle image, it gave a 1 output while the others were 0. If all the elements in the output layer were 0, no pair had been matched.
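
The winner-takes-all output stage can be illustrated schematically. The scoring function, the remembered displacement and the threshold below are hypothetical stand-ins for the net's learned weights, not the published model:

```python
def winner_takes_all(candidates, memory, threshold=0.5):
    """Pick at most one partner: each candidate displacement is scored
    against the displacement remembered from neighbouring segments; only
    the best-scoring candidate fires (1), and if no score reaches the
    threshold every output stays 0 (no pair matched)."""
    def score(c):
        d2 = (c[0] - memory[0]) ** 2 + (c[1] - memory[1]) ** 2
        return 1.0 / (1.0 + d2)       # high score = close to memory
    scores = [score(c) for c in candidates]
    best = max(range(len(scores)), key=lambda k: scores[k])
    return [1 if (k == best and scores[k] >= threshold) else 0
            for k in range(len(scores))]

# Remembered local displacement (3, 1): only the second candidate fires
out = winner_takes_all([(0, 5), (3, 1), (-2, 2)], memory=(3.0, 1.0))
```

Processing spatially adjacent segments in order, as described above, is what keeps the remembered displacement relevant to the current segment.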

    The four-layer net was applied to tagged PIV images which had been obtained using the image coding method whereby the image size was varied by altering the brightness of the laser flash (Grant and Liu 1990). Alternatively, if an electronically shuttered camera is used with continuous illumination, the duration of the shutter's open time may be used in the coding (Grant and Wang 1995). The method is easily adapted for use in colour-coded images.



    Figure 8. (a) A typical turbulent flow simulation with a systematic change in direction. (b) The success ratio (SR) as a function of the angular range. (c) The success ratio (SR) as a function of the image density.

    The input of the four-layer net held the area information of each particle image. The directionally sensitive layer was inserted immediately after the input layer. Each node in this layer had two inputs from the input layer. It compared the sizes of the two input images and activated an output giving the correct flow direction.

    (i) The performances of the three- and four-layer networks were compared with the conventional statistical method. Local means and standard deviations of the velocity were obtained using sub-image sampling on a statistically significant number of particle image pairs (Grant and Liu 1989). A significant improvement was found using synthetic turbulence data with a systematic change in direction (figure 8(a)). Figures 8(b) and (c) show the success ratio of matching as a function of the angular change of direction and the image density respectively.

    (ii) In a first experimental application, a vortex generator was used to produce a wing-tip type vortex. A Nd:YAG laser illuminated a plane normal to the mean flow. The images, captured on 35 mm film, were processed using the multi-layer, feed-forwards neural method (Grant and Pan 1995). The image pairs were found to be correctly matched on 91.7% of occasions (figure 9(a)). Conventional statistical tracking does not monitor such flows efficiently when large local changes in direction occur.

    (iii) In a second experimental application, the flow behind a circular cylinder was measured using laser illumination in a plane containing the mean flow vector and the normal to the cylinder's principal axis. The illumination was given a time signature (Grant and Liu 1990) and the four-layer neural net was used (Grant and Pan 1995) to measure the direction and magnitude of the particle velocities. A success ratio of 95.9% was obtained (figure 9(b)).

    2.2. Neural networks used in feature extraction for PIV

    2.2.1. Introduction. Image enhancement techniques are often applied to PIV images during a pre-processing stage to aid extraction of particle image characteristics. Owing to the noise often present in PIV images, normal filtering, stretching and binarization methods are not necessarily adequate. Neural network methods have been used widely to solve image recognition and feature extraction problems. Successful applications can be found in object recognition, edge detection and image coding.

    2.2.2. A case study: identifying particle image centres. Carosone et al (1995) used the Kohonen neural network (figure 10) in the recognition of partially overlapping particle images. The Kohonen network worked as an optimum classifier. It allowed single particle images to be distinguished from overlapping particle images by shape analysis. The input to the network was a vector set consisting of geometrical features of a particle spot, such as the first, second and third circularity measure, the aspect ratio and the convexity. These parameters were invariant under translation, rotation and scaling of an image. The network classified the particles into two classes: single or overlapping. Different methods of calculating the image barycentre were applied afterwards, according to the image's class. Single-exposure and multiply exposed synthetic images were tested. The neural network produced a sharp increase in the number of identified barycentres in PTV images with many overlaps.
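
Shape descriptors of this kind are straightforward to compute. A sketch using a single circularity measure and the aspect ratio (the exact feature definitions used in the cited work are not reproduced here):

```python
import math

def shape_features(area, perimeter, width, height):
    """Translation-, rotation- and scale-invariant shape descriptors of
    the kind fed to the classifier. Circularity is 1.0 for a perfect
    disc and falls as the spot becomes irregular; the aspect ratio
    grows as the spot elongates, as merged particle images tend to."""
    circularity = 4.0 * math.pi * area / perimeter ** 2
    aspect_ratio = max(width, height) / min(width, height)
    return circularity, aspect_ratio

# A disc of radius 2: area = pi * r**2, perimeter = 2 * pi * r
c_disc, a_disc = shape_features(math.pi * 4.0, math.pi * 4.0, 4.0, 4.0)

# An elongated blob: lower circularity, higher aspect ratio
c_blob, a_blob = shape_features(10.0, 20.0, 8.0, 2.0)
```

A classifier trained on such vectors need never see raw pixels, which is what makes the features robust to translation, rotation and scale.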



    Figure 9. (a) The matched image pairs obtained from the tip vortex image using the neural net method. (b) The matched image pairs obtained from the vortex shedding image using the neural net method.

    Figure 10. A Kohonen network.

    2.2.3. A case study: obtaining fringe characteristics. Carosone and Cenedese (1996) applied a back-propagation (BP) neural network to Young's fringe pattern analysis, which had the advantage of significantly accelerating the analysis compared with the inverse FFT method. The authors reported that a complexity analysis showed that the BP neural network reduced the computational load by a factor of 200 when run on the same computer. The fringe image (256 × 256 pixels) was first compressed with a Kohonen FSCL network to the size of a 256-word string. It was sent to the BP network, which employed 256 neurons in the input layer, 96 neurons in the first hidden layer, 11 in the second hidden layer and two in the output layer. The two outputs represented the fringe spacing and orientation. The network was trained with large fringe images before operation. The accuracy of the network was reported to be 96.3% for orientation and 93.6% for spacing recognition.

    2.3. Using the neural network to solve the stereo vision reconciliation problem

    2.3.1. Introduction. Stereogrammetry is one of the methods used in three-dimensional measurement. It utilizes two or more images obtained from different viewpoints to estimate the three generalized co-ordinates of a point in object space. A major difficulty in stereogrammetry, which has led to the most significant errors in general, is view reconciliation, whereby sub-images on the two views are matched to identify common areas representing identical regions of the object volume.

    By using a neural network approach it is possible to include constraints in the solution algorithm through the definition of a cost function representing the system constraints which is minimized using a distributed Hopfield network (Hopfield 1982, 1984, Hopfield and Tank 1984). The stereo view correspondence problem is developed as an optimization problem with a cost, or energy, function representing the system constraints. The approach taken below (section 2.3.2) is to define a Lyapunov function which represents the overall behaviour of the network and which has a local minimum value when the network is in a stable state, for which a best match of the views is obtained.

    The importance of the Hopfield net (figure 5) is that, while holding information about the system, it may take up dynamically stable states. The optimization of the system proceeds until the (system-defined) energy function finds its way to one of these minima, where it becomes trapped.

    2.3.2. A case study: matching stereogrammetry view pairs obtained in PTV. Grant et al (1997) have applied this approach to the reconciliation of stereo image pairs obtained in PIV studies. Each of the stereo views is pre-processed to remove noise and to identify particle images. Each particle image on the respective view is identified by an index number. The stereo correspondence problem is then represented by a two-dimensional array of neurons having row and column numbers i and j respectively. The row and column indices are used to represent the ith particle image in the first stereo view and the jth particle image in the second stereo view. A positive output from the neuron (i, j) indicates a possible match.
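
The energy-minimization idea can be sketched as follows. The match-cost matrix, the quadratic uniqueness penalties and the greedy flip schedule are illustrative assumptions standing in for the published network's constraints and dynamics (a true Hopfield net evolves through its neuron activations rather than by explicit energy evaluation):

```python
def stereo_match(cost, sweeps=5):
    """Neuron (i, j) = 1 means 'particle i in view 1 matches particle j
    in view 2'. The energy sums the match costs plus penalties whenever
    a row or column holds more or fewer than one match; each neuron is
    flipped only if the flip lowers the energy, so the state descends
    into a stable minimum."""
    n, m = len(cost), len(cost[0])
    v = [[0] * m for _ in range(n)]

    def energy():
        e = sum(cost[i][j] * v[i][j] for i in range(n) for j in range(m))
        e += sum((sum(v[i]) - 1) ** 2 for i in range(n))             # rows
        e += sum((sum(v[i][j] for i in range(n)) - 1) ** 2
                 for j in range(m))                                  # cols
        return e

    for _ in range(sweeps):
        for i in range(n):
            for j in range(m):
                before = energy()
                v[i][j] ^= 1              # trial flip of neuron (i, j)
                if energy() > before:     # revert if the energy rose
                    v[i][j] ^= 1
    return v

# Low cost on the diagonal: the stable state pairs particle i with i
match = stereo_match([[0.1, 2.0], [2.0, 0.1]])
```

In the full problem the cost entries would encode the geometric and similarity constraints between the two views rather than being supplied directly.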

    High success ratios have been reported for all image densities. The performance of the network, applied to simulated flow data for calibration purposes, was presented as a success ratio curve plotted against the particle matching density. This was defined as the averaged number of candidates in each corresponding point's interrogation area. In most instances the success ratio was greater than 97%, approaching 100% at lower image densities.



    3. Conclusion

    The development and classification of the neural network has been described with particular emphasis on areas of possible application in particle image techniques. In particular, image enhancement, feature extraction, classification and optimization procedures have been discussed and their particular usage in PIV described.

    The unique self-learning capability of the neural network has been shown to be of great use in PTV flow diagnostics when the flow is changing significantly over the recorded region. In particular, the ability of a network to modify both the parameters defining its filtering procedures and the rules chosen for evaluation makes the neural method a powerful methodology suitable for the processing of images obtained from complex flows.

    The use of the method has been discussed by citing its application in image tracking, fringe analysis, image enhancement and stereo view reconciliation. The success of its application in solving these diverse and complex problems is an indication of its likely further development and application in the field of PIV and PTV image processing.

    References

    Carosone F and Cenedese A 1996 Young's fringes analysis by neural network Proc. 8th Conf. on Laser Anemometry, Lisbon (Lodoan-IST) paper 21

    Carosone F, Cenedese A and Querzoli G 1995 Recognition of partially overlapped particle images using the Kohonen neural network Exp. Fluids 19 225–32

    Cenedese A, Paglialunga A, Romano G P and Terlizzi M 1992 Neural net for trajectories recognition in a flow Int. Symp. on Laser Anemometry, Lisbon paper 27.1

    Carpenter G A and Grossberg S 1983 A massively parallel architecture for a self-organising neural pattern recognition machine Computer Vision Graphics Image Processing 37 54–115

    Carpenter G A and Grossberg S 1987 ART2: self-organisation of stable category recognition codes for analog output patterns Appl. Opt. 26 4919–30

    Carpenter G A and Grossberg S 1990 ART3 hierarchical search: chemical transmitters in self-organising pattern recognition architectures Proc. Int. Joint Conf. on Neural Networks, Washington, DC vol 2, pp 30–3

    Chang T P, Wilcox N A and Tatterson G B 1984 Application of image processing to the analysis of three-dimensional flow fields Opt. Eng. 23 283–7 (reprinted in Grant (1994))

    Grant I (ed) 1994 Selected Papers on PIV (Bellingham, WA: The Society of Photo-Optical Instrumentation Engineers)

    Grant I 1997 Particle image velocimetry: a review Proc. Inst. Mech. Eng. C 211 55–76

    Grant I, Fu S, Pan X and Wang X 1995 The application of an in-line, stereoscopic, PIV system to 3 component velocity measurements Exp. Fluids 19 214–22

    Grant I and Liu A 1989 Method for the efficient incoherent analysis of particle image velocimetry images Appl. Opt. 28 1745–8 (reprinted in Grant (1994))

    Grant I and Liu A 1990 Directional ambiguity resolution in particle image velocimetry by pulse tagging Exp. Fluids 10 71–6 (reprinted in Grant (1994))

    Grant I and Pan X 1993 The neural network method applied to particle image velocimetry Proc. Optical Diagnostics in Fluid and Thermal Flow, SPIE Technical Conf. 2005, San Diego paper 2005-43

    Grant I and Pan X 1995 An investigation of the performance of multi-layer neural networks applied to the analysis of PIV images Exp. Fluids 19 159–66

    Grant I, Pan X, Wang X and Romano F 1997 The neural network method applied to the stereo image correspondence problem in 3 component PIV Appl. Opt. submitted

    Grant I and Wang X 1995 Directionally-unambiguous, digital particle image velocimetry studies using an image intensifier camera Exp. Fluids 18 358–62

    Grant I, Zhao Y, Tan Y and Stewart J N 1991 Three component flow mapping; experiences in stereoscopic PIV and holographic velocimetry Proc. Fourth Int. Conf. on Laser Anemometry, Advances and Applications (New York: American Society of Mechanical Engineers) pp 365–71 (reprinted in Grant (1994))

    Grossberg S 1976 Adaptive pattern classification and universal recoding: feedback, expectation, olfaction and illusions Biol. Cybernetics 23 187–202

    Hopfield J 1982 Neural networks and physical systems with emergent collective computational abilities Proc. Natl Acad. Sci., USA 79 2554–8

    Hopfield J 1984 Neurons with graded response have collective computational properties like those of two-state neurons Proc. Natl Acad. Sci., USA 81 3088–92

    Hopfield J and Tank D W 1984 Neural computation of decisions in optimisation problems Biol. Cybernetics 52 141–52

    Kohonen T 1982 Self-organised formation of topologically correct feature maps Biol. Cybernetics 43 59–69

    Kosko B 1987 Adaptive bidirectional associative memories Appl. Opt. 26 4947–60

    McCulloch W S and Pitts W A 1943 A logical calculus of the ideas immanent in nervous activity Bull. Math. Biophys. 5 115–33

    Okamoto K, Koizumi M, Madarame H and Hassan Y 1996 Temporal mapping of the spring model for particle image velocimetry Proc. Eighth Int. Symp. on Application of Laser Techniques to Fluid Mechanics (Lisbon) paper 27.1

    Thompson B J 1989 Holographic methods for particle size and velocity measurement – recent advances Proc. SPIE 1136 308–26 (reprinted in Grant (1994))

    Teo C L, Lim K B, Hong G S and Yeo M H T 1991 A neural net approach in analysing photographs in PIV Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, Charlottesville, VA vol 3, pp 1535–8

    Utami T and Ueno T 1984 Visualization and picture processing of turbulent flow Exp. Fluids 2 25–32 (reprinted in Grant (1994))

    Widrow B and Hoff M E 1960 Adaptive switching circuits 1960 IRE Western Electric Show and Convention Record vol 4, pp 96–104
