
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 31, NO. 4, AUGUST 2001

Comparison of Edge Detection Algorithms Using a Structure from Motion Task

Min C. Shin, Member, IEEE, Dmitry B. Goldgof, Senior Member, IEEE, Kevin W. Bowyer, Fellow, IEEE, and Savvas Nikiforou

Abstract—This paper presents an evaluation of edge detector performance. We use the task of structure from motion (SFM) as a black box through which to evaluate the performance of edge detection algorithms. Edge detector goodness is measured by how accurately the SFM could recover the known structure and motion from the edge detection of the image sequences. We use a variety of real image sequences with ground truth to evaluate eight different edge detectors from the literature. Our results suggest that ratings of edge detector performance based on pixel-level metrics and on the SFM are well correlated and that detectors such as the Canny detector and Heitger detector offer the best performance.

Index Terms—Edge detection, experimental comparison of algorithms, performance evaluation, structure from motion.

    I. INTRODUCTION

TABLE I gives a summary of the literature dealing with edge detector performance evaluation. Many of the older works attempted to evaluate edge detector performance using only synthetic images [1]–[5]. The motivation for using synthetic images is that it is easy to have reliable pixel-based ground truth. More recently, some authors have developed evaluation methods using pixel-based ground truth on real images. Use of real images, rather than synthetic, should inspire greater confidence in the results.1 Heath et al. [6] approach the problem from a different perspective, rating performance using human evaluation of edge detector output for real images. However, high performance on pixel-based metrics or human evaluation will not necessarily translate to high performance on computer vision tasks which follow the edge detection step.

Our work is the first to focus on task-based empirical evaluation of edge detector performance. We use structure from motion (SFM) as a black box through which to evaluate the performance of edge detection algorithms [7], [8]. We have created 18 real image sequences of laboratory scenes. These represent three different scenes of each of two types, and three densities of motion sampling (2 × 3 × 3 = 18 sequences).

Manuscript received June 23, 2000; revised February 3, 2001. This work was supported by NSF Grants CDA-9724422, EIA-9729904, IRI-9619240, and IRI-9731821. This paper was recommended by Associate Editor B. Draper.

The authors are with the Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620 USA (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).

Publisher Item Identifier S 1083-4419(01)04862-2.

1This paper has supplementary downloadable material available at http://ieeexplore.ieee.org, provided by the authors. This includes the wood block and Lego house image data sets, a set of Solaris C programs for manipulating and interpreting the images, a setup script, and a readme file. This material is 34.3 MB in size.

TABLE I
SUMMARY OF EDGE DETECTION COMPARISON METHODS

A mechanical rotation stage was used to independently record accurate motion ground truth. Knowledge of the angle between selected scene features is used to independently record ground truth for structure. Edge detector goodness is measured by how accurately the SFM recovers the known structure and motion from the edge detection of the image sequences.

We have conducted experiments with eight edge detectors. For all but the Canny and Sobel detectors, the edge detector implementation was obtained from its original authors or was validated against their results. We have conducted experiments with two different SFM algorithms, due to Taylor and Kriegman [9] and Weng et al. [10]. A train-and-test methodology is used for computing performance metrics. We have designed a large and appropriately challenging dataset. The magnitude of the error in the test results shows that the real image sequences we use are sufficiently challenging for the SFM implementations and indeed result in a meaningful ranking of the detectors. The result is an objective, quantitative, and fully reproducible comparison of edge detector performance.

    II. BACKGROUND

Table II lists 19 edge detector papers published in a selection of major journals from 1995 through 2000. The important point of this table is that there is no generally accepted formal method of evaluating edge detection. The most common form of evaluation is to show the results of a proposed new detector side-by-side with those of a version of the Canny detector, and appeal to the subjective evaluation of the reader.



TABLE II
SUMMARY OF JOURNAL-PUBLISHED EDGE DETECTION ALGORITHMS

TABLE III
PARAMETER RANGE OF EDGE DETECTORS

The appeal to the reader's visual evaluation may be aided by pointing out specific differences in some area(s) of the edge images. The lack of an accepted formal method of evaluating edge detector performance is not the result of a lack of possible approaches. Table I lists 14 different approaches proposed in the literature since 1975. The important point here is that, to date, no method has become generally adopted by the research community.

A. Edge Detectors

We have evaluated eight edge detectors in this framework. In this section, the edge detectors, their parameters (Table III), and the sources of the implementations are described. We have selected the algorithms so that there is diversity in their approaches. We have used the same implementations of the detectors as other edge detection evaluation frameworks [6], [38], [39], so that the results of the evaluations can be compared. The implementations of the Bergholm and SUSAN detectors were obtained from the original authors. The implementation of the Rothwell detector was translated from the C++ implementation in the IUE. The implementation of the robust

anisotropic edge detector was checked against the original by computing values in [40]. The Canny detector was rewritten at USF from an implementation obtained from the University of Michigan. The Sobel was written at USF. Note that nonmaximal suppression and/or hysteresis were added to several detectors, in order to have all of the detectors produce edge maps with single-pixel edges.

The robust anisotropic edge detector [40] applies nonuniform smoothing to the image and selects the outliers of the robust estimation framework as edges. It estimates the noise level within the image using the median absolute deviation (MAD). The detector is intended to be parameterless, computing the MAD automatically for each image. The implementation we use scales the estimated noise level, giving one parameter. Nonmaximal suppression (taken directly from the Canny detector implementation) was added to the algorithm.
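As an illustration of the MAD-based noise estimate, the following is a minimal Python sketch. Applying the estimator to gradient magnitudes and scaling by 1.4826 (which makes the MAD a consistent estimate of the standard deviation under Gaussian noise) follow common practice for robust anisotropic diffusion [40]; the function name and these details are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def mad_noise_estimate(image):
    """Estimate image noise from the median absolute deviation (MAD)
    of the gradient magnitudes (illustrative assumption). The factor
    1.4826 makes the MAD consistent with the standard deviation for
    Gaussian noise."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    mad = np.median(np.abs(grad_mag - np.median(grad_mag)))
    return 1.4826 * mad
```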

The Bergholm edge detector [41] applies the concept of edge focusing to find significant edges. The image is first smoothed at a coarse resolution (high smoothing). After applying nonmaximal suppression, the edges exceeding a contrast threshold are identified. Then the locations of these edges are refined by tracking them to a finer resolution (less smoothing).

The Canny edge detector [42] is widely considered the standard method of edge detection. The image is smoothed with a Gaussian filter, where σ is the standard deviation of the filter. Then, the edge direction and strength are computed. The edges are refined with nonmaximal suppression for thin edges and hysteresis thresholding (low and high) for continuous edges. The implementation used in this work enforces a minimum smoothing window size for any positive value of σ.
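The following is a minimal sketch of the Canny pipeline in Python; it is not the USF implementation used in the paper. The parameter names (sigma, low, high) mirror Table III, while the Sobel derivative kernels, the four-bin direction quantization, the normalization of the thresholds, and the wrap-around border handling are simplifying assumptions.

```python
import numpy as np
from scipy import ndimage

def canny_sketch(image, sigma=1.0, low=0.1, high=0.3):
    """Canny [42] in four steps: Gaussian smoothing, gradient
    computation, nonmaximal suppression, hysteresis thresholding."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12                  # normalize to [0, 1]
    # Quantize gradient direction into four bins for suppression.
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = (np.round(angle / 45.0).astype(int) % 4) * 45
    nms = np.zeros_like(mag)
    offsets = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    for d, (dr, dc) in offsets.items():
        sel = bins == d
        fwd = np.roll(mag, (-dr, -dc), axis=(0, 1))
        bwd = np.roll(mag, (dr, dc), axis=(0, 1))
        keep = sel & (mag >= fwd) & (mag >= bwd)
        nms[keep] = mag[keep]                 # keep local maxima only
    # Hysteresis: keep weak edges only if connected to a strong edge.
    strong = nms >= high
    weak = nms >= low
    labels, _ = ndimage.label(weak)
    return np.isin(labels, np.unique(labels[strong])) & weak
```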

The Heitger edge detector [43] uses a suppression and enhancement concept. First, the image is filtered with odd and even symmetrical Gabor filters. The filter responses that do not correspond to the position and type of a given image curve are suppressed, while features that do correspond are enhanced. The edges are thinned with nonmaximal suppression. The original implementation contains 14 parameters. We have fixed 12 parameters at their default values and tuned only the Gaussian envelope σ and the edge strength threshold T.

The Rothwell edge detector [44] is similar to the Canny edge detector, except that 1) nonmaximal suppression is not used, since it is claimed to fail at junction points, and 2) hysteresis is not used, due to the belief that edge strength is not relevant to higher level image processing tasks. Rather than hysteresis, the detector uses dynamic thresholding, where the threshold for determining the edges varies across the image to improve the detection of junctions. The image is smoothed with a Gaussian filter of size σ. Pixels are classified as edges if the gradient exceeds a threshold. Then, the edges are thinned.

The Sarkar-Boyer edge detector [45] is an optimal zero crossing operator (OZCO). In order to obtain a good approximation and optimal response, the optimal infinite-length response is approximated recursively. First, an infinite impulse response (IIR) filter (smoothing size determined by a parameter) is applied to the image. Then, after computing the third derivative of the smoothed image, the edges with negative directional


slopes are eliminated. Then, the false positive edges are further reduced by applying hysteresis.

The Sobel detector [46] uses gradient operators along the x and y directions to compute the gradient magnitude. In the implementation, we added nonmaximal suppression and hysteresis taken directly from the Canny detector.
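A sketch of the Sobel gradient magnitude computation, using SciPy's Sobel kernels rather than the USF implementation; nonmaximal suppression and hysteresis (borrowed from the Canny implementation, as described above) would follow this step.

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(image):
    """Gradient magnitude from Sobel operators along x and y [46]."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)   # gradient along x (columns)
    gy = ndimage.sobel(img, axis=0)   # gradient along y (rows)
    return np.hypot(gx, gy)
```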

The smallest univalue segment assimilating nucleus (SUSAN) detector [47] calculates the edge strength from the number of pixels with brightness similar to the nucleus within a circular mask. The USAN of each pixel is computed by placing the mask's center at the pixel. Then, if the area of the USAN is about half the area of the mask, the pixel is considered an edge candidate. Then, nonmaximal suppression is used to thin the edges.
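An illustrative USAN computation for a single interior pixel follows. The hard brightness comparison is a simplification (SUSAN's actual similarity function is smooth), the restriction to interior pixels is an assumption, and the default brightness threshold mirrors the T = 12.5 setting visible in Fig. 7.

```python
import numpy as np

def usan_area(image, row, col, radius=3, t=12.5):
    """Count pixels inside a circular mask whose brightness is within
    t of the nucleus (the mask center), i.e., the USAN area [47].
    Assumes (row, col) is at least `radius` pixels from the border."""
    rr, cc = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    circular = rr**2 + cc**2 <= radius**2
    patch = image[row - radius:row + radius + 1,
                  col - radius:col + radius + 1].astype(float)
    similar = np.abs(patch - float(image[row, col])) <= t
    return int(np.count_nonzero(similar & circular))
```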

    B. Structure from Motion

A structure from motion (SFM) algorithm determines the structure (depth information) of a scene and the motion of the camera. The Taylor algorithm [9] is a nonlinear algorithm and requires multiple iterations to converge to a solution, while the Weng algorithm [10] is a linear algorithm. These algorithms were tested against each other in [9]. Since the Weng algorithm can only handle three-frame sequences, it is tested only on three-frame sequences. Therefore, the results are presented in two different settings: 1) using long sequences with only the Taylor algorithm, and 2) using three-frame sequences with both algorithms. We obtained the implementations of the Taylor and Weng algorithms from the authors of [9].

The Taylor (nonlinear) SFM algorithm extracts the 3-D location of each line in camera coordinates and computes the motion of the camera, given images with corresponded lines. It formulates the problem in terms of an objective function that measures the disparity between each projected 3-D line and its corresponding observed two-dimensional (2-D) line. The algorithm iterates, searching for the structure and motion estimates which minimize the objective function. A minimum of three images and six lines is required by the SFM algorithm, and more images and lines are allowed. The algorithm generates an initial random guess of the camera position for each iteration using the initial motion information. We found that without any initial motion information, the algorithm usually manages to converge to a solution, but only after a far greater number of iterations. In order to speed up the optimization process for our experiments, the ground truth (GT) rotation angle is provided as a good initial guess.

The Weng (linear) algorithm uses image sequences with three frames only. It accepts corresponded lines that are present in all three frames. Given the pixel locations of the end points of the lines, the SFM algorithm estimates the rigid motion of the camera in terms of rotation and translation. Lines are represented in parametric form. A set of intermediate parameters is computed from the set of lines, and the motion parameters are then computed from these intermediate parameters. The structure vector, including its sign, is computed using the majority positive-z assumption, which states that for most of the lines, the point on the line closest to the origin has a positive z component. A minimum of 13 lines is required.

Fig. 1. Overview of the framework: intensity images, Canny edge maps, lines used for SFM, and the projection of the SFM output for the corresponding camera motion of the estimated structure. Note that the edges found on the right boundary of the intensity image are due to the intensity difference along the background and the rightmost columns of the image. This did not affect the results, as the edges found along those columns are not used by the SFM algorithm since they were not specified as GT lines for correspondence.

    III. FRAMEWORK

The comparison methodology involves four steps:
1) edge detection;
2) intermediate processing;
3) SFM;
4) evaluation of the result.

    An overview of this process is depicted in Fig. 1.

    A. Intermediate Processing

Intermediate processing involves extracting lines from the edge maps and determining their correspondence. The algorithms used are not necessarily the most sophisticated. Since the overall goal is to compare the edge detectors, it is only important that the intermediate processing algorithms not give any relative advantage to any particular detector.

1) Line Extraction: Initial pixels of edge chains are found in a raster scan. The eight-connected neighboring edge pixels are recursively linked until 1) there are no more neighbors or 2) there is more than one neighbor, indicating a possible junction. Then, edge chains are divided at high-curvature points. Next, edge chains are further broken using the polyline splitting technique [48]. Then, least squares is used to fit a line to each chain [49]. The line segment is represented by its endpoints, found by projecting the end pixels of the chain onto the fitted line.
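A sketch of the final fitting step, assuming a total least squares (PCA) formulation; the paper's fit follows [49] and may differ in detail, and the function name and array layout here are illustrative.

```python
import numpy as np

def fit_line_segment(chain):
    """Fit a line to an edge chain and return segment endpoints.
    The line passes through the centroid along the principal
    direction of the chain pixels; the endpoints are the projections
    of the chain's first and last pixels onto that line."""
    pts = np.asarray(chain, dtype=float)      # (N, 2) pixel coordinates
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)  # principal direction
    direction = vt[0]
    ends = [centroid + np.dot(p - centroid, direction) * direction
            for p in (pts[0], pts[-1])]
    return np.array(ends)                     # 2x2 array of endpoints
```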


Fig. 2. Manually selected GT lines of the LegoHouse1 (LH1) and WoodBlock1 (WB1) image sequences. Note that the lines around the LegoHouse roof are not selected, since those lines were not straight due to the bumps of the Lego blocks.

Finally, lines shorter than a minimum length threshold (50.0 pixels) are eliminated.

2) Line Correspondence: The input to the SFM is a set of line correspondences across a sequence of frames. Manually matching the lines would provide the most error-free matching. However, it is not practical to manually match lines from 18 image sequences for eight edge detectors, where each edge detector will be tested on a minimum of 177 (for three parameters), 41 (for two parameters), or 17 (for one parameter) parameter settings (see Section III-E for determining the minimum number of attempted parameter settings). Therefore, an automated line matching program was developed

1) to provide correct correspondence information to the SFM algorithm, so that the quality of the SFM output is due to the quality of the edge map;

2) to make the framework practical, since manual matching is not feasible;

3) not to give any particular advantage or disadvantage to any edge detector.

First, the ground truth (GT) lines in all images of the sequence are manually defined and corresponded; each GT line is indexed by image and line number. An example of the GT lines for correspondence is shown in Fig. 2. As GT, we defined lines that 1) are straight and 2) have enough separation from other lines, since lines that are too close might lead to wrong correspondences. Second, the machine estimated (ME) lines from the line extraction step are corresponded automatically using the GT lines. If an ME line matches a GT line, it is labeled with the index of that GT line. Two lines are corresponded if the following two conditions are met (a sketch of both tests follows the list below).

1) Collinearity: If the sum of the perpendicular distances from the endpoints of the ME line to the GT line is less than a threshold (5.0 pixels), they are considered to be collinear.

2) Overlap: Two line segments could be collinear, yet not belong to the same part of the object/image. So, the ME line is projected onto the GT line. The projection and the GT line could be oriented in four different ways (refer to Fig. 3). Obviously, orientation #1 indicates overlap while #4 indicates nonoverlap. Since one GT line segment could be broken into several ME line segments in one image, if the projection is completely within the GT line (#2), they are corresponded. Also, if they partially overlap (#3) by at least a threshold percentage (80.0%) of the GT line, they correspond. Note that correspondence of many GT lines to one ME line is not allowed.
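The sketch below implements the two tests under stated assumptions: segments are passed as 2x2 arrays of endpoints, d_max and overlap_min mirror the 5.0-pixel and 80% thresholds above, and the function name and exact handling of borderline cases are illustrative.

```python
import numpy as np

def lines_correspond(me, gt, d_max=5.0, overlap_min=0.80):
    """Return True if ME segment `me` corresponds to GT segment `gt`
    per the collinearity and overlap tests described above."""
    p0, p1 = np.asarray(gt, dtype=float)
    gt_len = np.linalg.norm(p1 - p0)
    direction = (p1 - p0) / gt_len
    normal = np.array([-direction[1], direction[0]])
    me = np.asarray(me, dtype=float)
    # Collinearity: summed perpendicular distance of the ME endpoints
    # to the GT line must be below d_max.
    if np.abs((me - p0) @ normal).sum() >= d_max:
        return False
    # Overlap: project the ME endpoints onto the GT line.
    t = np.sort((me - p0) @ direction)
    within = t[0] >= 0.0 and t[1] <= gt_len        # case #2 in Fig. 3
    overlap = min(t[1], gt_len) - max(t[0], 0.0)   # case #3 in Fig. 3
    return bool(within or overlap / gt_len >= overlap_min)

# Example: an ME segment covering most of a collinear GT segment.
print(lines_correspond(me=[(1.0, 5.0), (1.5, 95.0)],
                       gt=[(0.0, 0.0), (0.0, 100.0)]))  # True
```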

    Fig. 3. Four possible orientations of collinear lines.

    Fig. 4. Image scenes. First images of six original sequences are shown.

The SFM requires a minimum of three correspondences (correspondences over a minimum of three frames) for each line. Lines with fewer than three correspondences are dropped.

B. Imagery Design

Tables I and II show that almost all previous comparisons have used only a handful of images (synthetic and real). Ideally, an evaluation dataset should be large and thorough in order for the users of the methodology to have confidence in the evaluation.

1) Long Sequences: In this work, image sequences were captured for two different scene types: LegoHouse (LH) and WoodBlock (WB) (Fig. 4). For each scene type, three original sequences were obtained (two scene types × three original sequences per scene type = six original sequences). Then, two shorter derived sequences were extracted from each original sequence (six original × two derived per original = 12 derived), resulting in 18 total sequences (six original + 12 derived). This large dataset of 18 sequences containing 133 unique images (278 including the repeated images in derived sequences) is carefully designed considering different aspects influencing the edge detectors and the structure from motion task, such as

1) number of images in a sequence;
2) total rotation angle;
3) number of lines in the scene;
4) average number of correspondences per line;
5) average line length;
6) number of contrast levels within the image;
7) amount of occlusion and transparency (which result in natural noise) (refer to Table IV).

2) Three-Frame Sequences: In order to compare the Taylor

and Weng algorithms, sequences consisting of three frames were created. Twelve sequences were created from the LegoHouse scenes by taking nonoverlapping subsequences of the original LegoHouse sequences (refer to Table V). For instance, LegoHouse.3frame1.A, B, and C are created by taking three


TABLE IV
PROPERTIES OF LONG IMAGE SEQUENCES, WITH DERIVED SEQUENCES DENOTED BY .A OR .B SUFFIX

TABLE V
PROPERTIES OF THREE-FRAME IMAGE SEQUENCES

nonoverlapping subsequences from LegoHouse1. The three frames are selected so that 1) the minimum required number of line segments in the scene is available and 2) the total motion is large enough to allow accurate estimation (at least 20°).

C. Ground Truth

The ground truth is manually defined in terms of motion and structure. The motion of the object is described by the rotation axis and the rotation angle between frames. Image sequences are captured by rotating an object on a rotation stage by a predetermined angle between frames. These angles correspond to the GT rotation angles [refer to Fig. 5(a)]. In order to determine the GT rotation axis of the stage in camera coordinates, a cube is placed on the calibrated rotating stage so that the straight edge of the cube is aligned with the rotation axis. Intensity and range images are taken using a K2T structured-light range sensor. The three-dimensional (3-D) rotation axis is computed by

1) picking two points defining the endpoints of the rotation axis;
2) getting the 3-D locations of the two points using the range image;
3) normalizing the vector defined by the two points.

The structure GT is defined by a set of pairs of lines on the object and the angle between them [refer to Fig. 5(b)].

Fig. 5. Motion and structure ground truth: (a) motion; (b) structure.

    Fig. 6. Motion error measurement.

For instance, lines #1 and #3 in Fig. 5 are a pair making a 90° angle.

D. Performance Metrics

The machine estimated (ME) result is compared with GT in motion and structure. The ME motion of the camera which the SFM produces is converted to the motion of the object by reversing the sign of the rotation angle while keeping the same rotation axis. The two measurements of the motion (rotation axis and rotation angle) are combined by the following method (refer to Fig. 6). First, an arbitrary point P is chosen as a common starting point for the GT and ME motions. For each camera orientation, P_GT is computed by moving P with the GT rotation axis and angle, while P_ME is computed by moving P with the ME rotation axis and angle. Then the motion error is computed as MotionError(%) = (d / D) × 100, where D is the distance traveled from P to P_GT and d is the distance between P_GT and P_ME. The structure error is measured by computing the absolute angle difference between an ME angle and its corresponding GT angle.
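A numerical sketch of the motion error metric, assuming rotations about an axis through the origin; the function names and the Rodrigues-formula implementation are illustrative, not the paper's code.

```python
import numpy as np

def rotate(point, axis, angle_deg):
    """Rotate a 3-D point about a unit axis through the origin
    (Rodrigues' rotation formula)."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    p = np.asarray(point, dtype=float)
    th = np.deg2rad(angle_deg)
    return (p * np.cos(th) + np.cross(k, p) * np.sin(th)
            + k * np.dot(k, p) * (1.0 - np.cos(th)))

def motion_error_percent(p, gt_axis, gt_angle, me_axis, me_angle):
    """Move an arbitrary point with the GT motion and with the ME
    motion; report their separation d as a percentage of the
    distance D traveled by the GT-moved point."""
    p = np.asarray(p, dtype=float)
    p_gt = rotate(p, gt_axis, gt_angle)
    p_me = rotate(p, me_axis, me_angle)
    d = np.linalg.norm(p_gt - p_me)
    big_d = np.linalg.norm(p_gt - p)
    return 100.0 * d / big_d

# Example: 20 degree GT rotation about z versus a 21 degree estimate.
print(motion_error_percent([1.0, 0.0, 0.0], [0, 0, 1], 20.0,
                           [0, 0, 1], 21.0))  # about 5%
```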

E. Parameter Training

This section describes the automated method of training the parameters of the edge detectors. The goal is to objectively and


automatically find the parameter setting of the edge detector that yields the best SFM performance.

The evaluation of detectors depends on the parameters of the edge detector (up to three parameters), line extraction (three parameters), and line correspondence (two parameters). Finding the best setting of up to eight parameters is computationally infeasible. Therefore, the following method was established.

First, good parameter settings for line extraction and line correspondence were found after observing many runs of the experiment: 5.0 pixels and 90.0° for line extraction, a 50.0-pixel minimum line length, a 5.0-pixel collinearity threshold, and an 80.0% overlap threshold. These values are fixed for all experiments with all detectors.

An adaptive method is used to search for the best edge detector parameter values. An initial uniform sampling of the parameter range is tested (5 × 5 × 5 for three-parameter detectors). The area around the best parameter point in this coarse sampling is further subsampled (3 × 3 × 3) with the previous best at the center. A minimum of two subsamplings is executed, resulting in a minimum of 177 different parameter points for three-parameter edge detectors. Subsampling is continued while there is a 5% or greater improvement over the previous best. The parameters are trained separately for structure and motion. In our experiment, the best parameter setting was found after an average of 3.46 subsamplings and a maximum of six subsamplings.
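A sketch of this coarse-to-fine search, under stated assumptions: evaluate(params) returns the SFM error for a parameter vector, the refinement radius halves each round, and the 5-then-3 per-parameter grid sizes follow the sampling described above; all names are illustrative.

```python
import itertools
import numpy as np

def adaptive_search(evaluate, lows, highs, improve=0.05, min_rounds=2):
    """Coarse uniform sampling followed by repeated subsampling around
    the best point, continuing while the error improves by at least
    `improve` (5%) and for at least `min_rounds` subsamplings."""
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    best_p, best_e = None, np.inf
    # Coarse stage: five uniform samples per parameter.
    grids = (np.linspace(l, h, 5) for l, h in zip(lows, highs))
    for p in itertools.product(*grids):
        e = evaluate(np.array(p))
        if e < best_e:
            best_p, best_e = np.array(p), e
    radius = (highs - lows) / 4.0  # one coarse grid step
    rounds = 0
    while True:
        radius /= 2.0
        prev_e = best_e
        # Refinement stage: three samples per parameter, centered on
        # the previous best point.
        axes = (np.linspace(c - r, c + r, 3)
                for c, r in zip(best_p, radius))
        for p in itertools.product(*axes):
            q = np.clip(p, lows, highs)
            e = evaluate(q)
            if e < best_e:
                best_p, best_e = q, e
        rounds += 1
        if rounds >= min_rounds and (prev_e - best_e) < improve * prev_e:
            return best_p, best_e
```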

The adaptive search is not guaranteed to find the globally best parameter setting. In fact, in some instances, better motion performance was observed with the structure-trained parameters than with the motion-trained parameters, or vice versa. However, only 18 such occurrences (eight for the Anisotropic, one for the Bergholm, three for the Canny, one for the Heitger, one for the Rothwell, one for the Sarkar, and three for the SUSAN) were found during 252 (18 sequences × seven edge detectors × two error metrics) trainings. In addition, the differences were minimal: the mean difference was 2.12% (motion) and 1.06° (structure). Eleven of the 18 instances are from the one-parameter edge detectors (the Anisotropic detector and the SUSAN detector). The maximum motion difference of 8.34% was observed for the SUSAN detector trained on the WoodBlock #2.B sequence, with motion errors of 18.04% (motion-trained) and 9.70% (structure-trained). The maximum structure difference of 3.06° was observed for the Anisotropic detector trained on the LegoHouse #3.B sequence, with structure errors of 2.38° (motion-trained) and 5.44° (structure-trained).

To further check the effectiveness of the adaptive search, we made one comparison against an exhaustive search. Since the average number of subsamplings observed was 3.46, we took the exhaustive search to a similar resolution with a correspondingly fine uniform parameter sampling. We trained the Canny detector on WoodBlock #1.A. The best motion performance found by the exhaustive search was 3.80% (σ = 0.27263, low = 0.63158, high = 0.94737). Compared to the best of the adaptive search (3.82%), the difference (0.02%) was minor; in fact, the adaptive result was better than the fourth best found by the exhaustive search (3.90%). For structure, the best exhaustive performance was 0.310° (σ = 1.32316, low = 0.89474, high = 0.94737); compared to the 0.920° of the adaptive search (which was better than the 25th best found by the exhaustive search), the difference was 0.610°.

    IV. RESULTS

The results are divided into three major sections:
1) long sequence experiment;
2) three-frame sequence experiment;
3) effect of line characteristics on SFM convergence.

In the long sequence experiment, the Taylor algorithm has been used as the task. The section is subdivided into
1) train;
2) test within the same scene type;
3) test across the scene types.

In the three-frame sequence experiment, the Taylor and Weng algorithms have been used as the task. The goal of the three-frame section is to examine 1) the effect of the linear versus nonlinear nature of the SFM algorithms and 2) the effect of using the minimum number of frames required in a sequence. The section is divided into 1) train and 2) test within the same scene. The section on the effect of line characteristics on SFM convergence analyzes how the characteristics of the edges influence performance. It is divided into

1) line input characteristics;
2) SFM setting;
3) line characteristic analysis for the Sobel detector.

Notice that detailed results for the Sobel are absent. This is because the SFM algorithm could not converge with the Sobel's edge output for any parameter setting on the WoodBlock #1, #2, and #3 sequences. We lowered the minimum line length threshold to 25 pixels, which should result in more lines and correspondences, and the Sobel still did not converge. We lowered it even further to 15 pixels and got convergence only with the WoodBlock #1 sequence. The trained motion error was 4.92%, which is poor compared to the other detectors. An investigation of the reasons for the poor performance of the Sobel detector is given in Section IV-C.

A. Long Sequences

In this section, 18 long sequences are tested on seven detectors. First, the train section describes the results of training the parameters of the edge detectors for each sequence. Second, the test within the same scene type section takes the trained parameters and tests them on the sequences within the same scene type. Third, the test across scene type section tests the trained parameters of a sequence of one scene type on the sequences of the other scene type. All the results are divided into four categories:

1) LH-motion;
2) LH-structure;
3) WB-motion;
4) WB-structure.

Each section contains a table with the mean error and convergence rate (Tables VI, VIII, and XVII) and a table of relative ranking (Tables VII, XVI, and XVIII). The mean error is computed from all converging training or testing sequences within each category. Since nonconvergence of the SFM algorithm occurs during both training and testing, the convergence


TABLE VI
TRAIN RESULTS AND THE CONVERGENCE RATE OF ALL SEQUENCES, WHERE THE FIRST NUMBER IS MEAN ERROR. CONVERGENCE RATE IS SPECIFIED BY NUMBER OF CONVERGING SEQUENCES/NUMBER OF SEQUENCES IN THE SCENE TYPE. BEST IN EACH CATEGORY IS HIGHLIGHTED IN BOLDFACE. MOTION UNIT IS DISTANCE % DIFFERENCE AND STRUCTURE UNIT IS ANGLE DIFFERENCE IN DEGREES

TABLE VII
RELATIVE TRAINING PERFORMANCE, WHERE EACH CELL INDICATES a/b: THE EDGE DETECTOR IN THE ROW PERFORMED BETTER THAN THE EDGE DETECTOR IN THE COLUMN IN a OF b CASES, AND * MARKS CASES WHERE THE EDGE DETECTOR IN THE ROW SIGNIFICANTLY OUTPERFORMED THE EDGE DETECTOR IN THE COLUMN

is shown in x/y format, where x is the number of convergences and y is the number of attempts. The table of relative ranking is shown to examine the statistical significance of training and testing. Each table consists of four subtables corresponding to the four categories. Each subtable is a matrix holding a/b in each cell, where b is the number of sequences for which both the detector in the row and the detector in the column

TABLE VIII
TEST WITHIN THE SAME SCENE RESULTS AND THE CONVERGENCE RATE, WHERE BEST IN EACH CATEGORY IS HIGHLIGHTED IN BOLDFACE, MOTION UNIT IS DISTANCE % DIFFERENCE, AND STRUCTURE UNIT IS ANGLE DIFFERENCE IN DEGREES

converged, and a is the number of times the detector in the row outperformed the detector in the column. An asterisk (*) is added in a cell where the detector in the row significantly outperformed the detector in the column, i.e., where one is better than the other two-thirds or more of the time. If an entire row contains *, the detector in the row significantly outperformed all other detectors. If an entire column contains *, all other detectors outperformed the detector in the column.

1) Train: First, the training results show that in all 28 instances (four categories × seven edge detectors = 28), each edge detector performed better on WoodBlock scenes than on LegoHouse scenes, indicating that the LegoHouse scenes are harder. Second, the Bergholm detector performed the best in the two categories WB-motion and WB-structure (refer to Table VI). The Canny detector and the Rothwell detector shared second place, each having the best performance in one category. The Anisotropic, Heitger, and Sarkar detectors ranked closely as fourth. The SUSAN detector was last in all four categories. Third, referring to the relative ranking (Table VII), the Bergholm detector was significantly better than the other detectors in 23 out of 24 cases (six other detectors × four categories = 24), while the SUSAN detector was significantly outperformed by all other detectors nearly all of the time (23 out of 24). Though the mean error might suggest that the Sarkar detector's performance was similar to the Rothwell detector's, the relative ranking shows that the Rothwell detector significantly outperformed the Sarkar detector in three out of four categories.

As can be seen in Figs. 7 and 8, false positive edges do not seem to play a significant role, since they were usually short edge chains that were eliminated during the short-line removal step. Also, a general trend of more and longer lines giving better results seems visually confirmed in Fig. 7.

In Tables IX–XV, the trained parameter settings for the 18 sequences are shown. Note that the parameter settings are sparsely distributed within the parameter space.

2) Test Within the Same Scene Type: The parameters selected by training on one sequence were tested on the other sequences within the same scene type, using leave-one-out style testing. Each sequence is tested with the trained parameters of all other sequences of the same type, for motion and structure separately. Therefore, for each edge detector we had 72 tests (nine sequences × eight trained parameter settings from other sequences) for the four categories (motion and structure, for LegoHouse and WoodBlock).

The test edge maps sometimes resulted in a set of corresponded lines from which the SFM algorithm could not converge to any solution, even after 1750 iterations. In order to take this


Fig. 7. Dataset where the greatest motion error difference between a pair of edge detectors is shown: LegoHouse #2.A. Canny detector (σ = 0.01, low = 1.0, high = 0.812) with 6.82 and the SUSAN (T = 12.5) with 68.43.

Fig. 8. Dataset where the least motion error difference between a pair of edge detectors is shown: LegoHouse #3.A. Canny detector (σ = 0.32188, low = 0.96875, high = 0.78125) with 6.002. The Heitger (σ = 1.90625, T = 9.375) with 6.005.

problem into consideration, 1) the convergence rates are compared and 2) the test results of all converging trials of each edge detector are presented.

The Canny detector showed the best convergence rate at 98.6%, while the SUSAN detector had the worst at 58.0% (refer

TABLE IX
TRAINED PARAMETER SETTINGS FOR THE ANISOTROPIC EDGE DETECTOR, WHERE n/a INDICATES THAT NO PARAMETER SETTING RESULTED IN SFM CONVERGENCE

TABLE X
TRAINED PARAMETER SETTINGS FOR THE BERGHOLM EDGE DETECTOR

to Table VIII). Lower convergence rates were observed for the LegoHouse scenes (75.8% for motion and 73.4% for structure) than for the WoodBlock scenes (91.7% for motion and 92.9% for structure). This suggests that the LegoHouse scene type is more difficult than the WoodBlock, as was also shown in the train results.

The Canny detector performed the best in three test categories: motion-LH, motion-WB, and structure-WB (refer to Table VIII). The Anisotropic detector placed first in structure-LH. As in the training performance, the SUSAN detector performed the worst in all four categories. The Bergholm detector, which showed the best train performance, was behind the Heitger detector. The Canny detector was never significantly outperformed by any other edge detector. In fact, the Canny detector outperformed the Rothwell detector and the SUSAN detector in all four categories. The Heitger


TABLE XI
TRAINED PARAMETER SETTINGS FOR THE CANNY EDGE DETECTOR

TABLE XII
TRAINED PARAMETER SETTINGS FOR THE HEITGER EDGE DETECTOR

detector significantly outperformed the SUSAN detector in all four categories.

3) Test Across Scene Type: We also tested the parameters trained on all nine WoodBlock sequences on LegoHouse #1, and the parameters trained on all LegoHouse sequences on WoodBlock #1. The results are organized in a similar fashion. There are four categories:

1) tested on LegoHouse motion;
2) tested on LegoHouse structure;
3) tested on WoodBlock motion;
4) tested on WoodBlock structure.

We present the results of all converging sequences.

The Canny detector performed the best in two categories (motion-LH1 and motion-WB1) and had the highest convergence rate of 100%. These two observations are consistent with the observations from testing within scene type. We conclude that the Canny detector has the best motion testing performance with the most robust convergence. The Anisotropic, Bergholm,

TABLE XIII
TRAINED PARAMETER SETTINGS FOR THE ROTHWELL EDGE DETECTOR

TABLE XIV
TRAINED PARAMETER SETTINGS FOR THE SARKAR EDGE DETECTOR

and Heitger detectors performed similarly, ranking behind the Canny detector.

The Canny detector significantly outperformed the other detectors the greatest number of times (16 out of 24). The Anisotropic detector outperformed all other detectors in the LH-structure category. The SUSAN detector was significantly outperformed by the others 23 out of 24 times.

B. Three-Frame Sequences

This section describes the results of using two different SFM algorithms on three-frame sequences. Due to the difficulty of convergence for the Taylor algorithm on three-frame sequences, the adaptive parameter setting started with a finer initial parameter sampling. The results use nine LegoHouse three-frame sequences (Table V). All other experimental methods for train, test, and other analyses are identical to the procedures followed in the long sequence section.

1) Train: The train results of using three-frame sequences are shown in Table XIX. First, note the great performance


TABLE XV
TRAINED PARAMETER SETTINGS FOR THE SUSAN EDGE DETECTOR, WHERE n/a INDICATES THAT NO PARAMETER SETTING RESULTED IN SFM CONVERGENCE

TABLE XVI
RELATIVE TESTING WITHIN THE SAME SCENE PERFORMANCE

degradation between long sequences and three-frame sequences. For the Sarkar detector, the error with three-frame sequences was nearly five times greater. Second, given the

TABLE XVII
TEST ACROSS SCENE RESULTS OF ALL SEQUENCES AND THE CONVERGENCE RATE, WHERE THE RESULTS UNDER EACH COLUMN INDICATE THE PERFORMANCE WHEN TESTING ON THAT DATASET, CONVERGENCE IS SHOWN BY # OF CONVERGING SEQUENCES/# OF SEQUENCES IN THE SCENE TYPE, AND THE BEST IN EACH CATEGORY IS HIGHLIGHTED IN BOLDFACE. MOTION UNIT IS DISTANCE % DIFFERENCE, STRUCTURE UNIT IS ANGLE DIFFERENCE IN DEGREES

TABLE XVIII
RELATIVE TESTING ACROSS THE SCENE TYPE PERFORMANCE, WHERE EACH CELL INDICATES a/b: THE EDGE DETECTOR IN THE ROW PERFORMED BETTER THAN THE EDGE DETECTOR IN THE COLUMN IN a OF b CASES, AND * MARKS CASES WHERE THE EDGE DETECTOR IN THE ROW SIGNIFICANTLY OUTPERFORMED THE EDGE DETECTOR IN THE COLUMN

three-frame sequences, the edge detectors performed better with the Taylor algorithm than with the Weng algorithm. Third, the relative rankings of the results 1) using the Taylor algorithm with three-frame sequences and 2) using the Taylor algorithm with long sequences are the same. In addition, in the relative ranking among all three (including the Weng three-frame results), the Canny detector was first or second in all three, the Sarkar detector was the worst in all three, and the Canny detector and the Bergholm detector were close in ranking.


TABLE XIX
MOTION TRAINED RESULTS OF THE THREE-FRAME SEQUENCES, WITH MOTION UNIT IN DISTANCE % DIFFERENCE

TABLE XX
MOTION TEST RESULTS OF THE THREE-FRAME SEQUENCES, WITH MOTION UNIT IN DISTANCE % DIFFERENCE

2) Test: The test results are based on 72 test attempts (nine trained sequences × eight test sequences). First, all edge detectors suffered greatly in convergence rate (refer to Table XX); all went below 50%. Interestingly, the Canny detector, which showed the highest convergence rate in the long sequence study, plunged to nearly the worst convergence rate. Second, the Canny detector performed the best in all three categories, which again shows that the Canny detector has the best test results. Third, the relative test rankings of the results using the Taylor algorithm with long sequences and using the Taylor algorithm with three-frame sequences are the same. This indicates that the relative testing rankings are preserved for different sequence lengths. Fourth, compared to the results for long sequences, the edge detectors performed much (six to ten times) worse.

C. Effect of Line Characteristics on SFM Convergence

This section analyzes the reasons behind the performance levels of the edge detectors on the SFM task. Two factors could affect the SFM algorithm: the input to the SFM and the SFM setting. The line input is characterized by the number of lines, the number of correspondences for each line, and the length of the lines. The SFM algorithm's setting is the number of global iterations.

1) Line Input Characteristics: The average line characteristics of converging and nonconverging points are shown in Table XXI. Note that, averaged over all edge detectors, converging cases have more than twice as many lines as nonconverging cases. In addition, higher numbers of correspondences and longer lines are observed for converging cases. The Bergholm detector has the least difference between convergences and nonconvergences; in fact, its average line length is actually higher for nonconvergences.

In order to examine the role of the minimum criteria in nonconvergence, the nonconverging points are organized into four categories, as shown in Table XXII. Obviously, all the converging cases met the minimum criteria. Since the line correspondence program deletes lines with fewer than the minimum number of correspondences, the category failing both the correspondence and line criteria covers the cases where no lines were used as input to the SFM algorithm. This does not indicate that the edge detectors did not detect

TABLE XXI
EFFECT OF LINE CHARACTERISTICS ON CONVERGENCE

TABLE XXII
MEETING MINIMUM CRITERIA WITH NONCONVERGENCE (NUMBERS IN PERCENT)

any edges. It is possible that the lines formed from the edges could not meet the minimum line length or the minimum number of correspondences criteria. Note the percentage of nonconvergences coming from valid SFM inputs: except for the Anisotropic detector, all edge detectors have greater than 70% of nonconvergences from valid inputs. This raises several questions regarding the SFM algorithm. 1) Practical minimum criteria: even though the theoretical minimum criteria are satisfied, the recovery of motion is not guaranteed. 2) Practical number of SFM global iterations: the current train setting of 50 global iterations might not be enough for some of the valid inputs.

2) SFM Setting: There were instances where the algorithm was able to converge with more iterations. We examined three different aspects of whether 50 iterations for training and 1750 for testing are good enough.

First, the feasibility of executing the algorithm for a large number of iterations needs to be examined. With 50 global iterations, the algorithm consumes up to 10 min of CPU time on a Sun Ultra Sparc. Training attempts, on average, nearly 200 parameter settings on 18 image sequences for each edge detector with two different metrics (motion and structure), corresponding to roughly 200 × 18 × 2 = 7200 executions of the SFM algorithm and about 1200 h of CPU time per detector. This would be multiplied by a factor of two for every 50 extra global iterations for every edge detector. Without tremendous computing power, increasing the global iterations would be infeasible.

Second, consider the average number of iterations used in cases that did converge. The average over all SFM attempts during the train for all image sequences is computed for each edge detector:

1) Anisotropic (3.4);
2) Bergholm (3.5);
3) Canny (3.3);
4) Heitger (3.6);
5) Rothwell (3.6);
6) Sarkar (3.4);
7) Sobel (4.4).


TABLE XXIII
EFFECT OF LINE CHARACTERISTICS ON CONVERGENCE COMPARED WITH THE SOBEL

The average over all attempts for all edge detectors is 3.46. Note that this number differs from the average of the edge detectors' averages, since the edge detectors have different numbers of parameter attempts during the train. This indicates that if the SFM input is going to converge, it will converge within very few iterations.

Third, the effect of a higher number of iterations on the train and test performance is examined. We tested the Bergholm detector, which tends to have the most overlap between convergence and nonconvergence, by performing the adaptive parameter search with 100 iterations on one sequence (WoodBlock #1.A). We verified that the best parameter setting did not change with 100 iterations; therefore, there was no change in train or test performance.

3) Line Characteristic Analysis for the Sobel Detector: As mentioned at the beginning of the results section, the Sobel did not converge to any solution during training with the three WoodBlock sequences. Here we compare the line characteristics of the Sobel with those of the other edge detectors.

First, the data from the only three image sequences that the Sobel detector was tested with (WoodBlock #1, #2, and #3) are shown in Table XXIII. The characteristics of the Sobel's nonconvergences fall between the means of the convergences and nonconvergences of the other five edge detectors. Even with the minimum line length lowered to 25 pixels, the line characteristics could not reach the convergence means. Though the Sobel's performance cannot be totally explained by the line characteristics, it certainly strengthens the argument that more lines, longer lines, and more correspondences tend to converge more often.

    V. CONCLUSION

The Canny detector had the best test performance and the best robustness in convergence. It is also one of the faster-executing detectors. Thus, we conclude that it performs the best for the task of structure from motion. This conclusion is similar to that reached by Heath et al. [6] in the context of a human visual edge rating experiment, and by Bowyer et al. [38] in the context of ROC curve analysis.

The Bergholm detector had the best test-on-training performance in three categories (Table VI), while the Canny detector and the Rothwell detector were second. With separate test data, the Canny detector had the best performance across all image sequences for motion (Table VIII). In theory, once the optimal parameter setting for the image sequences is found, the Bergholm detector can achieve the best performance, since it was the best performer in test-on-train. However, in practice, the Canny detector performed better under any deviation from the training sequence. The SUSAN detector was consistently the worst performer in all categories: training, testing within the same scene type, testing across scene types, and convergence rates.

The results obtained by reducing the length of the sequences to the minimum (three frames) show that the relative rankings of the train and test results were identical. When the SFM algorithm was varied by using the Weng algorithm, the results were not as consistent. However, the Canny detector was still the best in the test results.

    ACKNOWLEDGMENT

The authors would like to thank C. Taylor and D. Kriegman for willingly answering numerous questions regarding the SFM algorithms. The authors would also like to thank M. Heath, S. Dougherty, C. Kranenburg, and S. Sarkar for many valuable discussions, and the authors of the edge detectors for making their implementations available. Early results of this work appear in [7], [8].

REFERENCES

[1] L. Kitchen and A. Rosenfeld, "Edge evaluation using local edge coherence," IEEE Trans. Syst., Man, Cybern., vol. SMC-11, pp. 597–605, Sept. 1981.

[2] T. Peli and D. Malah, "A study of edge detection algorithms," Comput. Graph. Image Process., vol. 20, no. 1, pp. 1–21, Sept. 1982.

[3] W. Lunscher and M. Beddoes, "Optimal edge detector evaluation," IEEE Trans. Syst., Man, Cybern., vol. SMC-16, pp. 304–312, Apr. 1986.

[4] S. Venkatesh and L. Kitchen, "Edge evaluation using necessary components," CVGIP: Graph. Models Image Understand., vol. 54, pp. 23–30, Jan. 1992.

[5] R. N. Strickland and D. K. Chang, "Adaptable edge quality metric," Opt. Eng., vol. 32, pp. 944–951, May 1993.

[6] M. Heath, S. Sarkar, T. Sanocki, and K. W. Bowyer, "A robust visual method for assessing the relative performance of edge detection algorithms," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 1338–1359, Dec. 1997.

[7] M. Shin, D. Goldgof, and K. W. Bowyer, "An objective comparison methodology of edge detection algorithms for structure from motion task," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., Santa Barbara, CA, 1998, pp. 190–195.

[8] M. Shin, D. Goldgof, and K. W. Bowyer, "An objective comparison methodology of edge detection algorithms for structure from motion task," in Empirical Evaluation Techniques in Computer Vision, K. W. Bowyer and P. J. Phillips, Eds. Los Alamitos, CA: IEEE Comput. Soc. Press, 1998, pp. 235–254.

[9] C. Taylor and D. Kriegman, "Structure and motion from line segments in multiple images," IEEE Trans. Pattern Anal. Machine Intell., vol. 17, pp. 1021–1032, Nov. 1995.

[10] J. Weng, Y. Liu, T. Huang, and N. Ahuja, "Structure from line correspondences: A robust linear algorithm and uniqueness theorems," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., 1988, pp. 387–392.

[11] J. R. Fram and E. S. Deutsch, "On the quantitative evaluation of edge detection schemes and their comparison with human performance," IEEE Trans. Comput., vol. C-24, pp. 616–628, June 1975.

[12] D. J. Bryant and D. W. Bouldin, "Evaluation of edge operators using relative and absolute grading," in Proc. IEEE Comput. Soc. Conf. Pattern Recognit. Image Process., Chicago, IL, Aug. 1979, pp. 138–145.

[13] I. E. Abdou and W. K. Pratt, "Quantitative design and evaluation of enhancement/thresholding edge detectors," Proc. IEEE, vol. 67, pp. 753–763, May 1979.

[14] G. B. Shaw, "Local and regional edge detectors: Some comparisons," Comput. Graph. Image Process., vol. 9, pp. 135–149, Feb. 1979.


[15] V. Ramesh and R. M. Haralick, "Performance characterization of edge detectors," SPIE Applicat. Artificial Intell. X: Mach. Vision Robot., vol. 1708, pp. 252–266, Apr. 1992.

[16] X. Y. Jiang, A. Hoover, G. Jean-Baptiste, D. Goldgof, K. Bowyer, and H. Bunke, "A methodology for evaluating edge detection techniques for range images," in Proc. Asian Conf. Comput. Vision, Singapore, Dec. 1995, pp. 415–419.

[17] M. Salotti, F. Bellet, and C. Garbay, "Evaluation of edge detectors: Critics and proposal," in Proc. Workshop Performance Characterization Vision Algorithms, Cambridge, U.K., Apr. 1996.

[18] Q. Zhu, "Efficient evaluations of edge connectivity and width uniformity," Image Vision Comput., vol. 14, pp. 21–34, Feb. 1996.

[19] J. Siuzdak, "A single filter for edge detection," Pattern Recognit., vol. 31, pp. 1681–1686, Nov. 1998.

[20] M. Brown, K. Blackwell, H. Khalak, G. Barbour, and T. Vogl, "Multi-scale edge detection and feature binding: An integrated approach," Pattern Recognit., vol. 31, pp. 1479–1490, Oct. 1998.

[21] B. Chanda, M. Kundu, and Y. Padmaja, "A multi-scale morphologic edge detector," Pattern Recognit., vol. 31, pp. 1469–1478, Oct. 1998.

[22] P. Tadrous, "A simple and sensitive method for directional edge detection in noisy images," Pattern Recognit., vol. 28, pp. 1575–1586, Oct. 1995.

[23] T. N. Tan, "Texture edge detection by modeling visual cortical channels," Pattern Recognit., vol. 28, pp. 1283–1298, Sept. 1995.

[24] J. Shen, "Multi-edge detection by isotropical 2-D ISEF cascade," Pattern Recognit., vol. 28, pp. 1871–1885, Dec. 1995.

[25] S. Iyengar and W. Deng, "An efficient edge detection algorithm using relaxation labeling technique," Pattern Recognit., vol. 28, pp. 519–536, Apr. 1995.

[26] D. Park, K. Nam, and R. Park, "Multiresolution edge detection techniques," Pattern Recognit., vol. 28, pp. 211–229, Feb. 1995.

[27] S. Ando, "Image field categorization and edge/corner detection from gradient covariance," IEEE Trans. Pattern Anal. Machine Intell., vol. 22, pp. 179–190, Feb. 2000.

[28] J. Elder and S. Zucker, "Local scale control for edge detection and blur estimation," IEEE Trans. Pattern Anal. Machine Intell., vol. 20, pp. 699–716, July 1998.

[29] I. Matalas, R. Benjamin, and R. Kitney, "An edge detection technique using the facet model and parameterized relaxation labeling," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 328–341, Apr. 1997.

[30] T. Law, H. Itoh, and H. Seki, "Image filtering, edge detection, and edge tracing using fuzzy reasoning," IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 481–491, May 1996.

[31] Z. Wang, K. Rao, and J. Ben-Arie, "Optimal ramp edge detection using expansion matching," IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 1092–1097, Nov. 1996.

[32] L. A. Iverson and S. W. Zucker, "Logical/linear operators for image curves," IEEE Trans. Pattern Anal. Machine Intell., vol. 17, pp. 982–996, Oct. 1995.

[33] F. Heijden, "Edge and line feature extraction based on covariance models," IEEE Trans. Pattern Anal. Machine Intell., vol. 17, pp. 16–33, Jan. 1995.

[34] R. Mehrotra and S. Zhan, "A computational approach to zero-crossing-based two-dimensional edge detection," Graph. Models Image Process., vol. 58, pp. 1–17, Jan. 1996.

[35] P. Papachristou, M. Petrou, and J. Kittler, "Edge postprocessing using probabilistic relaxation," IEEE Trans. Syst., Man, Cybern. B, vol. 30, pp. 383–402, June 2000.

[36] L. Ganesan and P. Bhattacharyya, "Edge detection in untextured and textured images: A common computational framework," IEEE Trans. Syst., Man, Cybern. B, vol. 27, pp. 823–834, Oct. 1997.

[37] G. Chen and Y. Yang, "Edge detection by regularized cubic B-spline fitting," IEEE Trans. Syst., Man, Cybern., vol. 25, pp. 636–643, Apr. 1995.

[38] K. W. Bowyer, C. Kranenburg, and S. Dougherty, "Edge detector evaluation using empirical ROC curves," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., vol. 1, Fort Collins, CO, June 1999, pp. 354–359.

[39] M. Shin, D. Goldgof, and K. W. Bowyer, "Comparison of edge detectors using an object recognition task," in Proc. IEEE Conf. Comput. Vision Pattern Recognit., vol. 1, Fort Collins, CO, June 1999, pp. 360–365.

[40] M. Black, G. Sapiro, D. Marimont, and D. Heeger, "Robust anisotropic diffusion," IEEE Trans. Image Processing, vol. 7, pp. 421–432, Mar. 1998.

[41] F. Bergholm, "Edge focusing," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 726–741, Nov. 1987.

[42] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, pp. 679–698, Nov. 1986.

[43] L. Rosenthaler, F. Heitger, O. Kubler, and R. von der Heydt, "Detection of general edges and keypoints," in Proc. ECCV, G. Sandini, Ed., 1992, pp. 78–86.

[44] C. A. Rothwell, J. L. Mundy, W. Hoffman, and V. D. Nguyen, "Driving vision by topology," in Proc. Int. Symp. Comput. Vision, Coral Gables, FL, Nov. 1995, pp. 395–400.

[45] S. Sarkar and K. L. Boyer, "Optimal infinite impulse response zero crossing based edge detectors," Comput. Vision, Graph., Image Process.: Image Understanding, vol. 54, pp. 224–243, Sept. 1991.

[46] I. E. Sobel, Camera Models and Machine Perception. Stanford, CA: Stanford Univ. Press, 1970, pp. 277–284.

[47] S. M. Smith and J. M. Brady, "SUSAN: A new approach to low level image processing," Int. J. Comput. Vision, vol. 23, no. 1, pp. 45–78, May 1997.

[48] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision. New York: McGraw-Hill, 1995.

[49] W. Press et al., Numerical Recipes in C: The Art of Scientific Computing. Cambridge, U.K.: Cambridge Univ. Press, 1992.

Min C. Shin (M'88) received the B.S. and M.S. degrees, with honors, in computer science from the University of South Florida (USF), Tampa, in 1992 and 1996, respectively. He is currently pursuing the Ph.D. degree at USF.

His research interests include nonrigid motion analysis and evaluation of algorithms.

    Mr. Shin is a member of ACM, UPE, and the Golden Key Honor Society.

Dmitry B. Goldgof (S'88–M'89–SM'93) received the M.S. degree in electrical engineering from Rensselaer Polytechnic Institute, Troy, NY, in 1985 and the Ph.D. degree in electrical engineering from the University of Illinois, Urbana-Champaign, in 1989.

He is currently a Professor in the Department of Computer Science and Engineering at the University of South Florida, Tampa. His research interests include motion analysis of rigid and nonrigid objects, computer vision, image processing and its biomedical applications, and pattern recognition. He has published 39 journal and 90 conference publications, 14 book chapters, and one book. He is currently the North American Editor for the Image and Vision Computing Journal and a member of the Editorial Board of Pattern Recognition.

Dr. Goldgof has served as General Chair for the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'98) and as an Area Chair for CVPR'99. He recently completed his term as Associate Editor for the IEEE TRANSACTIONS ON IMAGE PROCESSING. He is a member of the IEEE Computer Society, the IEEE Engineering in Medicine and Biology Society, SPIE (The International Society for Optical Engineering), the Pattern Recognition Society, and the honor societies Phi Kappa Phi and Tau Beta Pi.

Kevin W. Bowyer (F'91) received the Ph.D. degree in computer science from Duke University, Durham, NC.

He is currently a Professor in Computer Science and Engineering at the University of South Florida (USF), Tampa. He was previously on the faculty of the Department of Computer Science, Duke University, and at the Institute for Informatics of the Swiss Federal Technical Institute, Zurich. His current research interests include the general areas of image understanding, pattern recognition, and medical image analysis. From 1994 to 1998, he served as North American Editor of the Image and Vision Computing Journal. In addition, he is a member of the editorial boards of Computer Vision and Image Understanding, Machine Vision and Applications, and the International Journal of Pattern Recognition and Artificial Intelligence.

Dr. Bowyer served as Editor-in-Chief of the IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE from 1999 to 2000. He served as General Chair for the 1994 IEEE Computer Vision and Pattern Recognition Conference and as Chair of the IEEE Computer Society Technical Committee on Pattern Analysis and Machine Intelligence from 1995 to 1997.

Savvas Nikiforou received the Higher National Diploma in electrical engineering from the Higher Institute of Technology, Cyprus, in 1991 and the B.Sc. degree in computer engineering from the University of South Florida (USF), Tampa, in 1999, where he was also named the outstanding graduate of the Computer Science and Engineering Department. He is currently pursuing the M.S. degree in computer science at USF.

    His research interests include artificial intelligence.