
Chapter 2.1, The Handbook of Pattern Recognition and Computer Vision (2nd Edition), by C. H. Chen, L. F. Pau, P. S. P. Wang (eds.), pp. 207-248, World Scientific Publishing Co., 1998.

Texture Analysis

Mihran Tuceryan
Department of Computer and Information Science,
Indiana University - Purdue University at Indianapolis,
723 W. Michigan St., Indianapolis, IN 46202-5132

and

Anil K. Jain
Computer Science Department, Michigan State University,
East Lansing, MI 48824-1027 USA
Internet: [email protected]

This chapter reviews and discusses various aspects of texture analysis. The concentration is on the various methods of extracting textural features from images. The geometric, random field, fractal, and signal processing models of texture are presented. The major classes of texture processing problems such as segmentation, classification, and shape from texture are discussed. The possible application areas of texture such as automated inspection, document processing, and remote sensing are summarized. A bibliography is provided at the end for further reading.

Keywords: Texture, segmentation, classification, shape, signal processing, fractals, random fields, Gabor filters, wavelet transform, gray level dependency matrix.

1. Introduction

In many machine vision and image processing algorithms, simplifying assumptions are made about the uniformity of intensities in local image regions. However, images of real objects often do not exhibit regions of uniform intensities. For example, the image of a wooden surface is not uniform but contains variations of intensities which form certain repeated patterns called visual texture. The patterns can be the result of physical surface properties such as roughness or oriented strands which often have a tactile quality, or they could be the result of reflectance differences such as the color on a surface.

We recognize texture when we see it but it is very difficult to define. This difficulty is demonstrated by the number of different texture definitions attempted by vision researchers. Coggins [1] has compiled a catalogue of texture definitions in the computer vision literature and we give some examples here.

• "We may regard texture as what constitutes a macroscopic region. Its structure is simply attributed to the repetitive patterns in which elements or primitives are arranged according to a placement rule." [2]

• "A region in an image has a constant texture if a set of local statistics or other local


properties of the picture function are constant, slowly varying, or approximately periodic." [3]

• "The image texture we consider is nonfigurative and cellular... An image texture is described by the number and types of its (tonal) primitives and the spatial organization or layout of its (tonal) primitives... A fundamental characteristic of texture: it cannot be analyzed without a frame of reference of tonal primitive being stated or implied. For any smooth gray-tone surface, there exists a scale such that when the surface is examined, it has no texture. Then as resolution increases, it takes on a fine texture and then a coarse texture." [4]

• "Texture is defined for our purposes as an attribute of a field having no components that appear enumerable. The phase relations between the components are thus not apparent. Nor should the field contain an obvious gradient. The intent of this definition is to direct attention of the observer to the global properties of the display — i.e., its overall "coarseness," "bumpiness," or "fineness." Physically, nonenumerable (aperiodic) patterns are generated by stochastic as opposed to deterministic processes. Perceptually, however, the set of all patterns without obvious enumerable components will include many deterministic (and even periodic) textures." [5]

• "Texture is an apparently paradoxical notion. On the one hand, it is commonly used in the early processing of visual information, especially for practical classification purposes. On the other hand, no one has succeeded in producing a commonly accepted definition of texture. The resolution of this paradox, we feel, will depend on a richer, more developed model for early visual information processing, a central aspect of which will be representational systems at many different levels of abstraction. These levels will most probably include actual intensities at the bottom and will progress through edge and orientation descriptors to surface, and perhaps volumetric descriptors. Given these multi-level structures, it seems clear that they should be included in the definition of, and in the computation of, texture descriptors." [6]

• "The notion of texture appears to depend upon three ingredients: (i) some local 'order' is repeated over a region which is large in comparison to the order's size, (ii) the order consists in the nonrandom arrangement of elementary parts, and


FIGURE 1. (a) An image consisting of five different textured regions: cotton canvas (D77), straw matting (D55), raffia (D84), herringbone weave (D17), and pressed calf leather (D24) [8]. (b) The goal of texture classification is to label each textured region with the proper category label: the identities of the five texture regions present in (a). (c) The goal of texture segmentation is to separate the regions in the image which have different textures and identify the boundaries between them. The texture categories themselves need not be recognized. In this example, the five texture categories in (a) are identified as separate textures by the use of generic category labels (represented by the different fill patterns).


(iii) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region." [7]

This collection of definitions demonstrates that the "definition" of texture is formulated by different people depending upon the particular application and that there is no generally agreed upon definition. Some are perceptually motivated, and others are driven completely by the application in which the definition will be used.

Image texture, defined as a function of the spatial variation in pixel intensities (gray values), is useful in a variety of applications and has been a subject of intense study by many researchers. One immediate application of image texture is the recognition of image regions using texture properties. For example, in Figure 1(a), we can identify the five different textures and their identities as cotton canvas, straw matting, raffia, herringbone weave, and pressed calf leather. Texture is the most important visual cue in identifying these types of homogeneous regions. This is called texture classification. The goal of texture classification then is to produce a classification map of the input image where each uniform textured region is identified with the texture class it belongs to as shown in Figure 1(b). We could also find the texture boundaries even if we could not classify these textured surfaces. This is then the second type of problem that texture analysis research attempts to solve — texture segmentation. The goal of texture segmentation is to obtain the boundary map shown in Figure 1(c). Texture synthesis is often used for image compression applications. It is also important in computer graphics where the goal is to render object surfaces which are as realistic looking as possible. Figure 2 shows a set of synthetically generated texture images using Markov random field and fractal models [9]. The shape from texture problem is one instance of a general class of vision problems known as "shape from X". This was first formally pointed out in the perception literature by Gibson [10]. The goal is to extract three-dimensional shape information from various cues such as shading, stereo, and texture. The texture features (texture elements) are distorted due to

FIGURE 2. A set of example textures generated synthetically using only a small number of parameters. (a) Textures generated by discrete Markov random field models. (b) Four textures (in each of the four quadrants) generated by Gaussian Markov random field models. (c) Texture generated by a fractal model.


the imaging process and the perspective projection which provide information about surface orientation and shape. An example of shape from texture is given in Figure 3.

2. Motivation

Texture analysis is an important and useful area of study in machine vision. Most natural surfaces exhibit texture and a successful vision system must be able to deal with the textured world surrounding it. This section will review the importance of texture perception from two viewpoints — from the viewpoint of human vision or psychophysics and from the viewpoint of practical machine vision applications.

2.1. Psychophysics

The detection of a tiger among the foliage is a perceptual task that carries life and death consequences for someone trying to survive in the forest. The success of the tiger in camouflaging itself is a failure of the visual system observing it. The failure is in not being able to separate figure from ground. Figure-ground separation is an issue which is of intense interest to psychophysicists. The figure-ground separation can be based on various cues such as brightness, form, color, texture, etc. In the example of the tiger in the forest, texture plays a major role. The camouflage is successful because the visual system of the observer is unable to discriminate (or segment) the two textures of the foliage and the tiger skin. What are the visual processes that allow one to separate figure from ground using the texture cue? This question is the basic motivation among psychologists for studying texture perception.

Another reason why it is important to study the psychophysics of texture perception is that the performance of various texture algorithms is evaluated against the performance of the human visual system doing the same task. For example, consider the texture pair in Figure 4(a), first described by Julesz [11]. The image consists of two regions each of which is made up of different texture tokens. Close scrutiny of the texture image will indicate this fact to the human observer. The immediate perception of the image, however,

FIGURE 3. We can extract the orientation of the surface from the variations of texture (defined by the bricks) in this image.


does not result in the perception of two different textured regions; instead only one uniformly textured region is perceived. Julesz says that such a texture pair is not "effortlessly discriminable" or "preattentively discriminable." Such synthetic textures help us form hypotheses about what image properties are important in human texture perception. In addition, this example raises the question of how to evaluate the performance of computer algorithms that analyze textured images. For example, suppose we have an algorithm that can discriminate the texture pair in Figure 4(a). Is this algorithm "correct?" The answer, of course, depends on the goal of the algorithm. If it is a very special purpose algorithm that should detect such scrutably different regions, then it is performing correctly. On the other hand, if it is to be a computational model of how the human visual system processes texture, then it is performing incorrectly.

Julesz has studied texture perception extensively in the context of texture discrimination [11,12,13]. The question he posed was "When is a texture pair discriminable, given that they had the same brightness, contrast, and color?" Julesz concentrated on the spatial statistics of the image gray levels that are inherent in the definition of texture by keeping other illumination-related properties the same.

To discuss Julesz's pioneering work, we need to define the concepts of first- and second-order spatial statistics.

(i) First-order statistics measure the likelihood of observing a gray value at a randomly-chosen location in the image. First-order statistics can be computed from the histogram of pixel intensities in the image. These depend only on individual pixel values and not on the interaction or co-occurrence of neighboring pixel values. The average intensity in an image is an example of the first-order statistics.

(ii) Second-order statistics are defined as the likelihood of observing a pair of gray values occurring at the endpoints of a dipole (or needle) of random length placed in the image at a random location and orientation. These are properties of pairs of pixel values.
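To make these two notions concrete, the following sketch (our own illustration, not part of the chapter; the function names and the dipole-sampling scheme are assumptions) computes a first-order statistic from the histogram and samples gray value pairs at the endpoints of random dipoles:

import numpy as np

rng = np.random.default_rng(0)

def first_order_mean(image, levels=256):
    # First-order statistics come from the histogram of pixel intensities.
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    return (np.arange(levels) * p).sum()   # average intensity

def second_order_pairs(image, n_dipoles=10000, max_len=10):
    # Second-order statistics: gray value pairs at the endpoints of dipoles
    # of random length and orientation placed at random locations.
    rows, cols = image.shape
    pairs = []
    while len(pairs) < n_dipoles:
        r, c = rng.integers(rows), rng.integers(cols)
        length = rng.uniform(1, max_len)
        angle = rng.uniform(0, 2 * np.pi)
        r2 = int(round(r + length * np.sin(angle)))
        c2 = int(round(c + length * np.cos(angle)))
        if 0 <= r2 < rows and 0 <= c2 < cols:
            pairs.append((image[r, c], image[r2, c2]))
    return np.array(pairs)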

Julesz conjectured that two textures are not preattentively discriminable if their second-order statistics are identical. This is demonstrated by the example in Figure 4(a). This image consists of a pair of textured regions whose second-order statistics are identical. The two textured regions are not preattentively discriminable. His later counter-examples to this conjecture were the result of a careful construction of texture pairs that have identical second-order statistics (see Figure 4(b)) [14,15,16].

Julesz proposed the "theory of textons" to explain the preattentive discrimination of texture pairs. Textons are visual events (such as collinearity, terminations, closure, etc.) whose presence is detected and used in texture discrimination. Terminations are endpoints of line segments or corners. Using his theory of textons, Julesz explained the examples in Figure 4 as follows. Recall that both texture images in Figure 4 have two regions that have identical second-order statistics. In Figure 4(a), the number of terminations in both the upper and lower regions is the same (i.e., the texton information in the two regions is not different), therefore the visual system is unable to preattentively discriminate the two textures. In Figure 4(b), on the other hand, the number of terminations in the upper half is three, whereas the number of terminations in the lower half is four. The difference in this texton makes the two textured regions discriminable. Caelli has also proposed the existence of perceptual analyzers by the visual system for detecting textons [17]. Beck et al. [18] have conducted experiments and argue that the perception of texture segmentation in


certain types of patterns is primarily a function of spatial frequency analysis and not the result of higher level symbolic grouping processes.

Studies in psychophysiology have suggested that a multi-channel, frequency and orientation analysis of the visual image formed on the retina is performed by the brain. Campbell and Robson [19] performed psychophysical experiments using various grating patterns. They suggested that the visual system decomposes the image into filtered images of various frequencies and orientations. De Valois et al. [20] have studied the brain of the macaque monkey which is assumed to be close to the human brain in its visual processing. They recorded the response of the simple cells in the visual cortex of the monkey to sinusoidal gratings of various frequencies and orientations and concluded that these cells are tuned to narrow ranges of frequency and orientation. These studies have motivated vision researchers to apply multi-channel filtering approaches to texture analysis.

2.2. Applications

Texture analysis methods have been utilized in a variety of application domains. In some of the mature domains (such as remote sensing) texture already has played a major role, while in other disciplines (such as surface inspection) new applications of texture are being found. We will briefly review the role of texture in automated inspection, medical image processing, document processing, and remote sensing. Images from two application domains are shown in Figure 5. The role that texture plays in these examples varies depending upon the application. For example, in the SAR images of Figures 5(b) and (c), texture is defined to be the local scene heterogeneity and this property is used for classification of land use categories such as water, agricultural areas, etc. In the ultrasound image of the heart in Figure 5(a), texture is defined as the amount of randomness which has a lower value in the vicinity of the border between the heart cavity and the inner wall than in the blood filled cavity. This fact can be used to perform segmentation and boundary detection using texture analysis methods.

2.2.1. Inspection

There has been a limited number of applications of texture processing to automated inspection problems. These applications include defect detection in images of textiles and automated inspection of carpet wear and automobile paints.


FIGURE 4. Texture pairs with identical second-order statistics. The bottom halves of the images consist of texture tokens that are different from the ones in the top half. (a) Humans cannot perceive the two regions without careful scrutiny. (b) The two different regions are immediately discriminable by humans.


In the detection of defects in texture images, most applications have been in the domain of textile inspection. Dewaele et al. [21] used signal processing methods to detect point defects and line defects in texture images. They use sparse convolution masks in which the bank of filters is adaptively selected depending upon the image to be analyzed. Texture features are computed from the filtered images. A Mahalanobis distance classifier is used to classify the defective areas. Chetverikov [22] defined a simple window differencing operator applied to the texture features obtained from simple filtering operations. This allows one to detect the boundaries of defects in the texture. Chen and Jain [23] used a structural approach to defect detection in textured images. They extract a skeletal structure from images, and by detecting anomalies in certain statistical features in these skeletons, defects in the texture are identified. Conners et al. [24] utilized texture analysis methods to detect defects in lumber wood automatically. The defect detection is performed by dividing the image into subwindows and classifying each subwindow into one of the defect categories such as knot, decay, mineral streak, etc. The features they use to perform this classification are based on tonal features such as mean, variance, skewness, and kurtosis of gray levels along with texture features computed from gray level co-occurrence matrices in analyzing pictures of wood. The combination of using tonal features along with textural features improves the correct classification rates over using either type of feature alone.

In the area of quality control of textured images, Siew et al. [25] proposed a method for the assessment of carpet wear. They used simple texture features that are computed from second-order gray level dependency statistics and from first-order gray level difference statistics. They showed that the numerical texture features obtained from these techniques can characterize the carpet wear successfully. Jain et al. [26] used the texture features computed from a bank of Gabor filters to automatically classify the quality of painted metallic surfaces. A pair of automotive paint finish images is shown in Figure 6 where the image in (a) has a uniform coating of paint, but the image in (b) has a "mottle" or "blotchy" appearance.


FIGURE 5. Examples of images from various application domains in which texture analysis is important. (a) The ultrasound image of a heart. (b), (c) Example aerial images using SAR sensors.


2.2.2. Medical Image Analysis

Image analysis techniques have played an important role in several medical applications. In general, the applications involve the automatic extraction of features from the image which are then used for a variety of classification tasks, such as distinguishing normal tissue from abnormal tissue. Depending upon the particular classification task, the extracted features capture morphological properties, color properties, or certain textural properties of the image.

The textural properties computed are closely related to the application domain to be used. For example, Sutton and Hall [27] discuss the classification of pulmonary disease using texture features. Some diseases, such as interstitial fibrosis, affect the lungs in such a manner that the resulting changes in the X-ray images are texture changes as opposed to clearly delineated lesions. In such applications, texture analysis methods are ideally suited for these images. Sutton and Hall propose the use of three types of texture features to distinguish normal lungs from diseased lungs. These features are computed based on an isotropic contrast measure, a directional contrast measure, and a Fourier domain energy sampling. In their classification experiments, the best classification results were obtained using the directional contrast measure.

Harms et al. [28] used image texture in combination with color features to diagnose leukemic malignancy in samples of stained blood cells. They extracted texture micro-edges and "textons" between these micro-edges. The textons were regions with almost uniform color. They extracted a number of texture features from the textons including the total number of pixels in the textons which have a specific color, the mean texton radius and texton size for each color, and various texton shape features. In combination with color, the texture features significantly improved the correct classification rate of blood cell types compared to using only color features.

Landeweerd and Gelsema [29] extracted various first-order statistics (such as mean gray level in a region) as well as second-order statistics (such as gray level co-occurrence matrices) to differentiate different types of white blood cells. Insana et al. [30] used textural features in ultrasound images to estimate tissue scattering parameters. They made significant use of the knowledge about the physics of the ultrasound imaging process and tissue characteristics to design the texture model. Chen et al. [31] used fractal texture features to


FIGURE 6. Example images used in paint inspection. (a) A non-defective paint which has a smooth texture. (b) A defective paint which has a mottled look.


classify ultrasound images of livers, and used the fractal texture features to do edge enhancement in chest X-rays.

Lundervold [32] used fractal texture features in combination with other features (such as response to edge detector operators) to analyze ultrasound images of the heart (see Figure 7). The ultrasound images in this study are 128 × 432 time sequence images of the left ventricle of the heart. Figure 7 shows one frame in such a sequence. Texture is represented as an index at each pixel, being the local fractal dimension within an 11 × 11 window estimated according to the fractal Brownian motion model proposed by Chen et al. [31]. The texture feature is used in addition to a number of other traditional features, including the response to a Kirsch edge operator, the gray level, and the result of temporal operations. The fractal dimension is expected to be higher on an average in blood than in tissue due to the noise and backscatter characteristics of the blood, which is more disordered than that of solid tissue. In addition, the fractal dimension is low at non-random blood/tissue interfaces representing edge information.

2.2.3. Document Processing

One of the useful applications of machine vision and image analysis has been in the area of document image analysis and character recognition. Document processing has applications ranging from postal address recognition to analysis and interpretation of maps. In many postal document processing applications (such as the recognition of destination address and zip code information on envelopes), the first step is the ability to separate the regions in the image which contain useful information from the background.

Most image analysis methods proposed to date for document processing are based upon the characteristics of printed documents and try to take advantage of these properties. For example, generally newspaper print is organized in rectangular blocks and this fact is used


FIGURE 7. The processing of the ultrasound images of the heart using textural features. (a) An ultrasound image from the left ventricle of the heart. (b) The fractal dimension used as the texture feature. The fractal dimension is lower at the walls of the ventricle. (c) Image segmentation from a k-means clustering algorithm. The white region is cluster 2, which corresponds to the blood. The clustering uses four features, one of which is the fractal texture feature.


in a segmentation algorithm proposed in [33]. Many methods work on images based on precise algorithms which one might consider as having morphological characteristics. For example, Wang and Srihari [33] used projection profiles of the pixel values to identify large "text" blocks by detecting valleys in these profiles. Wahl et al. [34] used constrained run lengths and connected component analysis to detect blocks of text. Fletcher and Kasturi [35] used the fact that most text blocks lie in a straight line, and utilized Hough transform techniques to detect collinear elements. Taxt et al. [36] view the identification of print in document images as a two-category classification problem, where the categories are print and background. They use various classification methods to compute the segmentation including contextual classification and relaxation algorithms.

One can also use texture segmentation methods for preprocessing document images to identify regions of interest [37,38,39]. An example of this can be seen in Figure 8. The texture segmentation algorithm described in [40] was used to segment a newspaper image. In the resulting segmentation, one of the regions identified as having a uniform texture, which is different from its surrounding texture, is the bar code block. A similar method is used for locating text blocks in newspapers. A segmentation of the document image is obtained using three classes of textures: one class for the text regions, a second class for the uniform regions that form the background or images where intensities vary slowly, and a third class for the transition areas between the two types of regions (see Figure 9). The text regions are characterized by their high frequency content.


FIGURE 8. Locating a bar code in a newspaper image. (a) A scanned image of a newspaper that contains a bar code. (b) The two-class segmentation using Gabor filter features in [40]. The bar code region in the image has a distinct texture.


FIGURE 9. Text/graphics separation using texture information. (a) An image of a newspaper captured by a flatbed scanner. (b) The three-class segmentation obtained by the Gabor filter based texture segmentation algorithm. (c) The regions identified as text.


2.2.4. Remote Sensing

Texture analysis has been extensively used to classify remotely sensed images. Land use classification, where homogeneous regions with different types of terrains (such as wheat, bodies of water, urban regions, etc.) need to be identified, is an important application. Haralick et al. [41] used gray level co-occurrence features to analyze remotely sensed images. They computed gray level co-occurrence matrices for a distance of one with four directions (0°, 45°, 90°, and 135°). For a seven-class classification problem, they obtained approximately 80% classification accuracy using texture features.

Rignot and Kwok [42] have analyzed SAR images using texture features computed from gray level co-occurrence matrices. However, they supplement these features with knowledge about the properties of SAR images. For example, image restoration algorithms were used to eliminate the specular noise present in SAR images in order to improve classification results. The use of various texture features was studied for analyzing SAR images by Schistad and Jain [43]. SAR images shown in Figures 5(b) and (c) were used to identify land use categories of water, agricultural areas, urban areas, and other areas. Fractal dimension, autoregressive Markov random field model, and gray level co-occurrence texture features were used in the classification. The classification errors ranged from 25% for the fractal based models to as low as 6% for the MRF features. Du [44] used texture features derived from Gabor filters to segment SAR images. He successfully segmented the SAR images into categories of water, new forming ice, older ice, and multi-year ice. Lee and Philpot [45] also used spectral texture features to segment SAR images.

3. A Taxonomy of Texture Models

Identifying the perceived qualities of texture in an image is an important first step towards building mathematical models for texture. The intensity variations in an image which characterize texture are generally due to some underlying physical variation in the scene (such as pebbles on a beach or waves in water). Modelling this physical variation is very difficult, so texture is usually characterized by the two-dimensional variations in the intensities present in the image. This explains the fact that no precise, general definition of texture exists in the computer vision literature. In spite of this, there are a number of intuitive properties of texture which are generally assumed to be true.

• Texture is a property of areas; the texture of a point is undefined. So, texture is a contextual property and its definition must involve gray values in a spatial neighborhood. The size of this neighborhood depends upon the texture type, or the size of the primitives defining the texture.

• Texture involves the spatial distribution of gray levels. Thus, two-dimensional histograms or co-occurrence matrices are reasonable texture analysis tools.

• Texture in an image can be perceived at different scales or levels of resolution [10]. For example, consider the texture represented in a brick wall. At a coarse resolution, the texture is perceived as formed by the individual bricks in the wall; the interior details in the brick are lost. At a higher resolution, when only a few bricks are in the field of view, the perceived texture shows the details in the brick.

• A region is perceived to have texture when the number of primitive objects in the region is large. If only a few primitive objects are present, then a group of countable objects is perceived instead of a textured image. In other words, a texture is


perceived when significant individual "forms" are not present.

Image texture has a number of perceived qualities which play an important role in describing texture. Laws [47] identified the following properties as playing an important role in describing texture: uniformity, density, coarseness, roughness, regularity, linearity, directionality, direction, frequency, and phase. Some of these perceived qualities are not independent. For example, frequency is not independent of density, and the property of direction only applies to directional textures. The fact that the perception of texture has so many different dimensions is an important reason why there is no single method of texture representation which is adequate for a variety of textures.

3.1. Statistical Methods

One of the defining qualities of texture is the spatial distribution of gray values. The use of statistical features is therefore one of the early methods proposed in the machine vision literature. In the following, we will use $I(x, y)$, $0 \le x \le N - 1$, $0 \le y \le N - 1$, to denote an $N \times N$ image with $G$ gray levels. A large number of texture features have been proposed. But, these features are not independent as pointed out by Tomita and Tsuji [46]. The relationship between the various statistical texture measures and the input image is summarized in Figure 10 [46]. Picard [48] has also related the gray level co-occurrence matrices to the Markov random field models.

3.1.1. Co-occurrence Matrices

Spatial gray level co-occurrence estimates image properties related to second-order statistics. Haralick [10] suggested the use of gray level co-occurrence matrices (GLCM), which have become one of the most well-known and widely used texture features. The $G \times G$ gray level co-occurrence matrix $P_d$ for a displacement vector $d = (dx, dy)$ is defined as follows. The entry $(i, j)$ of $P_d$ is the number of occurrences of the pair of gray levels $i$ and $j$ which are a distance $d$ apart. Formally, it is given as

$P_d(i, j) = \left| \{ ((r, s), (t, v)) : I(r, s) = i, \; I(t, v) = j \} \right| \qquad (11.1)$

where $(r, s), (t, v) \in N \times N$, $(t, v) = (r + dx, s + dy)$, and $| \cdot |$ is the cardinality of a set.

FIGURE 10. The interrelation between the various second-order statistics and the input image [46]. The diagram relates the original image to its Fourier transform, co-occurrence matrix, autocorrelation function, power spectrum, difference statistics, and autoregression model. Reprinted by permission of Kluwer Academic Publishers.

As an example, consider the following image containing 3 different gray values

The gray level co-occurrence matrix for this image for a displacement vecto is given as follows:

Here the entry of is 4 because there are four pixel pairs of that are

set by amount. Examples of for other displacement vectors is given below

Notice that the co-occurrence matrix so defined is not symmetric. But a symmetricoccurrence matrix can be computed by the formula . The co-occurre

matrix reveals certain properties about the spatial distribution of the gray levels in theture image. For example, if most of the entries in the co-occurrence matrix are contrated along the diagonals, then the texture is coarse with respect to the displacevectord. Haralick has proposed a number of useful texture features that can be compfrom the co-occurrence matrix. Table 1 lists some of these features.

Here and are the means and and are the standard deviations of

, respectively, where and .

The co-occurrence matrix features suffer from a number of difficulties. There is noestablished method of selecting the displacement vector and computing co-occurmatrices for different values of is not feasible. For a given , a large number of feat

r s,( ) t v,( ), N N×∈ t v,( ) r dx+ s dy+,( )= .

4 4×

1 1 0 0

1 1 0 0

0 0 2 2

0 0 2 2

3 3×d 1 0,( )=

Pd

4 0 2

2 2 0

0 0 2

=

0 0,( ) Pd 0 0,( )1 0,( ) Pd

d 0 1,( )= Pd

4 2 0

0 2 0

0 0 2

=

d 1 1,( )= Pd

3 1 1

1 1 0

1 0 1

=

P Pd P d–+=

µx µy σx σy Pd x( )

Pd y( ) Pd x( ) Pd x j,( )j

∑= Pd y( ) Pd i y,( )i

∑=

dd d

13


can be computed from the co-occurrence matrix. This means that some sort of feature selection method must be used to select the most relevant features. The co-occurrence matrix-based texture features have also been primarily used in texture classification tasks and not in segmentation tasks.
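As an illustration of Equation (11.1) and of the features in Table 1, the following sketch (our own, not part of the chapter) computes a co-occurrence matrix by direct counting and reproduces the worked example above; it normalizes the counts before evaluating the features, a common convention:

import numpy as np

def glcm(image, d, levels):
    # Equation (11.1): count pairs I(r, s) = i, I(t, v) = j with
    # (t, v) = (r + dx, s + dy).
    dx, dy = d
    P = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = image.shape
    for r in range(rows):
        for s in range(cols):
            t, v = r + dx, s + dy
            if 0 <= t < rows and 0 <= v < cols:
                P[image[r, s], image[t, v]] += 1
    return P

def glcm_features(P):
    # Features from Table 1, computed on the normalized matrix.
    p = P / P.sum()
    i, j = np.indices(p.shape)
    eps = 1e-12                                  # avoid log(0)
    return {
        "energy":      (p ** 2).sum(),
        "entropy":    -(p * np.log(p + eps)).sum(),
        "contrast":    ((i - j) ** 2 * p).sum(),
        "homogeneity": (p / (1 + np.abs(i - j))).sum(),
    }

image = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 2, 2],
                  [0, 0, 2, 2]])
print(glcm(image, (1, 0), 3))    # reproduces the P_d of the worked example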

3.1.2. Autocorrelation Features

An important property of many textures is the repetitive nature of the placement of texture elements in the image. The autocorrelation function of an image can be used to assess the amount of regularity as well as the fineness/coarseness of the texture present in the image. Formally, the autocorrelation function of an image $I(x, y)$ is defined as follows:

$\rho(x, y) = \dfrac{\sum_{u=0}^{N} \sum_{v=0}^{N} I(u, v) \, I(u + x, v + y)}{\sum_{u=0}^{N} \sum_{v=0}^{N} I^2(u, v)} \qquad (11.2)$

The image boundaries must be handled with special care but we omit the details here. This function is related to the size of the texture primitive (i.e., the fineness of the texture). If the texture is coarse, then the autocorrelation function will drop off slowly; otherwise, it will drop off very rapidly. For regular textures, the autocorrelation function will exhibit peaks and valleys.
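A minimal sketch of Equation (11.2) follows (ours, not the chapter's; it handles the image boundaries by zero padding, one of several reasonable choices, and exploits the relation to the power spectrum discussed next):

import numpy as np

def autocorrelation(image):
    # rho(x, y) of Equation (11.2): correlate the image with itself and
    # normalize by the image energy. Zero padding makes the circular
    # correlation computed by the FFT equal to the linear correlation.
    I = image.astype(float)
    rows, cols = I.shape
    padded = np.zeros((2 * rows, 2 * cols))
    padded[:rows, :cols] = I
    F = np.fft.fft2(padded)
    corr = np.real(np.fft.ifft2(np.abs(F) ** 2))   # Wiener-Khinchin relation
    return corr[:rows, :cols] / (I ** 2).sum()

For a coarse texture the returned surface falls off slowly away from the origin; for a fine texture it falls off rapidly.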

The autocorrelation function is also related to the power spectrum of the Fourier transform (see Figure 10). Consider the image function $I(x, y)$ in the spatial domain and its Fourier transform $F(u, v)$. The quantity $|F(u, v)|^2$ is defined as the power spectrum, where $| \cdot |$ is the modulus of a complex number. The example in Figure 11 illustrates the effect of the directionality of a texture on the distribution of energy in the power spectrum. Early approaches using such spectral features would divide the frequency domain into rings (for frequency content) and wedges (for orientation content) as shown in Figure 12. The frequency domain is thus divided into regions and the total energy in each of these regions is computed as texture features:

$f_{r_1, r_2} = \int_{0}^{2\pi} \int_{r_1}^{r_2} |F(u, v)|^2 \, dr \, d\theta$

$f_{\theta_1, \theta_2} = \int_{\theta_1}^{\theta_2} \int_{0}^{\infty} |F(u, v)|^2 \, dr \, d\theta$

where $r = \sqrt{u^2 + v^2}$ and $\theta = \arctan(v / u)$.
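The ring and wedge features above can be sketched as follows (our own illustration; the number of rings and wedges and the discrete binning are arbitrary choices):

import numpy as np

def spectral_features(image, n_rings=4, n_wedges=4):
    # Power spectrum |F(u, v)|^2 with the zero frequency shifted to the center.
    F = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(F) ** 2
    rows, cols = image.shape
    u = np.arange(rows)[:, None] - rows // 2
    v = np.arange(cols)[None, :] - cols // 2
    r = np.hypot(u, v)                        # radial frequency
    theta = np.mod(np.arctan2(v, u), np.pi)   # fold symmetric orientations into [0, pi)
    r_edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    t_edges = np.linspace(0, np.pi, n_wedges + 1)
    rings  = [power[(r >= r_edges[k]) & (r < r_edges[k + 1])].sum()
              for k in range(n_rings)]
    wedges = [power[(theta >= t_edges[k]) & (theta < t_edges[k + 1])].sum()
              for k in range(n_wedges)]
    return rings, wedges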


FIGURE 11. Texture features from the power spectrum. (a) A texture image, and (b) its power spectrum. The directional nature of this texture is reflected in the directional distribution of energy in the power spectrum.

FIGURE 12. Texture features computed from the power spectrum of the image. (a) The energy computed in each shaded band is a texture feature indicating coarseness/fineness, and (b) the energy computed in each wedge is a texture feature indicating directionality.



3.2. Geometrical Methods

The class of texture analysis methods that falls under the heading of geometrical methods is characterized by their definition of texture as being composed of "texture elements" or primitives. The method of analysis usually depends upon the geometric properties of these texture elements. Once the texture elements are identified in the image, there are two major approaches to analyzing the texture. One computes statistical properties from the extracted texture elements and utilizes these as texture features. The other tries to extract the placement rule that describes the texture. The latter approach may involve geometric or syntactic methods of analyzing texture.

3.2.1. Voronoi Tessellation Features

Tuceryan and Jain [49] proposed the extraction of texture tokens by using the properties of the Voronoi tessellation of the given image. Voronoi tessellation has been proposed because of its desirable properties in defining local spatial neighborhoods and because the local spatial distributions of tokens are reflected in the shapes of the Voronoi polygons. First, texture tokens are extracted and then the tessellation is constructed. Tokens can be as simple as points of high gradient in the image or complex structures such as line segments or closed boundaries.

In computer vision, the Voronoi tessellation was first proposed by Ahuja as a model for defining "neighborhoods" [50]. Suppose that we are given a set $S$ of three or more tokens (for simplicity, we will assume that a token is a point) in the Euclidean plane. Assume that these points are not all collinear, and that no four points are cocircular. Consider an arbitrary pair of points $P$ and $Q$. The bisector of the line joining $P$ and $Q$ is the locus of points equidistant from both $P$ and $Q$ and divides the plane into two halves. The half plane $H_{PQ}$ ($H_{QP}$) is the locus of points closer to $P$ ($Q$) than to $Q$ ($P$). For any given point $P$, a set of such half planes is obtained for various choices of $Q$. The intersection $\bigcap_{Q \in S, Q \ne P} H_{PQ}$ defines a polygonal region consisting of points closer to $P$ than any other point. Such a region is called the Voronoi polygon [51] associated with the point. The set of complete polygons is called the Voronoi diagram of $S$ [52]. The Voronoi diagram together with the incomplete polygons in the convex hull define a Voronoi tessellation of the entire plane. Two points are said to be Voronoi neighbors if the Voronoi polygons enclosing them share a common edge. The dual representation of the Voronoi tessellation is the Delaunay graph, which is obtained by connecting all the pairs of points which are Voronoi neighbors as defined above. An optimal algorithm to compute the Voronoi tessellation for a point pattern is described by Preparata and Shamos [53]. A simple 2D dot pattern and its Voronoi tessellation are shown in Figure 13.

The neighborhood of a token $P$ is defined by the Voronoi polygon containing $P$. Many of the perceptually significant characteristics of a token's environment are manifest in the geometric properties of the Voronoi neighborhoods (see Figure 13). The geometric properties of the Voronoi polygons are used as texture features.

In order to apply geometrical methods to gray level images, we need to first extract tokens from images. We use the following simple algorithm to extract tokens from input gray level textural images.


1. Apply a Laplacian-of-Gaussian (LoG or $\nabla^2 G$) filter to the image. For computational efficiency, the $\nabla^2 G$ filter can be approximated with a difference of Gaussians (DoG) filter. The size of the DoG filter is determined by the sizes of the two Gaussian filters. Tuceryan and Jain used $\sigma_1 = 1$ for the first Gaussian and $\sigma_2 = 1.6 \sigma_1$ for the second. According to Marr, this is the ratio at which a DoG filter best approximates the corresponding $\nabla^2 G$ filter [54].

2. Select those pixels that lie on a local intensity maximum in the filtered image. A pixel in the filtered image is said to be on a local maximum if its magnitude is larger than six or more of its eight nearest neighbors. This results in a binary image. For example, applying steps 1 and 2 to the image in Figure 14(a) yields the binary image in Figure 14(b).

3. Perform a connected component analysis on the binary image using eight nearest neighbors. Each connected component defines a texture primitive (token) (see the sketch below).
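The three steps can be sketched as follows (our own illustration using SciPy; the wrap-around treatment of the image border in the neighbor comparison is a simplification):

import numpy as np
from scipy.ndimage import gaussian_filter, label

def extract_tokens(image, sigma1=1.0):
    # Step 1: DoG approximation to the LoG filter, with sigma2 = 1.6 * sigma1.
    I = image.astype(float)
    dog = gaussian_filter(I, sigma1) - gaussian_filter(I, 1.6 * sigma1)
    # Step 2: keep pixels whose magnitude exceeds six or more of their
    # eight nearest neighbors (borders handled by wrap-around here).
    count = np.zeros(dog.shape, dtype=np.int8)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                count += dog > np.roll(np.roll(dog, di, axis=0), dj, axis=1)
    peaks = count >= 6
    # Step 3: 8-connected components of the binary image are the tokens.
    labels, n_tokens = label(peaks, structure=np.ones((3, 3)))
    return labels, n_tokens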

The Voronoi tessellation of the resulting tokens is constructed. Features of each Voronoi cell are extracted and tokens with similar features are grouped to construct uniform texture regions. Moments of area of the Voronoi polygons serve as a useful set of features that reflect both the spatial distribution and shapes of the tokens in the textured image. The $(p + q)$th order moments of area of a closed region $R$ with respect to a token with coordinates $(x_0, y_0)$ are defined as [55]:

$m_{pq} = \int \int_R (x - x_0)^p (y - y_0)^q \, dx \, dy \qquad (11.3)$

where $p + q = 0, 1, 2, \ldots$. A description of the five features used is given in Table 2, where $(\bar{x}, \bar{y})$ are the coordinates of the Voronoi polygon's centroid.

TABLE 2. Voronoi polygon features used by the texture segmentation algorithm [49]. Here, $f_2$ gives the magnitude of the vector from the token to the polygon centroid, $f_3$ gives its direction, $f_4$ gives the overall elongation of the polygon ($f_4 = 0$ for a circle), and $f_5$ gives the orientation of its major axis.

$f_1 = m_{00}$
$f_2 = \sqrt{\bar{x}^2 + \bar{y}^2}$
$f_3 = \arctan(\bar{y} / \bar{x})$
$f_4 = \dfrac{\sqrt{(m_{20} - m_{02})^2 + 4 m_{11}^2}}{m_{20} + m_{02} + \sqrt{(m_{20} - m_{02})^2 + 4 m_{11}^2}}$
$f_5 = \arctan\left(\dfrac{2 m_{11}}{m_{20} - m_{02}}\right)$

The texture features based on Voronoi polygons have been used for segmentation of textured images. The segmentation algorithm is edge based, using a statistical comparison of the neighboring collections of tokens. A large dissimilarity among the texture features is evidence for a texture edge. This algorithm has successfully segmented gray level texture images as well as a number of synthetic textures with identical second-order statistics.
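The lowest-order quantities in Table 2 can be computed as in the following sketch (ours; it uses SciPy's Voronoi, skips unbounded border cells, and omits the higher moments for brevity):

import numpy as np
from scipy.spatial import Voronoi

def voronoi_cell_features(points):
    vor = Voronoi(points)
    feats = {}
    for t, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue                          # skip unbounded border cells
        V = vor.vertices[region]
        # Voronoi cells are convex, so sorting by angle about the vertex mean
        # yields a counter-clockwise ordering of the polygon.
        c = V.mean(axis=0)
        V = V[np.argsort(np.arctan2(V[:, 1] - c[1], V[:, 0] - c[0]))]
        x, y = V[:, 0], V[:, 1]
        cross = x * np.roll(y, -1) - np.roll(x, -1) * y
        A = 0.5 * cross.sum()                 # signed area: m00 up to sign
        cx = ((x + np.roll(x, -1)) * cross).sum() / (6 * A)
        cy = ((y + np.roll(y, -1)) * cross).sum() / (6 * A)
        dx, dy = cx - points[t][0], cy - points[t][1]
        feats[t] = (abs(A),                   # f1: polygon area
                    np.hypot(dx, dy),         # f2: token-to-centroid distance
                    np.arctan2(dy, dx))       # f3: its direction
    return feats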

FIGURE 13. Voronoi tessellation: (a) An example dot pattern, and (b) its Voronoi tessellation.



Figure 14(a) shows an example texture pair and Figure 14(c) shows the resulting segmentation.

3.2.2. Structural Methods

The structural models of texture assume that textures are composed of texture primitives. The texture is produced by the placement of these primitives according to certain placement rules. This class of algorithms, in general, is limited in power unless one is dealing with very regular textures. Structural texture analysis consists of two major steps: (a) extraction of the texture elements, and (b) inference of the placement rule.

There are a number of ways to extract texture elements in images. It is useful to define what is meant by texture elements in this context. Usually texture elements consist of regions in the image with uniform gray levels. Voorhees and Poggio [56] argued that blobs are important in texture perception. They have proposed a method based on filtering the image with Laplacian of Gaussian (LoG) masks at different scales and combining this information to extract the blobs in the image. Blostein and Ahuja [57] perform similar processing in order to extract texture tokens in images by examining the response of the LoG filter at multiple scales. They integrate their multi-scale blob detection with surface shape computation in order to improve the results of both processes. Tomita and Tsuji [46] also suggest a method of computing texture tokens by doing a medial axis transform on the connected components of a segmented image. They then compute a number of properties such as intensity and shapes of these detected tokens.


Zucker [58] has proposed a method in which he regards the observable textures (real textures) as distorted versions of ideal textures. The placement rule is defined for the ideal texture by a graph that is isomorphic to a regular or semiregular tessellation. These graphs are then transformed to generate the observable texture. Which of the regular tessellations is used as the placement rule is inferred from the observable texture. This is done by computing a two-dimensional histogram of the relative positions of the detected texture tokens.

Another approach to modeling texture by structural means is described by Fu [59]. In this approach the texture image is regarded as texture primitives arranged according to a placement rule. The primitive can be as simple as a single pixel that can take a gray value, but it is usually a collection of pixels. The placement rule is defined by a tree grammar. A texture is then viewed as a string in the language defined by the grammar whose terminal symbols are the texture primitives. An advantage of this method is that it can be used for texture generation as well as texture analysis. The patterns generated by the tree grammars could also be regarded as ideal textures in Zucker's model.

3.3. Model Based Methods

Model based texture analysis methods are based on the construction of an image model that can be used not only to describe texture, but also to synthesize it. The model parameters capture the essential perceived qualities of texture.


FIGURE 14. Texture segmentation using the Voronoi tessellation. (a) An example texture pair from Brodatz's album [92], (b) the peaks detected in the filtered image, and (c) the segmentation using the texture features obtained from Voronoi polygons [49]. The arrows indicate the border direction. The interior is on the right when looking in the direction of the arrow.


3.3.1. Random Field Models

Markov random fields (MRFs) have been popular for modeling images. They are able to capture the local (spatial) contextual information in an image. These models assume that the intensity at each pixel in the image depends on the intensities of only the neighboring pixels. MRF models have been applied to various image processing applications such as texture synthesis [60], texture classification [61,62], image segmentation [63,64], image restoration [65], and image compression.

The image is usually represented by an $N \times N$ lattice denoted by $L = \{(i, j) : 1 \le i \le M, 1 \le j \le N\}$. $I(i, j)$ is a random variable which represents the gray level at pixel $(i, j)$ on lattice $L$. The indexing of the lattice is simplified for mathematical convenience to $I_t$ with $t = (i - 1)N + j$. Let $A$ be the range set common to all random variables $I_t$ and let $\Omega = \{(x_1, x_2, \ldots, x_{MN}) : x_t \in A, \forall t\}$ denote the set of all labellings of $L$. Note that $A$ is specified according to the application. For instance, for an image with 256 different gray levels $A$ may be the set $\{0, 1, \ldots, 255\}$. The random vector $I = (I_1, I_2, \ldots, I_{MN})$ denotes a coloring of the lattice. A discrete Markov random field is a random field whose probability mass function has the properties of positivity, Markovianity, and homogeneity.

The neighbor set of a site $t$ can be defined in different ways. The first-order neighbors of $t$ are its four-connected neighbors and the second-order neighbors are its eight-connected neighbors. Within these neighborhoods, sets of neighbors which form cliques (single site, pairs, triples, and quadruples) are usually used in the definition of the conditional probabilities.

A discrete Gibbs random field (GRF) assigns a probability mass function to the entire lattice:

$P(X = x) = \dfrac{1}{Z} e^{-U(x)}, \quad \forall x \in \Omega \qquad (11.4)$

where $U(x)$ is an energy function and $Z$ is a normalizing constant called the partition function. The energy function is usually specified in terms of cliques formed over neighboring pixels. For the second-order neighbors the possible cliques are given in Figure 15. The energy function then is expressed in terms of potential functions $V_C(\cdot)$ over the cliques $Q$:

$U(x) = \sum_{c \in Q} V_c(x) \qquad (11.5)$

We have the property that with respect to a neighborhood system, there exists a unique Gibbs random field for every Markov random field and there exists a unique Markov random field for every Gibbs random field [66]. The consequence of this theorem is that one can model the texture either globally by specifying the total energy of the lattice or model it locally by specifying the local interactions of the neighboring pixels in terms of the conditional probabilities.


There are a number of ways in which textures are modeled using Gibbs random fields. Among these are the Derin-Elliott model [67] and the auto-binomial model [66, 60], which are defined by considering only the single pixel and pairwise pixel cliques in the second-order neighbors of a site. In both models the conditional probabilities are given by expressions of the following form:

$P(x_t \mid R_t) = \dfrac{1}{Z_t} e^{-w(x_t, R_t)^T \theta} \qquad (11.6)$

where $Z_t = \sum_{g \in A} e^{-w(g, R_t)^T \theta}$ is the normalization constant. The energy of the Gibbs random field is given by:

$U(x) = \dfrac{1}{2} \sum_{t=1}^{MN} w(x_t, R_t)^T \theta \qquad (11.7)$

where $w(x_t, R_t) = [w_1(x_t) \; w_2(x_t) \; w_3(x_t) \; w_4(x_t)]^T$ and $\theta = [\theta_1 \; \theta_2 \; \theta_3 \; \theta_4]^T$. The two models define the components of the vector $w$, $w_r(x_t)$, differently as follows:

Derin-Elliott model: $w_r(x_t) = I(x_t, x_{t-r}) + I(x_t, x_{t+r}), \quad 1 \le r \le 4$.

Auto-binomial model: $w_r(x_t) = x_t (x_{t-r} + x_{t+r}), \quad 1 \le r \le 4$.

Here $r$ is the index that defines the set of neighboring pixels of a site $t$, and $I(a, b)$ is an indicator function as follows:

$I(a, b) = \begin{cases} -1 & \text{if } a = b \\ 1 & \text{otherwise} \end{cases} \qquad (11.8)$

The vector $\theta$ is the set of parameters that define and model the textural properties of the image. In texture synthesis problems, the $\theta$ values are set to control the type of texture to be generated. In the classification and segmentation problems, the parameters need to be estimated in order to process the texture images. Textures were synthesized using this method by Cross and Jain [60]. Model parameters were also estimated for a set of natural textures. The estimated parameters were used to generate synthetic textures and the results

FIGURE 15. The clique types for the second-order neighborhood.


were compared to the original images. The models captured microtextures well, butfailed with regular and inhomogeneous textures.
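The conditional form of Eq. (11.6) can be sampled directly with a Gibbs sampler, which is one standard way to synthesize such textures. The following is a minimal Python sketch for a binary Derin-Elliott field; the lattice size, the theta values, the sweep count, and the toroidal boundary handling are illustrative assumptions, not values taken from this chapter.

    import numpy as np

    rng = np.random.default_rng(0)

    # Offsets of the four neighbor-pair directions in the second-order
    # neighborhood: horizontal, vertical, and the two diagonals.
    PAIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]

    def indicator(a, b):
        # I(a, b) = -1 if a == b, +1 otherwise (Eq. 11.8).
        return -1.0 if a == b else 1.0

    def local_energy(img, i, j, value, theta):
        # w(x_t, R_t)^T theta for a candidate value at site (i, j),
        # with toroidal (wrap-around) boundaries.
        M, N = img.shape
        e = 0.0
        for r, (di, dj) in enumerate(PAIRS):
            n1 = img[(i + di) % M, (j + dj) % N]
            n2 = img[(i - di) % M, (j - dj) % N]
            e += theta[r] * (indicator(value, n1) + indicator(value, n2))
        return e

    def gibbs_sample(shape=(32, 32), theta=(1.0, 1.0, -0.5, -0.5), sweeps=30):
        img = rng.integers(0, 2, size=shape)            # random initial coloring
        for _ in range(sweeps):
            for i in range(shape[0]):
                for j in range(shape[1]):
                    # P(x_t | R_t) over A = {0, 1}, Eq. (11.6).
                    u = np.array([local_energy(img, i, j, g, theta) for g in (0, 1)])
                    p = np.exp(-u)
                    img[i, j] = rng.choice(2, p=p / p.sum())
        return img

    texture = gibbs_sample()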

3.3.2. Fractals

Many natural surfaces have a statistical quality of roughness and self-similarity at different scales. Fractals are very useful and have become popular in modeling these properties in image processing. Mandelbrot [68] proposed fractal geometry and was the first to notice its existence in the natural world.

We first define a deterministic fractal in order to introduce some of the fundamental concepts. Self-similarity across scales in fractal geometry is a crucial concept. A deterministic fractal is defined using this concept of self-similarity as follows. Given a bounded set A in a Euclidean n-space, the set A is said to be self-similar when A is the union of N distinct (non-overlapping) copies of itself, each of which has been scaled down by a ratio of r. The fractal dimension D is related to the number N and the ratio r as follows:

    D = \frac{\log N}{\log (1/r)}    (11.9)

The fractal dimension gives a measure of the roughness of a surface. Intuitively, the larger the fractal dimension, the rougher the texture is. Pentland [69] has argued and given evidence that images of most natural surfaces can be modeled as spatially isotropic fractals. Most natural surfaces, and in particular textured surfaces, are not deterministic as described above but have a statistical variation. This makes the computation of the fractal dimension more difficult.

There are a number of methods proposed for estimating the fractal dimension D. One method is the estimation of the box dimension as follows [70]. Given a bounded set A in Euclidean n-space, consider boxes of size L_{max} on a side which cover the set A. A scaled down version of the set A by ratio r will result in N = 1/r^D similar sets. This new set can be covered by boxes of size L = r L_{max}. The number of such boxes is then related to the fractal dimension by

    N(L) = \frac{1}{r^D} = \left(\frac{L_{max}}{L}\right)^D    (11.10)

The fractal dimension is then estimated from Equation (11.10) by the following procedure. For a given L, divide the n-space into a grid of boxes of size L and count the number N(L) of boxes covering A. Repeat this procedure for different values of L. Then estimate the value of the fractal dimension D from the slope of the line

    \ln N(L) = -D \ln L + D \ln L_{max}    (11.11)

This can be accomplished by computing the least squares linear fit to the data, namely, a plot of \ln N(L) versus -\ln L.
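As a concrete illustration, the box-counting estimate of Eqs. (11.10)-(11.11) amounts to counting occupied boxes at several box sizes and fitting a line to the log-log data. A minimal Python sketch follows; the choice of box sizes and the example set are illustrative.

    import numpy as np

    def box_counting_dimension(A, sizes=(2, 4, 8, 16, 32)):
        # A: 2-D boolean array marking the points of the set.
        counts = []
        for L in sizes:
            # Trim so the grid tiles the array exactly, then test each LxL box.
            h, w = (A.shape[0] // L) * L, (A.shape[1] // L) * L
            blocks = A[:h, :w].reshape(h // L, L, w // L, L)
            counts.append(blocks.any(axis=(1, 3)).sum())
        # The slope of ln N(L) against -ln L estimates D (Eq. 11.11).
        D, _ = np.polyfit(-np.log(sizes), np.log(counts), 1)
        return D

    # Sanity check: a filled triangular half of the image should give D close to 2.
    A = np.add.outer(np.arange(256), -np.arange(256)) >= 0
    print(box_counting_dimension(A))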

An improved method of estimating the fractal dimension was proposed by Voss [71]. Assume we are estimating the fractal dimension of an image surface A. Let P(m, L) be the probability that there are m points within a box of side length L centered at an arbitrary point on the surface A. Let M be the total number of points in the image. When one overlays the image with boxes of side length L, the quantity (M/m) P(m, L) is the expected number of boxes with m points inside. The expected total number of boxes needed to cover the whole image is

    E[N(L)] = M \sum_{m=1}^{N} \frac{1}{m} P(m, L)    (11.12)

The expected value of N(L) is proportional to L^{-D} and thus can be used to estimate the fractal dimension D. Other methods have also been proposed for estimating the fractal dimension. For example, Super and Bovik [72] have proposed using Gabor filters and signal processing methods to estimate the fractal dimension in textured images.

The fractal dimension is not sufficient to capture all textural properties. It has been shown [70] that there may be perceptually very different textures that have very similar fractal dimensions. Therefore, another measure, called lacunarity [68, 71, 70], has been suggested in order to capture the textural property that will let one distinguish between such textures. Lacunarity is defined as

    \Lambda = E\left[\left(\frac{M}{E(M)} - 1\right)^2\right]    (11.13)

where M is the mass of the fractal set and E(M) is the expected value of the mass. Lacunarity measures the discrepancy between the actual mass and the expected value of the mass. Lacunarity is small when the texture is fine and it is large when the texture is coarse. The mass M(L) of the fractal set is related to the length L by the power law:

    M(L) = K L^D    (11.14)

Voss [71] suggested computing lacunarity from the probability distribution P(m, L) as follows. Let M(L) = \sum_{m=1}^{N} m P(m, L) and M_2(L) = \sum_{m=1}^{N} m^2 P(m, L). Then the lacunarity \Lambda is defined as:

    \Lambda(L) = \frac{M_2(L) - (M(L))^2}{(M(L))^2}    (11.15)

where M(L) is the expected value of the mass at scale L. This measure of the image is then used as a texture feature in order to perform texture segmentation or classification.
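The moments in Eq. (11.15) can be estimated directly from the box masses of the image. The short Python sketch below tiles the image with non-overlapping boxes rather than using Voss's full gliding-box counting, purely to keep the code compact; the box size is an illustrative choice.

    import numpy as np

    def lacunarity(img, L=8):
        # Tile the image with LxL boxes and compute the mass in each box.
        h, w = (img.shape[0] // L) * L, (img.shape[1] // L) * L
        boxes = img[:h, :w].reshape(h // L, L, w // L, L)
        mass = boxes.sum(axis=(1, 3)).ravel().astype(float)
        M1 = mass.mean()              # first moment of the box mass, M(L)
        M2 = (mass ** 2).mean()       # second moment, M2(L)
        return (M2 - M1 ** 2) / M1 ** 2   # Eq. (11.15)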

Ohanian and Dubes [73] have studied the performance of various texture features. They studied the texture features with the performance criterion "which features optimize the classification rate?" They compared four fractal features, sixteen co-occurrence features, four Markov random field features, and Gabor features. They used Whitney's forward selection method for feature selection. The evaluation was done on four classes of images: Gauss Markov random field images, fractal images, leather images, and painted surfaces.


The co-occurrence features generally outperformed the other features (88% correct classification), followed by fractal features (84% classification). Using both fractal and co-occurrence features improved the classification rate to 91%. Their study did not compare the texture features in segmentation tasks. It also used the energy from the raw Gabor filtered images instead of using the empirical nonlinear transformation needed to obtain the texture features as suggested in [40] (see also Section 3.4.3).
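For concreteness, the co-occurrence features referred to above are derived from the gray level dependency (co-occurrence) matrix. A minimal Python sketch for one displacement is given below; the displacement (0, 1), the quantization to 8 levels, and the two derived features (energy and contrast) are illustrative choices, not the exact sixteen features used in [73].

    import numpy as np

    def cooccurrence(img, levels=8, d=(0, 1)):
        # Quantize to `levels` gray levels and count pairs
        # (q[i, j], q[i + di, j + dj]); di, dj are assumed non-negative here.
        q = np.minimum((img.astype(float) / (img.max() + 1e-12) * levels).astype(int),
                       levels - 1)
        di, dj = d
        src = q[:q.shape[0] - di, :q.shape[1] - dj]
        dst = q[di:, dj:]
        P = np.zeros((levels, levels))
        np.add.at(P, (src.ravel(), dst.ravel()), 1)
        return P / P.sum()                 # normalize to joint probabilities

    def glcm_features(P):
        i, j = np.indices(P.shape)
        energy = (P ** 2).sum()            # angular second moment
        contrast = ((i - j) ** 2 * P).sum()
        return energy, contrast

    img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
    print(glcm_features(cooccurrence(img)))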

3.4. Signal Processing Methods

Psychophysical research has given evidence that the human brain does a frequency analysis of the image [19, 74]. Texture is especially suited for this type of analysis because of its properties. This section will review the various techniques of texture analysis that rely on signal processing techniques. Most techniques try to compute certain features from filtered images which are then used in either classification or segmentation tasks.

3.4.1. Spatial Domain Filters

Spatial domain filters are the most direct way to capture image texture properties. Earlier attempts at defining such methods concentrated on measuring the edge density per unit area. Fine textures tend to have a higher density of edges per unit area than coarser textures. The measurement of edgeness is usually computed by simple edge masks such as the Roberts operator or the Laplacian operator [10, 47]. The two orthogonal masks for the Roberts operator and one digital realization of the Laplacian are given below:

    M_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad L = \begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}

The edgeness measure can be computed over an image area by computing a magnitude from the responses of the Roberts masks or from the response of the Laplacian mask.

Malik and Perona [75] proposed spatial filtering to model the preattentive texture perception in the human visual system. Their proposed model consists of three stages: (i) convolution of the image with a bank of even-symmetric filters followed by half-wave rectification, (ii) inhibition of spurious responses in a localized area, and (iii) detection of the boundaries between the different textures. The even-symmetric filters they used consist of differences of offset Gaussian (DOOG) functions. The half-wave rectification and inhibition (implemented as a leaders-take-all strategy) are methods of introducing a nonlinearity into the computation of texture features. A nonlinearity is needed in order to discriminate texture pairs with identical mean brightness and identical second-order statistics. The texture boundary detection is done by a straightforward edge detection method applied to the feature images obtained from stage (ii). This method works on a variety of texture examples and is able to discriminate natural as well as synthetic textures with carefully controlled properties. Unser and Eden [76] have also looked at texture features that are obtained from spatial filters and a nonlinear operator. Reed and Wechsler [77] review a number of spatial/spatial-frequency domain filter techniques for segmenting textured images.
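Returning to the edgeness measure, the masks above can be applied by convolution and the response magnitude averaged over a local window to obtain an edges-per-unit-area feature. A minimal Python sketch follows; the window size is an illustrative choice.

    import numpy as np
    from scipy.signal import convolve2d

    M1 = np.array([[1, 0], [0, -1]], float)       # Roberts masks
    M2 = np.array([[0, 1], [-1, 0]], float)
    LAP = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)

    def edgeness(img, window=15, use_laplacian=False):
        if use_laplacian:
            mag = np.abs(convolve2d(img, LAP, mode='same', boundary='symm'))
        else:
            g1 = convolve2d(img, M1, mode='same', boundary='symm')
            g2 = convolve2d(img, M2, mode='same', boundary='symm')
            mag = np.hypot(g1, g2)                # Roberts gradient magnitude
        # Average the magnitude over a local window: edge density per unit area.
        box = np.ones((window, window)) / window ** 2
        return convolve2d(mag, box, mode='same', boundary='symm')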


Another set of spatial filters is based on spatial moments [47]. The (p+q)th order moments over an image region R are given by the formula

    m_{pq} = \sum_{(x, y) \in R} x^p y^q I(x, y)    (11.16)

If the region R is a local rectangular area and the moments are computed around each pixel in the image, then this is equivalent to filtering the image by a set of spatial masks. The resulting filtered images that correspond to the moments are then used as texture features. The masks are obtained by defining a window of size W \times W and a local coordinate system centered within the window. Let (i, j) be the image coordinates at which the moments are computed. For pixel coordinates (m, n) which fall within the W \times W window centered at (i, j), the normalized coordinates (x_m, y_n) are given by:

    x_m = \frac{m - i}{W/2}, \quad y_n = \frac{n - j}{W/2}    (11.17)

Then the moments within a window centered at pixel (i, j) are computed by the sum in Equation (11.16) that uses the normalized coordinates:

    m_{pq} = \sum_{m = -W/2}^{W/2} \sum_{n = -W/2}^{W/2} I(m, n) \, x_m^p y_n^q    (11.18)

The coefficients for each pixel within the window used to evaluate the sum are what define the mask coefficients. If R is a 3 \times 3 region, then the resulting masks are given below:

    M_{00} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad M_{10} = \begin{pmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \quad M_{01} = \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix},

    M_{20} = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \quad M_{11} = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \quad M_{02} = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \\ 1 & 0 & 1 \end{pmatrix}

The moment-based features have been used successfully in texture segmentation [78]. An example texture pair and the segmentation are shown in Figure 16.
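The masks above can be generated mechanically from Eqs. (11.17)-(11.18). The short Python sketch below builds the W × W mask for moment (p, q) from the normalized window coordinates; dividing by the half-width floor(W/2) rather than W/2 reproduces the integer-valued 3 × 3 masks listed above, and W is assumed odd.

    import numpy as np

    def moment_mask(p, q, W=3):
        half = W // 2                      # W assumed odd: central pixel exists
        coords = np.arange(-half, half + 1, dtype=float) / max(half, 1)
        # x varies down the rows, y across the columns, as in the masks above.
        x, y = np.meshgrid(coords, coords, indexing='ij')
        return (x ** p) * (y ** q)         # mask coefficients x^p * y^q

    print(moment_mask(1, 0))   # reproduces M10
    print(moment_mask(1, 1))   # reproduces M11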

3.4.2. Fourier domain filtering

The frequency analysis of the textured image is best done in the Fourier domain. As the psychophysical results indicated, the human visual system analyzes the textured images by decomposing the image into its frequency and orientation components [19]. The multiple channels tuned to different frequencies are also referred to as multi-resolution processing in the literature. The concept of multi-resolution processing is further refined and developed in the wavelet model described below. Along the lines of these psychophysical results, texture analysis systems have been developed that perform filtering in the Fourier domain to obtain feature images. The idea is similar to the features computed from the rings and wedges as described in Section 3.1.2, except that the phase information is kept. Coggins and Jain [79] used a set of frequency and orientation selective filters in a multi-channel filtering approach. Each filter is either frequency selective or orientation selective.

There are four orientation filters centered at 0°, 45°, 90°, and 135°. The number of frequency selective filters depends on the image size. For an image of size 128 \times 128, seven filters with center frequencies at 1, 2, 4, 8, 16, 32, and 64 cycles/image were used. They were able to successfully segment and classify a variety of natural images as well as synthetic texture pairs described by Julesz with identical second-order statistics (see Figure 4).

3.4.3. Gabor and Wavelet models

The Fourier transform is an analysis of the global frequency content in the signal. Many applications require the analysis to be localized in the spatial domain. This is usually handled by introducing spatial dependency into the Fourier analysis. The classical way of doing this is through what is called the window Fourier transform. The window Fourier transform (or short-time Fourier transform) of a one-dimensional signal f(x) is defined as:

    F_w(u, \xi) = \int_{-\infty}^{\infty} f(x) \, w(x - \xi) \, e^{-j 2\pi u x} \, dx    (11.19)

When the window function w(x) is Gaussian, the transform becomes a Gabor transform. The limits on the resolution in the time and frequency domains of the window Fourier transform are determined by the time-bandwidth product or the Heisenberg uncertainty inequality given by:

    \Delta t \, \Delta u \ge \frac{1}{4\pi}    (11.20)

Once a window is chosen for the window Fourier transform, the time-frequency resolution is fixed over the entire time-frequency plane. To overcome the resolution limitation of the window Fourier transform, one lets \Delta t and \Delta u vary in the time-frequency domain. Intuitively, the time resolution must increase as the central frequency of the analyzing filter is increased. That is, the relative bandwidth is kept constant on a logarithmic scale. This is accomplished by using a window whose width changes as the frequency changes.
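A direct numerical reading of Eq. (11.19) with a Gaussian window, i.e. a sampled Gabor transform of a one-dimensional signal, is sketched below in Python. The test signal, the window width, and the evaluation points are illustrative.

    import numpy as np

    # Test signal: 0.05 cycles/sample in the first half, 0.15 in the second.
    x = np.arange(512)
    f = np.sin(2 * np.pi * 0.05 * x) * (x < 256) + np.sin(2 * np.pi * 0.15 * x) * (x >= 256)

    def window_fourier(f, u, xi, sigma=16.0):
        # F_w(u, xi) of Eq. (11.19) with a Gaussian window w(x - xi).
        x = np.arange(f.size)
        w = np.exp(-0.5 * ((x - xi) / sigma) ** 2)
        return np.sum(f * w * np.exp(-2j * np.pi * u * x))

    # |F_w| localizes each frequency to the half of the signal where it lives.
    print(abs(window_fourier(f, 0.05, 128)), abs(window_fourier(f, 0.05, 384)))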

FIGURE 16. The segmentation results using moment based texture features. (a) A texture pair consisting of reptile skin and herringbone pattern from the Brodatz album [8]. (b) The resulting segmentation.


Recall that when a function f(t) is scaled in time by a, which is expressed as f(at), the function is contracted if a > 1 and it is expanded when a < 1. Using this fact, the wavelet transform can be written as:

    W_{f,a}(u, \xi) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t) \, h^*\!\left(\frac{t - \xi}{a}\right) dt    (11.21)

Here, the impulse response of the filter bank is defined to be scaled versions of the same prototype function h(t). Now, setting

    h(t) = w(t) \, e^{-j 2\pi u t}    (11.22)

in Equation (11.21), we obtain the wavelet model for texture analysis. Usually the scaling factor a will be based on the frequency of the filter.

Daugman [80] proposed the use of Gabor filters in the modeling of the receptive fields of simple cells in the visual cortex of some mammals. The proposal to use the Gabor filters in texture analysis was made by Turner [81] and Bovik et al. [82]. Later Farrokhnia and Jain used it successfully in segmentation and classification of textured images [40, 83]. Gabor filters have some desirable optimality properties. Daugman [84] showed that for two-dimensional Gabor functions the uncertainty relations \Delta x \Delta u \ge \pi/4 and \Delta y \Delta v \ge \pi/4 attain the minimum value. Here \Delta x and \Delta y are effective widths in the spatial domain and \Delta u and \Delta v are effective bandwidths in the frequency domain.

A two-dimensional Gabor function consists of a sinusoidal plane wave of a certain frequency and orientation modulated by a Gaussian envelope. It is given by:

    f(x, y) = \exp\left(-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right) \cos(2\pi u_0 x + \phi)    (11.23)

where u_0 and \phi are the frequency and phase of the sinusoidal wave. The values \sigma_x and \sigma_y are the sizes of the Gaussian envelope in the x and y directions, respectively. The Gabor function at an arbitrary orientation \theta_0 can be obtained from Equation (11.23) by a rigid rotation of the x-y plane by \theta_0.

The Gabor filter is a frequency and orientation selective filter. This can be seen from the Fourier domain analysis of the function. When the phase \phi is 0, the Fourier transform of the resulting even-symmetric Gabor function f(x, y) is given by

    F(u, v) = A \left[ \exp\left(-\frac{1}{2}\left(\frac{(u - u_0)^2}{\sigma_u^2} + \frac{v^2}{\sigma_v^2}\right)\right) + \exp\left(-\frac{1}{2}\left(\frac{(u + u_0)^2}{\sigma_u^2} + \frac{v^2}{\sigma_v^2}\right)\right) \right]    (11.24)


where \sigma_u = 1/(2\pi\sigma_x), \sigma_v = 1/(2\pi\sigma_y), and A = 2\pi\sigma_x\sigma_y. This function is real-valued and has two lobes in the spatial frequency domain, one centered around u_0 and another centered around -u_0. For a Gabor filter of a particular orientation, the lobes in the frequency domain are also appropriately rotated.

Jain and Farrokhnia [40] used a version of the Gabor transform in which the window sizes for computing the Gabor filters are selected according to the central frequencies of the filters. The texture features were obtained as follows:

(a) Use a bank of Gabor filters at multiple scales and orientations to obtain filtered images. Let the filtered image for the ith filter be r_i(x, y).

(b) Pass each filtered image through a sigmoidal nonlinearity. This nonlinearity \psi(t) has the form of \tanh(\alpha t). The choice of the value of \alpha is determined empirically.

(c) The texture feature for each pixel is computed as the absolute average deviation of the transformed values of the filtered images from the mean within a window W of size M \times M. The filtered images have zero mean; therefore, the ith texture feature image e_i(x, y) is given by the equation:

    e_i(x, y) = \frac{1}{M^2} \sum_{(a, b) \in W} |\psi(r_i(a, b))|    (11.25)

The window size M is also determined automatically based on the central frequency of the filter. An example texture image and some intermediate results are shown in Figure 17. Texture features using Gabor filters were used successfully in texture segmentation and texture classification tasks. An example of the resulting segmentation is shown in Figure 18. Further details of the segmentation algorithm are explained in Section 4.1.
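A condensed Python sketch of steps (a)-(c) for a single filter is given below: convolve with an even-symmetric Gabor function (Eq. 11.23), pass the response through tanh(alpha * t), and average the absolute values over an M × M window (Eq. 11.25). The parameter values (u0, theta0, sigma, alpha, M) and the isotropic Gaussian envelope are illustrative simplifications, not the automatically selected settings of [40].

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(u0, theta0, sigma=4.0, size=31):
        half = size // 2
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        xr = xx * np.cos(theta0) + yy * np.sin(theta0)   # rotate the x-y plane
        env = np.exp(-0.5 * (xx ** 2 + yy ** 2) / sigma ** 2)
        return env * np.cos(2 * np.pi * u0 * xr)         # even-symmetric Gabor

    def gabor_feature(img, u0=0.125, theta0=0.0, alpha=0.25, M=17):
        r = convolve2d(img - img.mean(), gabor_kernel(u0, theta0),
                       mode='same', boundary='symm')     # filtered image r_i(x, y)
        psi = np.tanh(alpha * r)                         # sigmoidal nonlinearity
        box = np.ones((M, M)) / M ** 2
        return convolve2d(np.abs(psi), box, mode='same', boundary='symm')  # e_i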

4. Texture Analysis Problems

The various methods for modeling textures and extracting texture features can be applied in four broad categories of problems: texture segmentation, texture classification, texture synthesis, and shape from texture. We now review these four areas.

4.1. Texture Segmentation

Texture segmentation is a difficult problem because one usually does not know a priori what types of textures exist in an image, how many different textures there are, and which regions in the image have which textures. In fact, one does not need to know which specific textures exist in the image in order to do texture segmentation. All that is needed is a way to tell that two textures (usually in adjacent regions of the images) are different.

The two general approaches to performing texture segmentation are analogous to methods for image segmentation: region-based approaches or boundary-based approaches. In a region-based approach, one tries to identify regions of the image which have a uniform texture. Pixels or small local regions are merged based on the similarity of some texture property. The regions having different textures are then considered to be segmented regions. This method has the advantage that the boundaries of regions are always closed and, therefore, the regions with different textures are always well separated. It has the disadvantage, however, that in many region-based segmentation methods, one has to specify the number of distinct textures present in the image in advance. In addition, thresholds on similarity values are needed.

The boundary-based approaches are based upon the detection of differences in texture in adjacent regions. Thus boundaries are detected where there are differences in texture. In this method, one does not need to know the number of textured regions in the image in advance. However, the boundaries may have gaps, and two regions with different textures are not identified as separate closed regions. Strictly speaking, the boundary-based methods result in segmentation only if all the boundaries detected form closed curves.

FIGURE 17. Filtering results on an example texture image. (a) Input image of five textures. (b), (c) A subset of the filtered images, each filtered with a Gabor filter with the following parameters: the filter in (b) has a central frequency at 16 cycles/image-width and 135° orientation; the filter in (c) has a central frequency of 32 cycles/image-width and 0° orientation. (d), (e) The feature images obtained corresponding to the filtered images in (b) and (c). The filtered image in (b) shows a lot of activity in the textured region of the top right quadrant and the image in (c) shows activity in the textured region of the top left quadrant. These are reflected in the feature images in (d) and (e).


Boundary-based segmentation of textured images has been used by Tuceryan and Jain [49], Voorhees and Poggio [56], and Eom and Kashyap [85]. In all cases, the edges (or texture boundaries) are detected by taking two adjacent windows and deciding whether the textures in the two windows belong to the same texture or to different textures. If it is decided that the two textures are different, the point is marked as a boundary pixel. Du Buf et al. [86] studied and compared the performance of various texture segmentation techniques and their ability to localize the boundaries.

Tuceryan and Jain [49] use the texture features computed from the Voronoi polygons in order to compare the textures in the two windows. The comparison is done using a Kolmogorov-Smirnov test. A probabilistic relaxation labeling, which enforces border smoothness, is used to remove isolated edge pixels and fill boundary gaps. Voorhees and Poggio extract blobs and elongated structures from images (they suggest that these correspond to Julesz's textons). The texture properties are based on blob characteristics such as their sizes, orientations, etc. They then decide whether the two sides of a pixel have the same texture using a statistical test called maximum frequency difference (MFD). The pixels where this statistic is sufficiently large are considered to be boundaries between different textures.

Jain and Farrokhnia [40] give an example of integrating a region-based and a boundary-based method to obtain a cleaner and more robust texture segmentation method. They use the texture features computed from the bank of Gabor filters to perform a region-based segmentation. This is accomplished by the following steps:

(a) Gabor features are calculated from the input image, yielding several feature images.

(b) A cluster analysis is performed in the Gabor feature space on a subset of randomly selected pixels in the input image (this is done in order to increase computational efficiency; about 6% of the total number of pixels in the image are selected). The number k of clusters is specified for doing the cluster analysis. This is set to a value larger than the true number of clusters and thus the image is oversegmented.

(c) Step (b) assigns a cluster label to the pixels (patterns) involved in the cluster analysis. These labelled patterns are used as the training set and all the pixels in the image are classified into one of the k clusters. A minimum distance classifier is used. This results in a complete segmentation of the image into uniform textured regions.

(d) A connected component analysis is performed to identify each segmented region.

(e) A boundary-based segmentation is performed by applying the Canny edge detector on each feature image. The magnitude of the Canny edge detector for each feature image is summed up for each pixel to obtain a total edge response. The edges are then detected based on this total magnitude.

(f) The edges so detected are then combined with the region-based segmentation results to obtain the final texture segmentation.

The integration of the boundary-based and region-based segmentation results improves the resulting segmentation in most cases. For an example of this improvement see Figure 18.
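A compressed Python sketch of the region-based part of this pipeline, steps (a)-(c), is given below: cluster a random subset of pixels in feature space with k set deliberately high, then label every pixel with a minimum distance classifier. The `features` array is assumed to be of shape (H, W, n_features), e.g. stacked Gabor feature images; k-means is written out directly to keep the sketch self-contained, and the edge combination of steps (e)-(f) is omitted.

    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = X[labels == c].mean(axis=0)
        return centers

    def segment(features, k=7, sample_frac=0.06, seed=0):
        H, W, F = features.shape
        X = features.reshape(-1, F).astype(float)
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), int(sample_frac * len(X)), replace=False)
        centers = kmeans(X[idx], k)       # step (b): cluster ~6% of the pixels
        # Step (c): minimum distance classification of every pixel.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        return labels.reshape(H, W)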


4.2. Texture Classification

Texture classification involves deciding what texture category an observed image belongs to. In order to accomplish this, one needs to have a priori knowledge of the classes to be recognized. Once this knowledge is available and the texture features are extracted, one then uses classical pattern classification techniques in order to do the classification.

Examples where texture classification was applied as the appropriate texture processing method include the classification of regions in satellite images into categories of land use [41]. Texture classification was also used in automated paint inspection by Farrokhnia [83]. In the latter application, the categories were ratings of the quality of paints obtained from human experts. These quality rating categories were then used as the training samples for supervised classification of paint images using texture features obtained from multi-channel Gabor filters.

4.3. Texture Synthesis

Texture synthesis is a problem which is more popular in computer graphics. It is closely tied to some of the methods discussed above, so we give only a brief summary here. Many of the modeling methods are directly applicable to texture synthesis. The Markov random field models discussed in Section 3.3.1 can be directly used to generate textures by specifying the parameter vector \theta and sampling from the probability distribution function [62, 60]. The synthetic textures in Figure 2(b) are generated using a Gaussian Markov random field model and the algorithm in [87].

FIGURE 18. The results of integrating region-based and boundary-based processing using the multi-scale Gabor filtering method. (a) Original image consisting of five natural textures. (b) Seven-category region-based segmentation results. (c) Edge-based processing and texture edges detected. (d) New segmentation after combining region-based and edge-based results.


Fractals have become popular recently in computer graphics for generating realistic looking textured images [88]. A number of different methods have been proposed for synthesizing textures using fractal models. These methods include the midpoint displacement method and the Fourier filtering method. The midpoint displacement method has become very popular because it is a simple and fast algorithm, yet it can be used to generate very realistic looking textures. Here we only give the general outline of the algorithm; a much more detailed discussion can be found in [88]. The algorithm starts with a square grid representing the image with the four corners set to 0. It then displaces the heights at the midpoints of the four sides and the center point of the square region by random amounts and repeats the process recursively. Iteration n + 1 uses the grid consisting of the midpoints of the squares in the grid for iteration n. The height at each midpoint is interpolated between the endpoints and a random value is added to this value. The amount added is chosen from a normal distribution with zero mean and variance \sigma_n^2 at iteration n. In order to keep the self-similar nature of the surface, the variance is changed as a function of the iteration number. The variance at iteration n is given by

    \sigma_n^2 = r^{2nH}, \quad \text{where } r = 1/2    (11.26)

This results in a fractal surface with fractal dimension (3 - H). The heights of the fractal surface can be mapped onto intensity values to generate the textured images. The example image in Figure 2(c) was generated using this method.
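A compact Python sketch of the midpoint displacement algorithm follows: at each iteration the grid spacing halves, midpoints are interpolated from their neighbors, and zero-mean Gaussian noise with variance r^{2nH} (Eq. 11.26, r = 1/2) is added. The grid size and the value of H are illustrative, and the noise scaling is applied uniformly within an iteration, which is a simplification of the variants discussed in [88].

    import numpy as np

    def midpoint_displacement(levels=8, H=0.8, seed=0):
        rng = np.random.default_rng(seed)
        size = 2 ** levels + 1
        g = np.zeros((size, size))            # the four corners start at 0
        step = size - 1
        for n in range(levels):
            sigma = 0.5 ** (n * H)            # std. dev., so variance = r^(2nH)
            half = step // 2
            # Square centers: average of the four corners plus noise.
            for i in range(half, size, step):
                for j in range(half, size, step):
                    g[i, j] = 0.25 * (g[i - half, j - half] + g[i - half, j + half] +
                                      g[i + half, j - half] + g[i + half, j + half])
                    g[i, j] += rng.normal(0.0, sigma)
            # Edge midpoints: average of the available neighbors plus noise.
            for i in range(0, size, half):
                for j in range((i + half) % step, size, step):
                    pts = []
                    if i >= half: pts.append(g[i - half, j])
                    if i + half < size: pts.append(g[i + half, j])
                    if j >= half: pts.append(g[i, j - half])
                    if j + half < size: pts.append(g[i, j + half])
                    g[i, j] = np.mean(pts) + rng.normal(0.0, sigma)
            step = half
        return g   # heights; map to intensities for a textured image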

Other methods include mosaic models [89, 90]. This class of models can in turn be divided into subclasses of cell structure models and coverage models. In cell structure models, the textures are generated by tessellating the plane into cells (bounded polygons) and assigning each cell a gray level according to a set of probabilities. The type of tessellation determines what type of textures are generated. The possible tessellations include the triangular pattern, checkerboard patterns, the Poisson line model, the Delaunay model, and the occupancy model. In coverage models, the texture is obtained by a random arrangement of a set of geometric figures in the plane. The coverage models are also referred to as bombing models.

4.4. Shape from Texture

There are many cues in images that allow the viewer to make inferences about the three-dimensional shapes of objects and surfaces present in the image. Examples of such cues include the variations of shading on the object surfaces, or the relative configurations of boundaries and the types of junctions that allow one to infer three-dimensional shape from the line drawings of objects. The relation between the variations in texture properties and surface shape was first pointed out by Gibson [10].

Stevens observed that certain properties of texture are perceptually significant in the extraction of surface geometry [91]. There are three effects that surface geometry has on the appearance of texture in images: foreshortening and scaling of texture elements, and a change in their density. The foreshortening effect is due to the orientation of the surface on which the texture element lies. The scaling and density changes are due to the distance of the texture elements from the viewer. Stevens argued that texture density is not a useful measure for computing distance or orientation information because the density varies both with scaling and foreshortening. He concluded that the more perceptually stable property that allows one to extract surface geometry information is the direction in the image which is not foreshortened, called the characteristic dimension. Stevens suggested that one can compute relative depth information using the reciprocal of the scaling in the characteristic dimension. Using the relative depths, surface orientation can be estimated.

Bajcsy and Lieberman [92] used the gradient in texture element sizes to derive surface shape. They assumed a uniform texture element size on the three-dimensional surface in the scene. The relative distances are computed based on a gradient function in the image which was estimated from the texture element sizes. The estimation of the relative depth was done without using knowledge about the camera parameters and the original texture element sizes.

Witkin [93] used the distribution of edge orientations in the image to estimate the surface orientation. The surface orientation is represented by the slant (\sigma) and tilt (\tau) angles. Slant is the angle between a normal to the surface and a normal to the image plane. Tilt is the angle between the surface normal's projection onto the image plane and a fixed coordinate axis in the image plane. He assumed an isotropic texture (uniform distribution of edge orientations) on the original surface. As a result of the projection process, the texture edges are foreshortened in the direction of steepest inclination (slant angle). Note that this idea is related to Stevens' argument because the direction of steepest inclination is perpendicular to the characteristic dimension. Witkin formulated the surface shape recovery by relating the slant and tilt angles to the distribution of observed edge directions in the image. Let \beta be the original edge orientation (the angle between the tangent and a fixed coordinate axis on the plane S containing the tangent). Let \alpha^* be the angle between the x-axis in the image plane and the projected tangent. The angle \alpha^* is related to the slant and tilt angles by the following expression:

    \alpha^* = \arctan\left(\frac{\tan \beta}{\cos \sigma}\right) + \tau    (11.27)

Here \alpha^* is an observable quantity in the image and (\sigma, \tau) are the quantities to be computed. Witkin derived the expression for the conditional probabilities for the slant and tilt angles given the measured edge directions in the image and then used a maximum likelihood estimation method to compute the (\sigma, \tau). Let A^* = \{\alpha_1^*, \ldots, \alpha_n^*\} be a set of observed edge directions in the image. Then the conditional probabilities are given as:

    P(\sigma, \tau \mid A^*) = \frac{P(\sigma, \tau) \, P(A^* \mid \sigma, \tau)}{\iint P(\sigma, \tau) \, P(A^* \mid \sigma, \tau) \, d\sigma \, d\tau}    (11.28)

where P(\sigma, \tau) = \sin \sigma / \pi^2. The maximum likelihood estimate of P(\sigma, \tau \mid A^*) gives the desired surface orientation.
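Witkin's estimator can be checked numerically: grid-search the posterior of Eq. (11.28) over (sigma, tau), using Eq. (11.27) as the forward model. In the Python sketch below the likelihood P(A* | sigma, tau) is obtained as the change-of-variables density of alpha* under a uniform beta, which is one concrete reading of Witkin's derivation; the grid resolution and the synthetic test are illustrative.

    import numpy as np

    def alpha_star(beta, sigma, tau):
        # Eq. (11.27): image direction of a surface edge with direction beta.
        return np.arctan(np.tan(beta) / np.cos(sigma)) + tau

    def log_likelihood(alphas, sigma, tau):
        # Density of alpha* for beta uniform on (-pi/2, pi/2): the Jacobian
        # |d beta / d alpha*| divided by pi.
        a = alphas - tau
        c = np.cos(sigma)
        dens = c / (np.pi * (np.cos(a) ** 2 + (c * np.sin(a)) ** 2))
        return np.log(dens).sum()

    def estimate(alphas, n_sigma=60, n_tau=90):
        sigmas = np.linspace(0.01, np.pi / 2 - 0.01, n_sigma)
        taus = np.linspace(0.0, np.pi, n_tau, endpoint=False)
        post = np.array([[log_likelihood(alphas, s, t) + np.log(np.sin(s))
                          for t in taus] for s in sigmas])    # prior ~ sin(sigma)
        i, j = np.unravel_index(post.argmax(), post.shape)
        return sigmas[i], taus[j]

    # Synthetic check: project uniform betas through Eq. (11.27), then recover.
    rng = np.random.default_rng(1)
    obs = alpha_star(rng.uniform(-np.pi / 2, np.pi / 2, 500), 1.0, 0.8)
    print(estimate(obs))     # should be close to (1.0, 0.8)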

Blostein and Ahuja [57] used the scaling effect to extract surface information. They integrated the process of texture element extraction with the surface geometry computation. Texture element extraction is performed at multiple scales and the subset that yields a good surface fit is selected. The surfaces are assumed planar for simplicity. Texture elements are defined to be circular regions of uniform intensity which are extracted by filtering the image with the \nabla^2 G and \partial/\partial\sigma(\nabla^2 G) operators and comparing the filter responses to those of an ideal disk (here \sigma is the size of the Gaussian G). At the extremum points of the image filtered by \nabla^2 G, the diameter (D) and contrast (C) of the best fitting disks are computed. The convolution is done at multiple scales. Only those disks whose computed diameters are close to the size of the Gaussian are retained. As a result, blob-like texture elements of different sizes are detected.

FIGURE 19. The projective distortion of a texture element in the image: a texture element of length F_p on the textured plane, with slant \sigma, projects to an image texture element of length F_i.

FIGURE 20. Examples of shape from texture computation using Blostein and Ahuja's algorithm [57]. (a) An image of a field of rocks and the computed slant and tilt of the plane (\sigma = 62.5°, \tau = 65°). (b) An image of a sunflower field and the extracted slant and tilt values (\sigma = 70°, \tau = 95°).


The geometry of the projection is shown in Figure 19. Let \sigma and \tau be the slant and tilt of the surface. The image of a texture element has the foreshortened dimension F_i and the characteristic dimension U_i. The area A_i of the image texel is proportional to the product F_i U_i for compact shapes. The expression for the area, A_i, of the image of a texture element is given by:

    A_i = A_C (1 - \tan \theta \tan \sigma)^3    (11.29)

where A_C is the area that would be measured for the texel at the center of the image. The angle \theta is given by the expression

    \theta = \arctan\left((x \cos \tau + y \sin \tau) \frac{r}{f}\right)    (11.30)

Here, r is the physical width of the image, r/f is a measure of the field of view of the camera, and (x, y) denotes pixel coordinates in the image. A_i can be measured in the image. To find the surface orientation, an accumulator array consisting of the parameters (A_C, \sigma, \tau) is constructed. For each combination of parameter values, a possible planar fit is computed. The plane with the highest fit rating is selected as the surface orientation, and the texture elements that support this fit are selected as the true texture elements. Some example images and the computed slant and tilt values are shown in Figure 20.

5. Summary

This chapter has reviewed the basic concepts and various methods and techniques for processing textured images. Texture is a prevalent property of most physical surfaces in the natural world. It also arises in many applications such as satellite imagery and printed documents. Many common low level vision algorithms such as edge detection break down when applied to images that contain textured surfaces. It is therefore crucial that we have robust and efficient methods for processing textured images. Texture processing has been successfully applied to practical application domains such as automated inspection and satellite imagery. It is also going to play an important role in the future, as we can see from the promising application of texture to a variety of different application domains.

Acknowledgment

The support of the National Science Foundation through grants IRI-8705256 and CDA-8806599 is gratefully acknowledged. We thank the Norwegian Computing Center for providing the SAR images shown in Figure 5. We also thank our colleagues Dr. Richard C. Dubes and Dr. Patrick J. Flynn for the invaluable comments and feedback they provided during the preparation of this document.


References

1. Coggins, J. M., "A Framework for Texture Analysis Based on Spatial Filtering," Ph.D. Thesis, Computer Science Department, Michigan State University, East Lansing, Michigan, 1982.

2. Tamura, H., S. Mori, and Y. Yamawaki, "Textural Features Corresponding to Visual Perception," IEEE Transactions on Systems, Man, and Cybernetics, SMC-8, pp. 460-473, 1978.

3. Sklansky, J., "Image Segmentation and Feature Extraction," IEEE Transactions on Systems, Man, and Cybernetics, SMC-8, pp. 237-247, 1978.

4. Haralick, R. M., "Statistical and Structural Approaches to Texture," Proceedings of the IEEE, 67, pp. 786-804, 1979.

5. Richards, W. and A. Polit, "Texture matching," Kybernetik, 16, pp. 155-162, 1974.

6. Zucker, S. W. and K. Kant, "Multiple-level Representations for Texture Discrimination," In Proceedings of the IEEE Conference on Pattern Recognition and Image Processing, pp. 609-614, Dallas, TX, 1981.

7. Hawkins, J. K., "Textural Properties for Pattern Recognition," In Picture Processing and Psychopictorics, B. Lipkin and A. Rosenfeld (editors), Academic Press, New York, 1969.

8. Brodatz, P., Textures: A Photographic Album for Artists and Designers, Dover Publications, New York, 1966.

9. Chen, C. C., Markov Random Fields in Image Analysis, Ph.D. Thesis, Computer Science Department, Michigan State University, East Lansing, MI, 1988.

10. Gibson, J. J., The Perception of the Visual World, Houghton Mifflin, Boston, MA, 1950.

11. Julesz, B., E. N. Gilbert, L. A. Shepp, and H. L. Frisch, "Inability of humans to discriminate between visual textures that agree in second-order statistics - revisited," Perception, 2, pp. 391-405, 1973.

12. Julesz, B., "Visual Pattern Discrimination," IRE Transactions on Information Theory, IT-8, pp. 84-92, 1962.

13. Julesz, B., "Experiments in the visual perception of texture," Scientific American, 232, pp. 34-43, 1975.

14. Julesz, B., "Nonlinear and Cooperative Processes in Texture Perception," In Theoretical Approaches in Neurobiology, T. P. Werner and E. Reichardt (editors), MIT Press, Cambridge, MA, pp. 93-108, 1981.

15. Julesz, B., "Textons, the Elements of Texture Perception, and Their Interactions," Nature, 290, pp. 91-97, 1981.

16. Julesz, B., "A Theory of Preattentive Texture Discrimination Based on First-Order Statistics of Textons," Biological Cybernetics, 41, pp. 131-138, 1981.

17. Caelli, T., Visual Perception, Pergamon Press, 1981.


18. Beck, J., A. Sutter, and R. Ivry, "Spatial Frequency Channels and Perceptual Grouping in Texture Segregation," Computer Vision, Graphics, and Image Processing, 37, pp. 299-325, 1987.

19. Campbell, F. W. and J. G. Robson, "Application of Fourier Analysis to the Visibility of Gratings," Journal of Physiology, 197, pp. 551-566, 1968.

20. Devalois, R. L., D. G. Albrecht, and L. G. Thorell, "Spatial-frequency selectivity of cells in macaque visual cortex," Vision Research, 22, pp. 545-559, 1982.

21. Dewaele, P., P. Van Gool, and A. Oosterlinck, "Texture Inspection with Self-Adaptive Convolution Filters," In Proceedings of the 9th International Conference on Pattern Recognition, pp. 56-60, Rome, Italy, Nov. 14-17, 1988.

22. Chetverikov, D., "Detecting Defects in Texture," In Proceedings of the 9th International Conference on Pattern Recognition, pp. 61-63, Rome, Italy, Nov. 14-17, 1988.

23. Chen, J. and A. K. Jain, "A Structural Approach to Identify Defects in Textured Images," In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 29-32, Beijing, 1988.

24. Conners, R. W., C. W. McMillin, K. Lin, and R. E. Vasquez-Espinosa, "Identifying and Locating Surface Defects in Wood: Part of an Automated Lumber Processing System," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, pp. 573-583, 1983.

25. Siew, L. H., R. M. Hodgson, and E. J. Wood, "Texture Measures for Carpet Wear Assessment," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-10, pp. 92-105, 1988.

26. Jain, A. K., F. Farrokhnia, and D. H. Alman, "Texture Analysis of Automotive Finishes," In Proceedings of the SME Machine Vision Applications Conference, pp. 1-16, Detroit, MI, Nov. 1990.

27. Sutton, R. and E. L. Hall, "Texture Measures for Automatic Classification of Pulmonary Disease," IEEE Transactions on Computers, C-21, pp. 667-676, 1972.

28. Harms, H., U. Gunzer, and H. M. Aus, "Combined Local Color and Texture Analysis of Stained Cells," Computer Vision, Graphics, and Image Processing, 33, pp. 364-376, 1986.

29. Landeweerd, G. H. and E. S. Gelsema, "The Use of Nuclear Texture Parameters in the Automatic Analysis of Leukocytes," Pattern Recognition, 10, pp. 57-61, 1978.

30. Insana, M. F., R. F. Wagner, B. S. Garra, D. G. Brown, and T. H. Shawker, "Analysis of Ultrasound Image Texture via Generalized Rician Statistics," Optical Engineering, 25, pp. 743-748, 1986.

31. Chen, C. C., J. S. Daponte, and M. D. Fox, "Fractal Feature Analysis and Classification in Medical Imaging," IEEE Transactions on Medical Imaging, 8, pp. 133-142, 1989.

32. Lundervold, A., Ultrasonic Tissue Characterization - A Pattern Recognition Approach, Technical Report, Norwegian Computing Center, Oslo, Norway, 1992.

33. Wang, D. and S. N. Srihari, "Classification of Newspaper Image Blocks Using Texture Analysis," Computer Vision, Graphics, and Image Processing, 47, pp. 327-352, 1989.


34. Wahl, F. M., K. Y. Wong, and R. G. Casey, "Block Segmentation and Text Extraction in Mixed Text/Image Documents," Computer Graphics and Image Processing, 20, pp. 375-390, 1982.

35. Fletcher, J. A. and R. Kasturi, "A Robust Algorithm for Text String Separation from Mixed Text/Graphics Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-10, pp. 910-918, 1988.

36. Taxt, T., P. J. Flynn, and A. K. Jain, "Segmentation of Document Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-11, pp. 1322-1329, 1989.

37. Jain, A. K. and S. K. Bhattacharjee, "Text Segmentation Using Gabor Filters for Automatic Document Processing," to appear in Machine Vision and Applications.

38. Jain, A. K. and S. K. Bhattacharjee, "Address Block Location on Envelopes Using Gabor Filters," to appear in Proceedings of the 11th International Conference on Pattern Recognition, The Hague, Netherlands, August 1992.

39. Jain, A. K., S. K. Bhattacharjee, and Y. Chen, "On Texture in Document Images," to appear in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Champaign, IL, June 1992.

40. Jain, A. K. and F. Farrokhnia, "Unsupervised Texture Segmentation Using Gabor Filters," Pattern Recognition, 24, pp. 1167-1186, 1991.

41. Haralick, R. M., K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, SMC-3, pp. 610-621, 1973.

42. Rignot, E. and R. Kwok, "Extraction of Textural Features in SAR Images: Statistical Model and Sensitivity," In Proceedings of the International Geoscience and Remote Sensing Symposium, pp. 1979-1982, Washington, DC, 1990.

43. Schistad, A. H. and A. K. Jain, "Texture Analysis in the Presence of Speckle Noise," In Proceedings of the IEEE Geoscience and Remote Sensing Symposium, Houston, TX, May 1992.

44. Du, L. J., "Texture Segmentation of SAR Images Using Localized Spatial Filtering," In Proceedings of the International Geoscience and Remote Sensing Symposium, pp. 1983-1986, Washington, DC, 1990.

45. Lee, J. H. and W. D. Philpot, "A Spectral-Textural Classifier for Digital Imagery," In Proceedings of the International Geoscience and Remote Sensing Symposium, pp. 2005-2008, Washington, DC, 1990.

46. Tomita, F. and S. Tsuji, Computer Analysis of Visual Textures, Kluwer Academic Publishers, Boston, 1990.

47. Laws, K. I., Textured Image Segmentation, Ph.D. Thesis, University of Southern California, 1980.

48. Picard, R., I. M. Elfadel, and A. P. Pentland, "Markov/Gibbs Texture Modeling: Aura Matrices and Temperature Effects," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 371-377, Maui, Hawaii, 1991.

49. Tuceryan, M. and A. K. Jain, "Texture Segmentation Using Voronoi Polygons," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-12, pp. 211-216, 1990.


50. Ahuja, N., "Dot Pattern Processing Using Voronoi Neighborhoods," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-4, pp. 336-343, 1982.

51. Voronoi, G., "Nouvelles applications des paramètres continus à la théorie des formes quadratiques. Deuxième mémoire: Recherches sur les parallélloèdres primitifs," J. Reine Angew. Math., 134, pp. 198-287, 1908.

52. Shamos, M. I. and D. Hoey, "Closest-point Problems," In 16th Annual Symposium on Foundations of Computer Science, pp. 131-162, 1975.

53. Preparata, F. P. and M. I. Shamos, Computational Geometry, Springer-Verlag, New York, 1985.

54. Marr, D., Vision, Freeman, San Francisco, 1982.

55. Hu, M. K., "Visual Pattern Recognition by Moment Invariants," IRE Transactions on Information Theory, IT-8, pp. 179-187, 1962.

56. Voorhees, H. and T. Poggio, "Detecting textons and texture boundaries in natural images," In Proceedings of the First International Conference on Computer Vision, pp. 250-258, London, 1987.

57. Blostein, D. and N. Ahuja, "Shape from Texture: Integrating Texture-Element Extraction and Surface Estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-11, pp. 1233-1251, 1989.

58. Zucker, S. W., "Toward a Model of Texture," Computer Graphics and Image Processing, 5, pp. 190-202, 1976.

59. Fu, K. S., Syntactic Pattern Recognition and Applications, Prentice-Hall, New Jersey, 1982.

60. Cross, G. C. and A. K. Jain, "Markov Random Field Texture Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, pp. 25-39, 1983.

61. Chellappa, R. and S. Chatterjee, "Classification of Textures Using Gaussian Markov Random Fields," IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-33, pp. 959-963, 1985.

62. Khotanzad, A. and R. Kashyap, "Feature Selection for Texture Recognition Based on Image Synthesis," IEEE Transactions on Systems, Man, and Cybernetics, 17, pp. 1087-1095, 1987.

63. Cohen, F. S. and D. B. Cooper, "Simple Parallel Hierarchical and Relaxation Algorithms for Segmenting Noncausal Markovian Random Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9, pp. 195-219, 1987.

64. Therrien, C. W., "An Estimation-Theoretic Approach to Terrain Image Segmentation," Computer Vision, Graphics, and Image Processing, 22, pp. 313-326, 1983.

65. Geman, S. and D. Geman, "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, pp. 721-741, 1984.

66. Besag, J., "Spatial Interaction and the Statistical Analysis of Lattice Systems," Journal of the Royal Statistical Society, B-36, pp. 344-348, 1974.


67. Derin, H. and H. Elliott, "Modeling and segmentation of noisy and textured images using Gibbs random fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9, pp. 39-55, 1987.

68. Mandelbrot, B. B., The Fractal Geometry of Nature, Freeman, San Francisco, 1983.

69. Pentland, A., "Fractal-based description of natural scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, pp. 661-674, 1984.

70. Keller, J. M., S. Chen, and R. M. Crownover, "Texture Description and Segmentation through Fractal Geometry," Computer Vision, Graphics, and Image Processing, 45, pp. 150-166, 1989.

71. Voss, R., "Random fractals: Characterization and measurement," In Scaling Phenomena in Disordered Systems, R. Pynn and A. Skjeltorp (editors), Plenum, New York, 1986.

72. Super, B. J. and A. C. Bovik, "Localized Measurement of Image Fractal Dimension Using Gabor Filters," Journal of Visual Communication and Image Representation, 2, pp. 114-128, 1991.

73. Ohanian, P. P. and R. C. Dubes, "Performance Evaluation for Four Classes of Textural Features," submitted to Pattern Recognition.

74. Georgeson, M. A., "Spatial Fourier Analysis and Human Vision," Chapter 2 in Tutorial Essays in Psychology: A Guide to Recent Advances, N. S. Sutherland (editor), Vol. 2, Lawrence Erlbaum Associates, Hillsdale, NJ, 1979.

75. Malik, J. and P. Perona, "Preattentive Texture Discrimination with Early Vision Mechanisms," Journal of the Optical Society of America, Series A, 7, pp. 923-932, 1990.

76. Unser, M. and M. Eden, "Nonlinear Operators for Improving Texture Segmentation Based on Features Extracted by Spatial Filtering," IEEE Transactions on Systems, Man, and Cybernetics, SMC-20, pp. 804-815, 1990.

77. Reed, T. R. and H. Wechsler, "Segmentation of Textured Images and Gestalt Organization Using Spatial/Spatial-Frequency Representations," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-12, pp. 1-12, 1990.

78. Tuceryan, M., "Moment Based Texture Segmentation," In Proceedings of the 11th International Conference on Pattern Recognition, The Hague, Netherlands, August 1992.

79. Coggins, J. M. and A. K. Jain, "A Spatial Filtering Approach to Texture Analysis," Pattern Recognition Letters, 3, pp. 195-203, 1985.

80. Daugman, J. G., "Two-dimensional spectral analysis of cortical receptive field profiles," Vision Research, 20, pp. 847-856, 1980.

81. Turner, M. R., "Texture Discrimination by Gabor Functions," Biological Cybernetics, 55, pp. 71-82, 1986.

82. Clark, M. and A. C. Bovik, "Texture segmentation using Gabor modulation/demodulation," Pattern Recognition Letters, 6, pp. 261-267, 1987.

83. Farrokhnia, F., Multi-channel filtering techniques for texture segmentation and surface quality inspection, Ph.D. Thesis, Computer Science Department, Michigan State University, 1990.


84. Daugman, J. G., "Uncertainty relation for resolution in space, spatial-frequency, and orientation optimized by two-dimensional visual cortical filters," Journal of the Optical Society of America A, 2, pp. 1160-1169, 1985.

85. Eom, K.-B. and R. L. Kashyap, "Texture and Intensity Edge Detection with Random Field Models," In Proceedings of the Workshop on Computer Vision, pp. 29-34, Miami Beach, FL, 1987.

86. Du Buf, J. M. H., M. Kardan, and M. Spann, "Texture Feature Performance for Image Segmentation," Pattern Recognition, 23, pp. 291-309, 1990.

87. Chellappa, R., S. Chatterjee, and R. Bagdazian, "Texture Synthesis and Compression Using Gaussian-Markov Random Field Models," IEEE Transactions on Systems, Man, and Cybernetics, SMC-15, pp. 298-303, 1985.

88. Peitgen, H. O. and D. Saupe, The Science of Fractal Images, Springer-Verlag, New York, 1988.

89. Ahuja, N. and A. Rosenfeld, "Mosaic Models for Textures," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-3, pp. 1-11, 1981.

90. Ahuja, N., "Texture," In Encyclopedia of Artificial Intelligence, pp. 1101-1115, 1987.

91. Stevens, K. A., Surface Perception from Local Analysis of Texture and Contour, MIT Artificial Intelligence Laboratory Technical Report AI-TR 512, 1980.

92. Bajcsy, R. and L. Lieberman, "Texture Gradient as a Depth Cue," Computer Graphics and Image Processing, 5, pp. 52-67, 1976.

93. Witkin, A. P., "Recovering Surface Shape and Orientation from Texture," Artificial Intelligence, 17, pp. 17-45, 1981.
