Color Image Processing [Basics and Special Issue Overview]

[H. Joel Trussell, Eli Saber, and Michael Vrhel]

IEEE Signal Processing Magazine, January 2005. 1053-5888/05/$20.00 © 2005 IEEE. Flower photo © 1991 21st Century Media; camera and background photo © Digital Vision Ltd.

Humans have always seen the world in color, but only recently have we been able to generate vast quantities of color images with such ease. In the last three decades, we have seen a rapid and enormous transition from grayscale images to color ones. Today, we are exposed to color images on a daily basis in print, photographs, television, computer displays, and cinema, where color now plays a vital role in the advertising and dissemination of information throughout the world. Color monitors, printers, and copiers now dominate the office and home environments, with color becoming increasingly cheaper and easier to generate and reproduce. Color demands have soared in the marketplace and are projected to do so for years to come. With this rapid progression, color and multispectral properties of images are becoming increasingly crucial to the field of image processing, often extending and/or replacing previously known grayscale techniques. We have seen the birth of color algorithms that range from direct extensions of grayscale ones, where images are treated as three monochrome separations, to more sophisticated approaches that exploit the correlations among the color bands, yielding more accurate results. Hence, it is becoming increasingly necessary for the signal processing community to understand the fundamental differences between color and grayscale imaging. There are more than a few extensions of concepts





and perceptions that must be understood in order to produce successful research and products in the color world.

Over the last three decades, we have seen several important contributions in the field of color image processing. While there have been many early papers that address various aspects of color images, it is only recently that a more complete understanding of color vision, colorimetry, and color appearance has been applied to the design of imaging systems and image processing methodologies. The first contributions in this area were those that changed the formulation of color signals from simple algebraic equations to matrix representation [8], [9], [15]. More powerful use of the matrix algebraic representation was presented in [13], where set theoretic methods were introduced to color processing. The first overview extending signal processing concepts to color was presented in IEEE Signal Processing Magazine in 1993 [14]. This was followed by a special issue on color image processing in IEEE Transactions on Image Processing in July 1997, where a complete review of the state of the art at that time was found in [11]. More recently, we have seen the introduction of several texts that address color image processing [10], [12].

The articles selected for this special section concentrate, in general, on the processing and application of color for input and output devices. They have been chosen to provide the reader with a broad overview of color techniques and usage for hardware devices. However, innovative color applications extend far beyond the topics covered within this issue. To this effect, color has been widely utilized and exploited for its properties in many applications, including multimedia and video, database indexing and retrieval, and exchange of information, to name a few. Furthermore, the extension of color concepts to multispectral imaging has been shown to provide significant increases in accuracy by recording more than the usual three spectral bands. Applications of these special devices have been primarily concentrated in the fine arts [17].

To avoid much duplication among the selected articles, we have chosen to focus the balance of this article on a brief review of the fundamental concepts of color, thereby allowing each of the subsequent articles to concentrate on their particular topic. This introduction is followed by “Color Image Generation and Display Technologies” [1], where we describe the capabilities and limitations of the primary hardware systems responsible for creating and displaying color images. In particular, we focus our attention on the most popular input devices, namely scanners and digital cameras, and output devices such as LCD displays and ink jet and laser printers. With digital still cameras becoming so common and having outstripped film cameras in total revenue generated, it is appropriate to review their fundamentals herein. This is accomplished by the article “Color Image Processing Pipeline” [3]. In this article, the authors outline the

steps required to create a digital image from the analog radiance information of a natural scene. The following article, titled “Demosaicking: Color Filter Array Interpolation” [5], focuses on one of the most important steps in a consumer digital camera imaging pipeline, where the methods of decoding mosaics of color pixels to produce high-resolution continuous color images are discussed in detail. Having seen an example of an imaging device, it is natural to discuss the underlying optimization techniques that take into consideration all the individual steps in a given system in an attempt to maximize its performance. The article titled “System Optimization in Digital Color Imaging” [4] chooses two examples of color printing and discusses the methods and results of optimizing them, clearly highlighting the total gain achieved by considering the system as a whole.

At this point, the focus of the issue shifts from hardware and system centric to image processing techniques that form the backbone of many color systems and applications. The first of these is discussed in “Detection and Classification of Edges in Color Images” [6], where the emphasis is placed on detecting discontinuities, i.e., transitions from one region to another, instead of similarities in a given image. One of the most fundamental steps in many applications and in the design of image processing techniques is to ensure that a given image is optimized for noise and enhanced in quality. This is outlined in the article “Vector Filtering for Color Imaging” [7], where the authors discuss and compare the various techniques, outlining their strengths, areas of improvement, and future research directions. Finally, the article titled “Digital Color Halftoning” [2] provides a complete review of the methods employed by printers to reproduce color images and the challenges they face in ensuring that these images are free of visual artifacts.

FUNDAMENTALS OF THE EYE

It is assumed that the readers of this issue are familiar with the basics of the human eye. Recommended texts that describe the eye in detail include those by Wandell [16] for an overall study of psychovisual phenomena, Barlow and Mollon [21] for physiology, and Wyszecki and Stiles [22] for a brief description of physiology and its relationship to photometric requirements. This article provides a brief overview to set the stage for the upcoming articles.

Incident light is focused by the cornea and lens to form an image of the object being viewed on the retina. The retina, at the back of the eye, contains the rods and cones that act as sensors for the human imaging system and operate in different ways and at various light levels. The rods and cones are distributed unevenly throughout the retina. The rods are utilized for monochromatic vision at low light levels, termed “scotopic vision.” The cones, on the other hand, are the primary source of color


vision in bright light, a condition known as “photopic vision.” Three different types can be found in normal observers. These are called short (S), medium (M), and long (L) due to their enhanced sensitivity to short, medium, and long wavelengths of light. In particular, the S, M, and L cones are most sensitive to blue, green, and yellow-green light, respectively (Figure 1), providing a qualitative basis for representing a digital color image with red, green, and blue (RGB) monochrome images.

IMAGE FORMATION

Before we begin to characterize the spectrum of a pixel, we need to define the physical quantity that we are measuring. For cameras, the quantity of interest is the radiant energy spectrum that is obtained in a solid angle emanating from the pixel on the focal plane. For scanners, the quantity of interest is the spectral reflectance of a pixel-shaped area on the image. The reflectance can be obtained since the radiant energy spectrum of the illumination of the scanner is known. Figure 2 provides an illustration of this process for a digital camera. From a physical point of view, the radiant power emitted by a source is measured in watts. Likewise, the power received by a detector is measured in watts. Since the power is a function of wavelength, the unit should be watts/nanometer. The total power integrated over all wavelengths is called “radiant flux.” We denote radiant flux per nanometer by r(λ). Discussions of radiometry are found in [22]. The details include factors relating to the angles of viewing, angles of reflectance, and the luminance efficiency function of the eye.

MATHEMATICAL DEFINITION OF COLOR MATCHING

A vector space approach to describing color is useful for expressing and solving complex problems in color imaging. For this reason, we will use this notation to describe the fundamentals of color matching. Let the N × 3 matrix S = [s_1, s_2, s_3] represent the response of the eye, where the N-vectors s_i correspond to the response of the ith type of sensor (cone) (Figure 1). A given visible spectrum can be represented by an N-vector, f, a function whose value is radiant energy. Hence, the response of the sensors to the input spectrum is a three-vector, c, obtained by

c = S^T f.    (1)

Two visible spectra, represented by N-vectors f and g, are said to have the same color if they appear the same to a human observer. In our linear model, this implies that even if f and g represent different spectral distributions, they portray equivalent colors if

S^T f = S^T g.    (2)

From this, it can be easily seen that many different spectra can result in the same color appearance to a given observer. This fascinating phenomenon is known as metamerism (meh tam er ism), and the two spectra are termed metamers. In essence, metamerism is basically color aliasing and can be described by generalizing the well-known Shannon sampling theorem frequently encountered in communications and digital signal processing. It should be noted, however, that the level of metamerism may vary across observers, depending on their individual cone sensitivities.
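The aliasing view of (1) and (2) can be illustrated numerically: any perturbation lying in the null space of S^T changes the spectrum without changing the sensor response. The sketch below uses hypothetical Gaussian-shaped sensitivities, not measured cone fundamentals, purely to demonstrate the construction of a metameric pair.

```python
import numpy as np

# Toy metamerism ("color aliasing") demo. The 31 x 3 sensitivity matrix S
# below is a hypothetical stand-in for the cone responses of Figure 1.
wl = np.linspace(400, 700, 31)                  # wavelength samples (nm)
peaks = [440, 545, 570]                         # rough S, M, L peak locations
S = np.stack([np.exp(-((wl - p) / 60.0) ** 2) for p in peaks], axis=1)

f = np.exp(-((wl - 550) / 80.0) ** 2)           # one visible spectrum

# Any g = f + n with n in the null space of S^T satisfies S^T g = S^T f,
# so f and g are metamers for this (model) observer.
_, _, Vt = np.linalg.svd(S.T)
n = Vt[-1]                                      # a null-space direction of S^T
g = f + 0.1 * n

assert np.allclose(S.T @ f, S.T @ g)            # identical sensor responses
assert not np.allclose(f, g)                    # yet different spectra
```

Note that g is only an algebraic construction; a physically realizable metamer must also remain nonnegative at every wavelength.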

Since it is not practical to characterize the spectral sensitivity of all observers, the color community, in the form of the Commission Internationale de l’Eclairage (CIE), has tabulated a standard set of color matching functions that represent the response of a “standard observer” to matching monochromatic light at various wavelengths with varying intensities of three primary lights. These functions are shown in Figure 3 as dashed lines. If we let the matrix A_rgb represent the relative amount of each of the primaries required to match a standard intensity of monochromatic light, then it can be shown that S can be obtained by a linear transformation of the color matching matrix, A_rgb. Indeed, any matrix that can be obtained by a linear transformation from A_rgb or S can be utilized for color matching. The derivations are discussed in detail in [14].

[FIG2] Image formation process in a digital still camera (lens, image plane, 2-D CCD array on the focal plane).

[FIG1] Cone sensitivities. (Axes: wavelength in nm, 400–700; relative sensitivity, 0–1.)

In practice, it is desirable to have a matrix of color matching functions that are nonnegative, so they can be physically realized as optical filters. This problem was addressed by the CIE in 1931, yielding the XYZ color matching functions shown in Figure 3 as solid lines. Hence, the matrix A can now be used to represent these functions. The Y value was chosen to be the luminous efficiency function, making it equivalent to the photometric luminance value. This standardization led to the precise definition of colorimetric quantities, such as tristimulus values and chromaticity.

The term “tristimulus values” refers to the vector of values obtained from a radiant spectrum, r, by t = [X, Y, Z]^T = A^T r (we recognize the inconsistency of denoting the elements of t by X, Y, Z, but since the color world still uses the X, Y, Z terms, we use them here). The chromaticity is then obtained by normalizing the tristimulus values, yielding

x = X/(X + Y + Z)
y = Y/(X + Y + Z)
z = Z/(X + Y + Z).

Since x + y + z = 1, any two chromaticity coordinates are sufficient to characterize the chromaticity of a spectrum. By convention, the x and y terms are used as the standard. A chromaticity diagram constructed from x and y (as shown in Figure 6) is often employed to describe the range of colors that can be produced by a color output device. This maximum range is bounded by a horseshoe- or shark-fin-shaped curve, which represents the chromaticities of monochromatic light.
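The normalization above is simple enough to state directly in code. The XYZ triple used as input here is only an illustrative example (roughly the tristimulus values of an sRGB-red-like stimulus), not a measured color.

```python
# Chromaticity coordinates from tristimulus values, per the equations above.
def chromaticity(t):
    X, Y, Z = t
    s = X + Y + Z
    return X / s, Y / s, Z / s   # (x, y, z) with x + y + z = 1

# Example input: an approximate, sRGB-red-like XYZ triple (illustrative only).
x, y, z = chromaticity((41.2, 21.3, 1.95))
assert abs(x + y + z - 1.0) < 1e-12   # redundancy: z is determined by x and y
```

Since z = 1 − x − y, plotting only (x, y) loses no chromaticity information, which is why the diagram of Figure 6 is two-dimensional.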

MATHEMATICS OF COLOR REPRODUCTION

To reproduce a color image, it is necessary to generate new vectors in N-space (spectral space) from those obtained by a given multispectral sensor. Since the eye can be represented as a three-channel sensor, it is most common for a multispectral sensor to use three types of filters. Hence, the characteristics of the resulting multispectral response functions associated with the input and output devices are critical aspects for color reproduction. Output devices can be characterized as being additive or subtractive. Additive devices, such as cathode ray tubes (CRTs), produce light of varying spectral composition as viewed by the human observer. On the other hand, subtractive devices, such as ink-jet printers, produce filters that attenuate portions of an illuminating spectrum. We will discuss both types in the following, clearly highlighting their differences.

ADDITIVE COLOR SYSTEMS

In additive devices, various colors are generated by combining light sources with different wavelengths. These light sources are known as primaries. An example of this is illustrated in Figure 4. In the figure, it can be easily seen that cyan, magenta, yellow, and white are generated by combining blue and green; red and blue; red and green; and red, green, and blue, respectively. The red, green, and blue channels of an example color image are also shown for illustration purposes. Other colors can be generated by varying the intensities of the red, green, and blue primaries. For instance, the screen of a television, or CRT, is covered with phosphor dots that are clustered in groups. Each group contains these primary colors: red, green, and blue, which are combined in a weighted fashion to produce a wide range of colors. Additive color systems are characterized by their corresponding multispectral output response. For instance, a three-color monitor is represented by the N × 3 matrix, E, which serves the same purpose as the primaries in the color matching experiment. The amount of each primary is controlled by a three-vector c. The spectrum of the output is then computed as follows:

f = Ec.    (3)

Hence, the tristimulus values associated with a standard observer who is viewing the screen are given by

t = A^T f = A^T E c.    (4)

There are several challenges that need to be considered when dealing with additive systems. One is to choose the control values so that the output matches the intended target values. This is not feasible for all possible colors due to the power limitations of the output device. Furthermore, the control values cannot be negative, i.e., we cannot produce negative light. In addition, we have the question of estimating the best values of c from some recorded data.
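When the target tristimulus values are achievable, (4) reduces to a 3 × 3 linear solve for the control vector. The sketch below uses random, fixed matrices as stand-ins for the color matching functions A and primary spectra E (they are not real CMF or phosphor data); the point is only the algebraic structure of the problem.

```python
import numpy as np

# Solving the additive-control problem of (4): given a target t, find c
# such that A^T E c = t. A and E here are hypothetical stand-in spectra.
rng = np.random.default_rng(0)
N = 31
A = rng.random((N, 3))          # stand-in color matching functions (N x 3)
E = rng.random((N, 3))          # stand-in display primary spectra (N x 3)

M = A.T @ E                     # 3 x 3 map from control values to tristimulus
c_true = np.array([0.2, 0.7, 0.4])
t = M @ c_true                  # a target known to be displayable

c = np.linalg.solve(M, t)       # exact recovery when t is achievable
assert np.allclose(c, c_true)
```

For targets outside the device's reach, the solved c falls outside [0, 1] (or goes negative, i.e., "negative light"); handling such out-of-gamut targets gracefully is part of the gamut-mapping problem discussed later.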

[FIG3] CIE RGB and XYZ color matching functions: RGB are shown in dashed lines, and XYZ are shown in solid lines. (Axes: wavelength in nm, 400–700; value, −1 to 2.)


SUBTRACTIVE COLOR SYSTEMS

Subtractive systems are characterized by the fundamental property that color is obtained by removing (subtracting) selected portions of a source spectrum. This is illustrated in Figure 5, where cyan, magenta, and yellow colorants are used to absorb, respectively, the red, green, and blue spectral components from white light. The cyan, magenta, and yellow channels of a color image are also shown for illustration purposes. Hence, each colorant absorbs its complementary color and transmits the remainder of the spectrum. The amount of light removed, by blocking or absorption, is determined by the concentration and material properties of the colorant. The color is generally presented on a transmissive medium like transparencies or on a reflective medium like paper. While the colorant for subtractive systems may be inks, dyes, wax, or toners, the same mathematical representation outlined previously can be used to approximate them. The main property of interest for imaging in subtractive systems is the optical density. The transmission of an optically transmissive material is defined as the ratio of the intensity of the light that passes through the material to the intensity of the source. This is illustrated by

T = I_out / I_in.    (5)

As a result, the optical density is defined by

d = −log_10(T)    (6)

and is related to the physical density of the material. The inks can be characterized by their density spectra, the N × 3 matrix D. Hence, the spectrum that is seen by the observer is the product of an illumination source, the transmission of the ink, and the reflectance of the paper. Since the transmissions of the individual inks reduce the light proportionately, the output at each wavelength, λ, is given by

g(λ) = l(λ) t_1(λ) t_2(λ) t_3(λ)    (7)

where t_i(λ) is the transmission of the ith ink and l(λ) is the intensity of the illuminant. For simplification, the reflectance of the paper is assumed perfect and is assigned the value of 1.0. The transmission of a particular colorant is related logarithmically to the concentration of the ink on the page. The observed spectrum is obtained mathematically by

g = L [10^{−Dc}]    (8)

where L is a diagonal matrix representing an illuminant spectrum and c is the concentration of the colorant. The concentration values are held between zero and unity, and the matrix of density spectra, D, represents the densities at the maximum concentration. The exponential term is computed componentwise, i.e.,

10^r = [10^{r_1} 10^{r_2} . . . 10^{r_N}]^T.    (9)

This simple model ignores nonlinear interactions between colorant layers. For a reflective medium, the model requires an additional diagonal matrix, which represents the reflectance spectrum of the surface. For simplicity, this can be conceptually included in the illuminant matrix L. The actual process for subtractive color reproduction is much more complicated and cannot, in general, be comprehensively modeled by the equations described here. Hence, these systems are usually characterized by look-up tables (LUTs) that capture their input-output relationships empirically. The details of handling device characterizations via LUTs are described in [19] and [18].
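The density model of (8) is a one-liner once the matrices are set up. The illuminant and the three density spectra below are made-up smooth curves (loosely shaped like cyan, magenta, and yellow absorbers), used only to show the componentwise exponentiation of (9) in action.

```python
import numpy as np

# The simple subtractive model of (8): g = L * 10^(-D c), componentwise.
# The illuminant and density spectra are invented, not measured ink data.
N = 31
wl = np.linspace(400, 700, N)
L = np.diag(np.ones(N))                           # flat (perfectly white) illuminant
D = np.stack([np.exp(-((wl - p) / 50.0) ** 2)     # density spectra at maximum
              for p in (620, 530, 450)], axis=1)  # concentration: C, M, Y-like
c = np.array([0.5, 0.2, 0.8])                     # colorant concentrations in [0, 1]

g = L @ (10.0 ** (-(D @ c)))                      # observed spectrum, (8) and (9)
assert g.shape == (N,)
assert np.all(g > 0) and np.all(g <= 1.0)         # densities only remove light
```

Note how the model is multiplicative in transmission but additive in density (the exponent is D c), which is why density is the natural working quantity for subtractive systems.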

GAMUT

The range of colors that are physically obtainable by a color system is called the gamut of the device. For an additive device, like a CRT, the gamut is easily described. Since the colors are linear combinations of three primaries, the possible colors are those obtained from the relation

c = C_max β    (10)

where C_max is the matrix of tristimulus values for the maximum control value (usually 255) of each color gun and β is a vector whose values vary independently between zero and unity. The graphical representation of the gamut is most often done using the chromaticity coordinates for a fixed luminance. Figure 6 shows the gamut of a CRT monitor, a dye-sublimation printer, and the sRGB color space.

[FIG4] Additive color system (blue, green, and red guns; the red, green, and blue channel images combine into the full-color image).
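Equation (10) also gives a direct in-gamut test for an additive device: a tristimulus vector c is reproducible exactly when the solved β lies in the unit cube. The 3 × 3 C_max below is an invented example matrix of primary tristimulus values, not the calibration of any real monitor.

```python
import numpy as np

# In-gamut test via (10): c is displayable iff c = C_max @ beta for some
# beta with every component in [0, 1]. C_max here is illustrative only.
C_max = np.array([[41.2, 35.8, 18.0],
                  [21.3, 71.5,  7.2],
                  [ 1.9, 11.9, 95.0]])

def in_gamut(c, eps=1e-9):
    beta = np.linalg.solve(C_max, np.asarray(c, dtype=float))
    return bool(np.all(beta >= -eps) and np.all(beta <= 1.0 + eps))

assert in_gamut(C_max @ np.array([0.3, 0.3, 0.3]))        # mid-level mix: inside
assert not in_gamut(C_max @ np.array([1.5, 0.0, 0.0]))    # needs >100% of one gun
```

Subtractive devices admit no such closed-form test, since their forward model is nonlinear; that is one reason their gamuts are characterized empirically via LUTs.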

COLOR SPACES

The proper use and understanding of color spaces is necessary for the development of color image processing methods that are optimal for the human visual system. Many algorithms have been developed that process in an RGB color space without ever defining this space in terms of the CIE color matching functions, or even in terms of the spectral responses of R, G, and B. Such algorithms are nothing more than multichannel image processing techniques applied to a three-band image, since there is no accounting for the perceptual aspect of the problem. To obtain some relationship with the human visual system, many color image processing algorithms operate on data in hue, saturation, lightness (HSL) spaces. Commonly, these spaces are transformations of the aforementioned RGB color space and hence have no visual meaning until a relationship is established back to a CIE color space. To further confuse the issue, there are many variants of these color spaces, including hue saturation value (HSV), hue saturation intensity (HSI), and hue chroma intensity (HCI), some of which have multiple definitions in terms of transforming from RGB. Since color spaces are of such importance and a subject of confusion, we will discuss them in detail.

There are two primary aspects of a color space that make it more desirable and attractive for use in color devices: 1) its computational expediency in transforming a given set of data to the specific color space and 2) conformity of distances of color vectors in the space to that observed perceptually by a human subject, i.e., if two colors are far apart in the color space, they look significantly different to an observer with normal color vision. Unfortunately, these two criteria are antagonistic. The color spaces that are most suited for measuring perceptual differences require complex computation, and vice versa.

UNIFORM COLOR SPACES

It is well publicized that the psychovisual system is nonlinear and extremely complex. It cannot be modeled by a simple function. The sensitivity of the system depends on what is being observed and the purpose of the observation. A measure of sensitivity that is consistent with the observations of arbitrary scenes is well beyond our present capabilities. However, much work has been done to determine human color sensitivity in matching two color fields that subtend only a small portion of the visual field. In fact, the color matching functions (CMFs) of Figure 3 are more accurately designated by the solid angle of the field of view that was used for their measurement. A two-degree field of view was used for those CMFs.

It is well known that mean square error is, in general, a poor measure of error in any phenomenon involving human judgment. A common method of treating the nonuniform error problem is to transform the space into one where Euclidean distances are more closely correlated with perceptual ones. As a result, the CIE recommended, in 1976, two transformations in an attempt to standardize measures in the industry. Neither of these standards achieves the goal of a uniform color space. However, the recommended transformations do reduce the variations in the sensitivity ellipses by a large degree. In addition, they have another major feature in common: the measures are made relative to a reference white point. By using the reference point, the transformations attempt to account for the adaptive characteristics of the visual system. The first of these transformations is the CIELab space, defined by

[FIG5] Subtractive color system (white light passing through yellow, magenta, and cyan layers to form the final image).

L^* = 116 (Y/Y_n)^{1/3} − 16    (11)

a^* = 500 [ (X/X_n)^{1/3} − (Y/Y_n)^{1/3} ]    (12)

b^* = 200 [ (Y/Y_n)^{1/3} − (Z/Z_n)^{1/3} ]    (13)

for (X/X_n), (Y/Y_n), (Z/Z_n) > 0.01. The values X_n, Y_n, Z_n are the CIE tristimulus values of the reference white under the reference illumination, and X, Y, Z are the tristimulus values that are to be mapped to the CIELab color space. This maps the reference white to (L^*, a^*, b^*) = (100, 0, 0). The requirement that the normalized values be greater than 0.01 is an attempt to account for the fact that at low illumination the cones become less sensitive and the rods (monochrome receptors) become active. Hence, a linear model is used at low light levels. A second transformation is the CIELuv space, defined by

L^* = 116 (Y/Y_n)^{1/3} − 16    (14)

u^* = 13 L^* (u − u_n)    (15)

v^* = 13 L^* (v − v_n)    (16)

where

u = 4X / (X + 15Y + 3Z)    (17)

v = 9Y / (X + 15Y + 3Z)    (18)

u_n = 4X_n / (X_n + 15Y_n + 3Z_n)    (19)

v_n = 9Y_n / (X_n + 15Y_n + 3Z_n).    (20)

The u_n and v_n values represent the reference white, and the formula is good for (Y/Y_n) > 0.01. Errors between two colors c_1 and c_2 are measured in terms of

ΔE^*_ab = [ (L^*_1 − L^*_2)^2 + (a^*_1 − a^*_2)^2 + (b^*_1 − b^*_2)^2 ]^{1/2}    (21)

ΔE^*_uv = [ (L^*_1 − L^*_2)^2 + (u^*_1 − u^*_2)^2 + (v^*_1 − v^*_2)^2 ]^{1/2}.    (22)
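The CIELab mapping of (11)-(13) and the Euclidean difference of (21) can be sketched compactly. The white point used below is a D65-like example, the low-light linear branch (normalized values below 0.01) is omitted, and the input values are illustrative only.

```python
import math

# CIELab from tristimulus values per (11)-(13), and the color difference
# of (21). Assumes all normalized values exceed 0.01 (cube-root branch only);
# the white point is a D65-like example, not a universal constant.
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    Xn, Yn, Zn = white
    fx = (X / Xn) ** (1 / 3)
    fy = (Y / Yn) ** (1 / 3)
    fz = (Z / Zn) ** (1 / 3)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e_ab(lab1, lab2):
    return math.dist(lab1, lab2)   # Euclidean distance, as in (21)

white = xyz_to_lab(95.047, 100.0, 108.883)
assert all(abs(v - r) < 1e-9 for v, r in zip(white, (100.0, 0.0, 0.0)))
assert delta_e_ab(white, white) == 0.0
```

The first assertion checks the property stated above: the reference white maps to (L^*, a^*, b^*) = (100, 0, 0).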

The CIE has since updated ΔE^*_ab with a new weighted version, which is designated ΔE^*_94 [24]. The new measure weights the hue and chroma components in the ΔE^*_ab measure by a function of chroma. Specifically, the measure is given by

ΔE^*_94 = [ (ΔL^*/(k_L S_L))^2 + (ΔC^*_ab/(k_C S_C))^2 + (ΔH^*_ab/(k_H S_H))^2 ]^{1/2}    (23)

where

S_L = 1
S_C = 1 + 0.045 C^*_ab
S_H = 1 + 0.015 C^*_ab    (24)

C^*_ab = [ (a^*)^2 + (b^*)^2 ]^{1/2}    (25)

ΔH^*_ab = [ (ΔE^*_ab)^2 − (ΔL^*)^2 − (ΔC^*_ab)^2 ]^{1/2}    (26)

and for reference conditions

k_L = k_C = k_H = 1.    (27)

The chroma term, C^*_ab, used in (24) can be taken as a reference color against which many comparisons are made. If only a single color difference is computed between two samples, then the geometric mean of the C^*_ab values of the samples is used.

The just noticeable difference thresholds of ΔE^*_ab and ΔE^*_94 are much lower in the experimental setting of psychovisual experiments than in pictorial scenes. Color difference measures, such as the ΔEs in the sCIELab space [23], have been developed to provide an improved measure for pictorial scenes.
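Putting (23)-(27) together gives a short ΔE^*_94 routine. Reference conditions k_L = k_C = k_H = 1 are assumed, the geometric-mean reference chroma mentioned above is used, and the sample CIELab values are arbitrary.

```python
import math

# Weighted color difference of (23)-(27) between two CIELab triples.
# Assumes reference conditions kL = kC = kH = 1; sample values are arbitrary.
def delta_e_ab(lab1, lab2):
    return math.dist(lab1, lab2)              # Euclidean distance of (21)

def delta_e_94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    dL = lab1[0] - lab2[0]
    C1 = math.hypot(lab1[1], lab1[2])         # chroma per (25)
    C2 = math.hypot(lab2[1], lab2[2])
    dC = C1 - C2
    dE = delta_e_ab(lab1, lab2)
    dH2 = max(dE ** 2 - dL ** 2 - dC ** 2, 0.0)   # (delta H*_ab)^2 from (26)
    Cref = math.sqrt(C1 * C2)                 # geometric-mean reference chroma
    SL, SC, SH = 1.0, 1.0 + 0.045 * Cref, 1.0 + 0.015 * Cref
    return math.sqrt((dL / (kL * SL)) ** 2
                     + (dC / (kC * SC)) ** 2
                     + dH2 / (kH * SH) ** 2)

lab1, lab2 = (50.0, 10.0, 10.0), (52.0, 12.0, 8.0)
# Since SC, SH >= 1 and SL = 1, the weighted measure never exceeds delta E*ab.
assert delta_e_94(lab1, lab2) <= delta_e_ab(lab1, lab2)
```

The inequality checked at the end reflects the intent of the weighting: chroma and hue differences between saturated colors are down-weighted, since the eye tolerates them more readily there.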

The uniform color spaces can be utilized as a means to display the gamut of a particular device, as shown in Figure 7 for a dye-sublimation printer and a CRT. The CIELab and CIELuv color spaces can be used to visualize color attributes such as hue, chroma, and lightness. These perceptual color terms, as well as saturation (which is frequently confused with chroma), are defined in Wyszecki and Stiles [22, p. 487]. Figure 8 displays a three-dimensional plot of the CIELuv space and quantities that correlate with these attributes. A similar figure can be generated for CIELab.

[FIG6] Color gamuts shown in chromaticity space (x-y chromaticity diagram comparing the sRGB, dye-sublimation printer, and Apple monitor gamuts).

DEVICE INDEPENDENT (UNRENDERED) AND DEPENDENT SPACES

The terms “device independent” (DI) and “device dependent” (DD) color spaces are frequently used in problems dealing with accurate color recording and reproduction. A color space is defined to be DI if there exists a nonsingular transformation between the color space and the CIEXYZ color space. These spaces are also called “unrendered” since the values in the space describe the colorimetry and are not used to drive an output device. If there is no such transformation, then the color space is a DD color space. DD spaces are typically related to some particular output or input device. They are often used for the simple reason that most output and input devices report or accept values that are in a DD space. A simple example is the eight-bit-per-channel RGB scanner that provides values contained in a cube defined by the points (0, 0, 0) and (255, 255, 255). This cube is a DD color space. The space, however, is still useful since the DD RGB values can be sent directly to a monitor or printer for image display. However, it is often the case that the output image will not look like the original scanned image.

STANDARD DD (RENDERED) SPACES

To maintain the simplicity of the DD space but provide some degree of matching across input and output devices, standard DD color spaces have been defined. An example of such a color space is sRGB [20]. These spaces are well defined in terms of a DI space. As such, a device manufacturer can design an input or output device such that, when given sRGB values, the proper DI color value is displayed. Because the values in these spaces can be used to drive output devices, they are often referred to as rendered spaces.

COLOR MEASUREMENT INSTRUMENTS

To measure color accurately, a given instrument must have carefully controlled lighting conditions, optical geometry, and sensors (filters and detectors). If the color of a reflective material is to be measured, the illumination must be precisely known and matched with the sensors. Media properties such as specular reflectance and translucence should be taken into account to achieve accurate results. However, if only the tristimulus values or some derivative of them, such as L∗a∗b∗ values, are desired, a colorimeter may be used. This device has spectral sensitivities designed to match the vector space defined by the eye. Thus, the measurement is limited to a given set of viewing conditions. As a result, the colorimeter needs to measure only the quantities that can be transformed into tristimulus values. The minimal system consists of three filters and three detectors, or changeable filters and a single detector. Colorimeters are much less expensive than devices designed to measure the entire spectrum. The most complete information is obtained by measuring the entire visible spectrum, thereby allowing the user to compute tristimulus values under any viewing condition. This is achieved by using a spectroradiometer to measure radiant spectra or a spectrophotometer to measure reflective spectra. To measure the spectrum, the light must be physically dispersed. This can be done with a prism, as Newton did in his famous early experiments, or with optical gratings, which are more precise and compact than prisms.
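The computation that full-spectrum instruments enable, combining illuminant, reflectance, and color-matching functions into tristimulus values, can be sketched as a numerical integration. The Gaussian curves below are illustrative stand-ins, not the real tabulated CIE color-matching functions:

```python
import numpy as np

# Numerical tristimulus integration over the visible range:
# X = k * sum(xbar * S * R), and similarly for Y and Z.
lam = np.arange(380.0, 781.0, 5.0)             # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

xbar = gauss(595, 35) + 0.35 * gauss(445, 20)  # stand-in for CIE x-bar
ybar = gauss(555, 45)                          # stand-in for CIE y-bar
zbar = 1.7 * gauss(450, 25)                    # stand-in for CIE z-bar

S = np.ones_like(lam)                          # equal-energy illuminant
R = gauss(620, 40)                             # a reddish reflectance spectrum

k = 100.0 / np.sum(ybar * S)                   # perfect white maps to Y = 100
X = k * np.sum(xbar * S * R)
Y = k * np.sum(ybar * S * R)
Z = k * np.sum(zbar * S * R)
```

With the measured spectrum R in hand, the same sums can be recomputed for any illuminant S, which is exactly the flexibility a colorimeter cannot offer.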

[FIG8] CIELuv color space and its relationship to commonly used color perception terms.

[FIG7] Color gamuts shown in CIELab space: monitor grid, printer solid.


IEEE SIGNAL PROCESSING MAGAZINE [21] JANUARY 2005



To this effect, a system of lenses is used to focus the spectrum onto the detector to obtain the measurements. After the spectrum is spread, the intensity of the light at each wavelength is measured accurately. This can be done in several ways. The most common method currently is to use a linear CCD array. A movable slit can also be used to limit the wavelength band being measured. The exact spread of the spectrum on the array is determined by measuring a known source. Interpolation methods are then used to generate the data at the specified wavelengths. The high-quality optics required for this task greatly increase the cost of these color measurement instruments.
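The calibration-and-interpolation step can be sketched as follows; the pixel-to-wavelength mapping and the raw counts below are made-up illustrative values, not real instrument data:

```python
import numpy as np

# Resampling a CCD spectral measurement onto standard reporting wavelengths.
# The pixel-to-wavelength calibration would come from measuring a known
# source; here it is simply an assumed linear mapping.
pixel_wavelengths = np.linspace(378.3, 782.9, 256)   # calibrated pixel positions, nm
counts = np.exp(-0.5 * ((pixel_wavelengths - 560.0) / 80.0) ** 2)  # raw CCD readings

report_wavelengths = np.arange(380.0, 781.0, 10.0)   # standard 10-nm grid
spectrum = np.interp(report_wavelengths, pixel_wavelengths, counts)
```

Linear interpolation suffices here because the CCD samples far more densely than the reporting grid; coarser instruments may need higher-order methods.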

AUTHORS

H. Joel Trussell received the B.S. degree from Georgia Institute of Technology in 1967, the M.S. degree from Florida State in 1968, and the Ph.D. degree from the University of New Mexico in 1976. He joined the Los Alamos Scientific Laboratory in 1969. During 1978-1979, he was a visiting professor at Heriot-Watt University, Edinburgh, Scotland. In 1980, he joined the Electrical and Computer Engineering Department at North Carolina State University, Raleigh, where he is professor and director of graduate programs. His research has been in estimation theory, signal and image restoration, and image reproduction. He was an associate editor for IEEE Transactions on Acoustics, Speech, and Signal Processing and IEEE Signal Processing Letters. He was a member and past chair of the Image and Multidimensional Digital Signal Processing Committee of the IEEE Signal Processing Society and was on the Board of Governors. He is a Fellow of the IEEE and the corecipient of the IEEE-ASSP Society Senior Paper Award and the IEEE-SP Society Paper Award.

Eli Saber is an associate professor in the Electrical Engineering Department at the Rochester Institute of Technology. From 1988 to 2004, he was with Xerox. He received the B.S. degree from the University of Buffalo in 1988 and the M.S. and Ph.D. from the University of Rochester in 1992 and 1996, respectively, all in electrical engineering. From 1997 until 2004, he was an adjunct faculty member at the Electrical Engineering Department of the Rochester Institute of Technology and at the Electrical and Computer Engineering Department of the University of Rochester. He is a Senior Member of the IEEE and a member of the Electrical Engineering Honor Society, Eta Kappa Nu, and the Imaging Science and Technology Society. He is an associate editor for IEEE Transactions on Image Processing, the Journal of Electronic Imaging, and the IEEE Signal Processing Magazine Applications column, and guest editor for the special section on color image processing for IEEE Signal Processing Magazine. He is chair of the Technical Committee on Industry DSP Technology. He was finance chair for the 2002 International Conference on Image Processing and general chair for the 1998 Western New York Imaging Workshop.

Michael Vrhel is a distinguished engineer at Conexant Systems, Redmond, Washington. He graduated summa cum laude from Michigan Technological University with a B.S. in 1987 and received the M.S. degree in 1989 and the Ph.D. in 1993 from North Carolina State University, all in electrical engineering. From 1993 to 1996, he was a National Research Council research associate at the National Institutes of Health (NIH), Bethesda, Maryland, where he researched biomedical image and signal processing problems. In 1996, he was a Senior Staff Fellow with the Biomedical Engineering and Instrumentation Program at NIH. From 1997 to 2002, he was the senior scientist at Color Savvy Systems Limited, Springboro, Ohio. From 2002 to 2004, he was the senior scientist at ViewAhead Technology in Redmond, Washington. He has two patents and several pending. He has published more than 40 refereed journal and conference papers. He is a Senior Member of the IEEE and a member of SPIE. He is a guest editor for IEEE Signal Processing Magazine. He was a conference session chair for ICIP 2002, ICIP 2000, and SPIE Wavelet Applications in Signal and Image Processing IV, 1996.

REFERENCES

[1] M. Vrhel, E. Saber, and H.J. Trussell, “Color image generation and display technologies,” IEEE Signal Processing Mag., vol. 22, no. 1, pp. 23–33, Jan. 2005.

[2] F.A. Baqai, J.-H. Lee, A.U. Agar, and J.P. Allebach, “Digital color halftoning,” IEEE Signal Processing Mag., vol. 22, no. 1, pp. 87–96, Jan. 2005.

[3] R. Ramanath, W.E. Snyder, Y. Yoo, and M.S. Drew, “Color image processing pipeline,” IEEE Signal Processing Mag., vol. 22, no. 1, pp. 34–43, Jan. 2005.

[4] R. Bala and G. Sharma, “System optimization in digital color imaging,” IEEE Signal Processing Mag., vol. 22, no. 1, pp. 55–63, Jan. 2005.

[5] B.K. Gunturk, J. Glotzbach, Y. Altunbasak, R.W. Schafer, and R.M. Mersereau, “Demosaicking: Color filter array plane interpolation,” IEEE Signal Processing Mag., vol. 22, no. 1, pp. 44–54, Jan. 2005.

[6] A. Koschan and M. Abidi, “Detection and classification of edges in color images,” IEEE Signal Processing Mag., vol. 22, no. 1, pp. 64–73, Jan. 2005.

[7] R. Lukac, B. Smolka, K. Martin, K.N. Plataniotis, and A.N. Venetsanopoulos, “Vector filtering for color imaging,” IEEE Signal Processing Mag., vol. 22, no. 1, pp. 74–86, Jan. 2005.

[8] J.B. Cohen and W.E. Kappauf, “Metameric color stimuli, fundamental metamers, and Wyszecki’s metameric blacks,” Amer. J. Psychol., vol. 95, no. 4, pp. 537–564, Winter 1982.

[9] B.K.P. Horn, “Exact reproduction of colored images,” Computer Vision, Graphics, and Image Processing, vol. 26, pp. 135–167, 1984.

[10] K.N. Plataniotis and A.N. Venetsanopoulos, Color Image Processing and Applications. Heidelberg: Springer, 2000.

[11] G. Sharma and H.J. Trussell, “Digital color processing,” IEEE Trans. Image Processing, vol. 6, no. 7, pp. 901–932, July 1997.

[12] G. Sharma, Ed., Digital Color Imaging Handbook. Boca Raton, FL: CRC Press, 2003.

[13] H.J. Trussell, “Applications of set theoretic methods to color systems,” Color Res. Appl., vol. 16, no. 1, pp. 31–41, Feb. 1991.

[14] H.J. Trussell, “DSP solutions run the gamut for color systems,” IEEE Signal Processing Mag., vol. 10, no. 2, pp. 8–23, Apr. 1993.

[15] B.A. Wandell, “The synthesis and analysis of color images,” IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, no. 1, pp. 2–13, Jan. 1987.

[16] B.A. Wandell, Foundations of Vision. Sunderland, MA: Sinauer Assoc., 1995.

[17] P.L. Vora, J.E. Farrell, J.D. Tietz, and D.H. Brainard, “Image capture: Simulation of sensor responses from hyperspectral images,” IEEE Trans. Image Processing, vol. 10, no. 2, pp. 307–316, Feb. 2001.

[18] M.J. Vrhel and H.J. Trussell, “Color printer characterization in MATLAB,” in Proc. ICIP 2002, vol. 1, Sept. 2002, pp. 457–460.

[19] H.R. Kang, Color Technology for Electronic Devices. Bellingham, WA: SPIE Press, 1997.

[20] M. Anderson, R. Motta, S. Chandrasekar, and M. Stokes, “Proposal for a standard default color space for the internet—sRGB,” in Proc. IS&T/SID 4th Color Imaging Conf.: Color Science, Systems, and Applications, Nov. 1996, pp. 238–246.

[21] H.B. Barlow and J.D. Mollon, The Senses. Cambridge, U.K.: Cambridge Univ. Press, 1982.

[22] G. Wyszecki and W.S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. New York: Wiley, 1982.

[23] X. Zhang and B.A. Wandell, “A spatial extension of CIELAB for digital color image reproduction,” in Proc. SID Symp., 1996, pp. 731–734. [Online]. Available: http://white.stanford.edu/~brian/scielab/scielab.html

[24] CIE, “Industrial colour difference evaluation,” Tech. Rep. 116-1995. [SP]