[dan_e._dudgeon,_russell_m._mersereau]_multidimens(bookos.org).pdf

Download [Dan_E._Dudgeon,_Russell_M._Mersereau]_Multidimens(Bookos.org).pdf

If you can't read please download the document

Upload: jimparker

Post on 02-Jan-2016

64 views

Category:

Documents


3 download

DESCRIPTION

Two-dimensional Digital Signal Processing

TRANSCRIPT

  • CONTENTS

    PREFACE

    INTRODUCTION

    1. MULTIDIMENSIONAL SIGNALS AND SYSTEMS

    1.1 Two-Dimensional Discrete Signals 6 1.1.1 Some Special Sequences, 7 1.1.2 Separable Sequences, 8 1.1.3 Finite-Extent Sequences, 9 1.1.4 Periodic Sequences, 9

    1.2 Multidimensional Systems 12 1.2.1 Fundamental Opel'ations on Muhidimensional Signals, 12 1.2.2 Linear Systems, U 1.2.3 Shift-lnvmiant Systems, 14 1.2.4 Linear Shift-Invariant Systems, 15 1.2.5 Cascade and Parallel Connections of Systems, 20

    xiil

    1

    5

    vii

  • viii Contents

    1.2.6 Separable Syste1m, 22 1 .2. 7 Stable Systems, 23 1 .2.8 Regions of Suppon, 23

    . 1 .2.9 Vector lnput-Outpllt Systems, 25 1.3 Frequency Domain Characterization of Signals and Systems 26

    1.3.1 Frequency~ qf a 2-D LSI System, 26 1.3.2 Determining rite Impulse Response from the Frequency Response, 29 1.3.3 Multidimensional Fourier Transform, 31 1.3.4 Other Prof*1ies of the 2-D Fourier Transform, 33

    1.4 Sampling Continuous 2-D Signals 36 1 .4.1 Periodic Sampling with Rectangular Geometry, 36 1 .4.2 Periodic Sampling with Arbitrary Sampling Geometries, 39 1 .4.3 Comparison of Rectangular and Hexagonal Sampling, 44

    1.6 Processing Continuous Signals with Discrete Systems 47 1.5.1 Relationship between the System Input and Output Signals, 48 1 .5.2 System Frequency Response, 49 1.5.3 Alternative Definition of the Fourier Transform/or Discrete Signals, 50

    2. DISCRETE.FOURIER ANALYSIS OF MULTIDIMENSIONAL SIGNALS

    2.1 Discrete Fourier Series Representation of Rectangularly Periodic Sequences 61

    2.2 Multidimensional Discrete Fourier Transform 63 2.2.1 Definitions, 63 2.2.2 Properties of the Discrete Fourier Transform, 67 2.2.3 Circular Convolution, 70

    2.3 Calculation of the Discrete Fourier Transform 74 2.3.1 Direct Calculation, 75 2.3.2 Row-Column Decompositions, 75 2.3.3 Vector-Radix Fast Fourier Transform, 76 2.3.4 Computational Considerations in DFT Calculations, 81

    2.4 Discrete Fourier Transformsjor General Periodically Sampled Signals 87 2.4.1 DFT Relations for General Periodically Sampled Signals, 87 2.4.2 Fast Fourier Transform Algorithms for General Periodically Sampled

    Signals, 90 2.4.3 Some Special Cases. 96

    80

  • Contents

    *2.5 Interrelationship between M-dimensional and One-Dimensional DFTs 100 2.5.1 Slice DFI', 101 2.5.2 Good's Prime Factor Algorithm for Decomposing a 1-D DFT, 103

    3. DESIGN AND IMPLEMENTATION OF TWO-DIMENSIONAL FIR FILTERS

    3.1 FIR Filters 112 3.2 Implementation of Fl R Filters 113

    3.2.1 Direct Convolution, 113 3.2.2 Discrete Fourier Transform Implementations of FIR Filters, 114 3.2.3 Block Convolution, 116

    3.3 Design of FIR Filters Using Windows 118 3.3.1 Description of the Method, 118 3.3.2 Choosing the Window Function, 119 3.3.3 Design Example, 120 3.3.4 Image Processing Example, 124

    *3.4 Optimal Fl R Filter Design 126 3.4.1 Least-Squares Designs, 128 3.4.2 Design of Zero-Phase Equiripple FIR Filters, 130

    3.5 Design of FIR Filters for Special Implementations 132 3.5.1 Cascaded FIR Filters, 132 3.5.2 Parallel FIR Filters, 134 3.5.3 Design of FIR Filters Using Transformations, 137 3.5.4 Implementing Filters Designed Using Transformations, 144 3.5.5 Filters with Small Generating Kernels, 148

    *3.6 FIR Filters for Hexagpnally Sampled Signals 149 3.6.1 Implementation of Hexagonal FIR Filters, 150 3.6.2 Design of Hexagonal FIR Filters, 150

    4. MULTIDIMENSIONAL RECURSIVE SYSTEMS

    4.1 Finite- Order Difference Equations 163 4.1.1 Realizing LSI Systems Using Difference Equations, 163 4.1.2 Recursive Computability, 164 1.1.3 Boundtzry Conditions, 168 4.1 .4 Ordering the Computation of Output Samples, 171

    lx

    112

    162

  • X

    4.2 Multidimensional z-Transforms 174 4.2.1 Transfer Function, 174 4.2.2 The z-Transform, 175 4.2.3 Properties of the 2-D z-Transform, 180 4.2.4 Transfer Functions of Systems Specified by

    Difference Equations, 182 4.2.5 Inverse z-Transform, 186 4.2.6 Two-Dimensional Flowgraphs, 187

    4.3 Stability of Recursive Systems 189 4.3.1 Stability Theorems, 190

    *4.3.2 Stability Testing, 193 4.3.3 Effect of the Numerator Polynomial on Stability, 196 4.3.4 Multidimensional Stability Theorems, 197

    4.4 Two-Dimensional Complex Cepstrum 198 4.4.1 Definition of the Complex Cepjtrum, /98 4.4.2 Existence of the Complex Cepstrum, 199 4.4.3 Causality, Minimum Phase, and the Complex Cepstrum, 201 4.4.4 Spectral Factorization, 202

    4.4.5 Computing the 2-D Complex Cepstrum, 205

    5. DESIGN AND IMPLEMENTATION OF TWO-DIMENSIONAL IIR FILTERS

    5.1 Classicai2-D IIR Filter Implementations 218 5.1.1 Direct Form Implementations, 219 5./.2 Cascade and Parallel Implementations, 221

    5.2 Iterative lmplementMions for 2-D IIR Filters 224 5.2.1 Basic Iterative Implementation, 225 5.2.2 Generalizations of the Iterative Implementation, 228

    *5.2.3 Truncation, Boundary Con:itions, and Signal Constraints, 230

    5.3 Signal Flowg~phs and State-Variable Realizations 234 5.3.1 Circuit Elements and Their Realization, 234 S.3.2 Minimizing the Number of Shift Operators, 238 5.3 .3 State- V 'iuiab/e Realizations, 240

    5.4 Space-Domafn Design Techniques 244 5.4.1 Shanks's Method, 246 5.4.2 Descent Methodsfor Space-Domain Design, 248 5.4.3 lt.erative Pre.filtering Design Method, 250 .

    Contents

    218

  • Contents

    5.5 Frequency-DOO'Iain Design Techniques 263 5.5.1 General Minimization Procedures, 153 5.5.2 Magnitude-and Magnitude-Squared Desip Algorithms, 255 5:-:5.3 Magnitude Design with a Stability Constraint,15S 5.5.4 Zero-Phase fiR Frequency-Domain Dtsign Methods, 256 5.5.5 Frequency Transformations, 159

    5.6 Design Techniques for Specialized Structures 262 5.6.1 Cascade Designs, 262 5.6.2 Separable Denominator Designs, 261

    *5.6.3 Lattice Structures, 266 s.7 Stabilization Techniques 276

    5.7.1 Cepstra/ Stabilization, 176 5.7 .2 Shaw's Stabilization Technique, 277

    8. PROCESSING SIGNALS CARRIED BY PROPAGATING WAVES

    6.1 Analysis of Space-Time Signals 290 6.1.1 Elemental Signals, 290 6.1.2 Filtering in Wavenumber-Frequency Space, 291

    6.2 Beamforming 293 6.2.1 Weighted Delay..andSum Beamformer, 293 6.2.2 Array Pattern, 294 6.2.3 Example of an Array Pattern, 297 6.2.4 Effect of the Receiver Weighting Function, 299 6.2.5 Filter-and-Sum Beamforming, 300 6.2.6 Frequency-Domain Beamforming, 301

    6.3 Discrete-Time Beamforming 303 6.3.1 Time-Domain Beamformingfor Discrete-Time Signals, 304 6.3.2 Interpolation Beamforming, 307 6.3.3 Frequency-Domain Beamforming for Discrete-Time. Signals, 309

    6.4 Further Considerations for Array Processing Applications 311 6.4.1 Analysis of a Narrowband Beamformer, 312

    6.5 Multidimensional Spectral Estimation 315 6.5.1 Classical Spectra/ Estimation; 316 6.5.2 High-Resolution Spectral Estimation, 321 6.5.3 All-Pole Spectral Modeling, 325

    *6.5.4 Maximum Entropy Spectra/ Estimation, 331 *6.5.5 Extendibility, 338

    xi

    289

  • xii

    7. INVERSE PROBLEMS

    7.1 Constrained Iterative Signal Restoration 349 7.1.1 Iterative Procedures for Constrained Deconvolution, 350 7.1.2 Iterative Procedures for Signal Extrapolation, 354 7.1 .3 Reconstructions from Phase or Magnitude, 356

    7.2 Seismic Wave Migration 359 7.3 Reconsttuction of Signals from Their Projections 363

    7 .J.l Pr;djections, 363 1 .3.2 f,iojection-S/ice Theorem, 366 7 .Jj . f!Jiscretization of the Rec(Jnstruction Problem, 367 7 .J.J fiR/rler-Domain Reconstruction Algorithms, 369 7,3;5 Convolution/Back-Projection Algorithm, 373 7.1.6 Iterative Reconstruction .A,lgorithms, 376

    *7.3.7 Ftt~~-Beam Reconstructions, 376 7.4 Profe..1ion of Discrete Signals 379

    INDEX

    Contents

    348

    391

  • INTRODUCTION

    One of the by-products of the computer revolution has been the emergence of completely new fields of study. Each year, as integrated circuits have become faster, cheaper, and more compact, it has become possible to find feasible solutions to problems of ever-increasing complexity. Because it demands massive amounts of digital storage and comparable quantities of numerical computation, multidimen-sional digital signal processing is a problem area which has only recently begun to emerge. Despite this fact, it has already provided the solutions to important problems ranging from computer-aided tomography (CAT), a technique for combin-ing x-ray projections from different orientations to create a three-dimensional recon-struction of a portion of the human body, to. the design of passive sonar arrays and the monitoring of the earth's resources by satellite. In addition to its many glamorous and humble applications, however, multidimensional digital signal processing also possesses a firm mathematical foundation, which allows us not only to understand what has already been accompJished, but also to explore rationally new problem areas and solution methods as they arise.

    Simply stated, a signal is .any medium for conveying information, and signal processing is concerned with the extraction of that information. Thus ensembles of time-varying voltages, the density of silver grains on a photographic emulsion, or lists of numbers in the memory of a computer all represent examples of signals. A typical signal processing task involves the transfer of information from one signal

    1

  • 2 Introduction

    to another. A photograph, for example, might be scanned, sampled, and stored in the memory of a computer. In this case, the information is transferred frQm a variable silver density, to a beam of visible light, to an"'electr.ical waveform, and finally to a sequence of numbers, which, in turn, are represented by an arrangement of magnetic domains on a computer disk. The CAT scanner is a more complex example; infor~ mation about the structure of an unknown object is first transferred to a series of electromagnetic waves, which are then sampled to produce an array of numbers, which, in tum, are processed by a computational algorithm and finally displayed on the phosphor of a cathode ray tube (CR'D screen or on photographic film. The digital processing which is done cannot add to the information, but it can rearrange it so that a human observer can more readily interpret it; instead of looking at multiple shadows the observer is able to look at a cross-sectional view.

    Whatever their form, signals are of interest only because of the information they contain. At the risk of overgeneralizing we might say that signal processing is concerned with two basic tasks-information rearrangement and information reduction. We have already seen two examples of information rearrangement--computer-aided tomography and image scanning. To those we could easily add other examples: image enhancement, image deblurring, spectral analysis, and so on. Information reduction is concerned with the removal of extraneous information. Someone obsel'\'ing radar returns is generally interested in only a few bits of infor-mation, specifically, the answer to such questions as: Is anything there? If so, what? Friend or foe? How fast is it going, and where is it headed? However, the receiver is also giving the observer information about the weather, chaff, birds, nearby build-ings, noise in the receiver, and so on. The observer must separate the relevant from the irrelevant, and signal processing can help. Other examples of information-lossy

    signal processing operations include noise removal, parameter estimation, and feature extraction.

    Digital signal processing is concerned with the processing of signals which can be represented as sequences of numbers and multidimensional digital signal processing is, more specifically, concerned with the processing of signals which can be represented as multidimensional arrays, such as sampled images or sampled time waveforms which arc received simultaneously from several sensors. The restric-tion to digital signals permits processing with digital hardware, and it permits signal processing operators to be specified as algorithms or procedures.

    The motivations for Jooking at digital methods hardly need to be enumerated. Digital methods are simultaneously powerful and flexible. Digital systems can be designed to be adaptive and they can be made to be easily reconfigured. Digital algorithms can be readily transported from the equipment of one manufacturer to another or they can be implemented with special-purpose digital hardware. They can be used equally well to process signals that originated as time functions or as spatial functions and they interface naturally with lQ_gical operators such as pattern classifiers. Digital signals can be stored indefinitely witflouterror. For many applications, digital methods may be cheaper than the alternatives, and for others there may simply be no alternatives.

  • Introduction 3

    Is the processing of multidimensional signals that different from the processing of one-dimensional ones? At an abstract level, the answer is no. Many operations that we might want to perform on multidimensional sequences are also performed on one-dimensional ones-sampling, filtering, and transform computation, for example. At a closer level, however, we would be forced to say that multidimensional signal processing can be quite different. This is due to three factors: (I) two-dimen-sional problems generally involve considerably more data than one-dimensional ones; (2) the mathematics for handling multidimensional systems is less complete than the mathematics for handling one-dimensional systems; and (3) multidimen-sional systems have many more degrees of freedom, which give a system designer a flexibility not encountered in the one-dimensional case. Thus, while all recursive digital filters are implemented using difference equations, in the one-dimensional case these difference equations are totally ordered, whereas in the multidimensional case they are only partially ordered. Flexibility can be exploited. In the one-dimen-sional case, the discrete Fourier transform (DFT) can be evaluated using the fast Fourier transform (FFT) algorithm, whereas in the multidimensional case, there are a host of DFTs and each can be evaluated using a host of FFT algorithms. In the one-dimensional case, we can adjust the rate at which a bandlimited signal is sampled; in the multidimensional case, we can adjust not only the rate, but also the geometric arrangement of the samples. On the other hand, multidimensional polynomials cannot be factored, whereas one-dimensional ones can. Thus, in the multidimen-sional case, we cannot talk about isolated poles, zeros, and roots. Multidimensional digital signal processing can be quite different from one-dimensional digital signal processing.

    In the early 1960s, many of the methods of one-dimensional digital signal processing were developed with the intention of using the digital systems to simulate analog ones. As a result, much of discrete systems theory was modeled after analog systems theory. In time, it became recognized that. while digital systems could simulate analog systems very well, they could also do much m()re. With this awareness and a strong push from the technology of digital hardware, the field has blossomed and many of the methods in common use today have no analog equivalents. The same trend can be observed in the development of multidimensional digital signal processing. Since there is no continuous-time or analog two-dimensional systems theory to imitate, early multidimensional systems were based on one-dimensional systems. In the late 1960s, most two-dimensional signal processing was performed using separable two-dimensional systems, which are little more than one-dimensional systems applied to two-dimensional data. In time, uniquely multidimensional algo-rithms were developed which correspond to logical extrapolations of one-dimensional algorithms. This period was one of frustration. The volume of data demanded by many two-dimensional applications and the absence of a factorization theorem for two-dimensional polynomials meant that many one-dimensional methods did not generalize well. Chronologically, we are now at the dawn of the age of awareness. The computer industry, by making components smaller and cheaper, has helped to solve the data volume problem and we are recognizing that, although we will always

  • 4 Introduction

    have the problem of limited mathematics, multidimensional systems also give us new freedoms. These combine to make the field both challenging and fun.

    In this book we summarize many of the advances that have taken place in th1s exciting and,Tapidly growing field. The area is one that has evolved with technology. Although we do describe many applications of our material, we have tried not to make it too technology dependent, lest it become technologically obsolete. Rather, we emphasize fundamental concepts so that the reader will not only understand what has been done but will also be able to extend those methods to new applications.

    To accomplish all of this, it is necessary to assume some background on the part of the reader. Specifically, we assume that the reader is familiar with one-dimen-sional linear systems theory and has a basic understanding of one-dimensional digital signal processing (at the level of Oppenheim and Schafer [l), Chaps. 1-6).

    In this book our interest is in the processing of all signals of dimensionality greater than or equal to 2. Whereas there is a substantial difference between the theories for the processing of one- and two-dimensional signals, there seems to be little difference between the two-dimensional and higher-dimensional cases, except for the issue of computational complexity. To avoid cluttering up the discussions, equations, and figures of the book, we therefore state the majority of our results only for the two-dimensional case, which is the most prevalent one in applications. In most cases, the generalizations are straightforward, and when they are not, they will be explicitly given. In a similar spirit, we do not belabor results that are obvious generalizations of the one-dimensional case.

    We ho~ the reader will find what we found when we first became involved in the area of multidimensional digital signal proces

  • 1

    MUL TIDIMEN SIO NAL S1G NALS AND SYSTEMS

    A multidimensional signal can be modeled as a function of M independent variables, where M > 2. These signals may be classified as continuous, discrete, or mixed. A continuous signal can ~ modeled as a function of independent variables which range over a continuum of values. For example, the intensity l(x, y) of a photographic image is a two-dimensional continuous signal. A discrete signal, on the other hand, can be modeled as a function defined only on a set of points, such as the set of integers. A mixed signal is a multidimensional signal that is modeled as a function of some continuous variables and some discrete ones. For example, an ensemble of time waveforms recorded from an array of electrical transducers is a mixed signal. The ensemble can be modeled with one continuous variable, time, and one or more dis-crete variables to index the transducers.

    In this .chapter we are concerned primarily with multidimensional discrete signals and the systems that can operate on them. Most of the properties of signals and systems that we will discuss are simple extensions of the properties of one-dimensional discrete signals and systems and therefore, most of our discussions wi11 be brief. The reader who desires further details is referred to one of several excellent textbooks that tover the one-dimensional case [l-3]. It will become apparent, however, that many familiar one.dimensional procedures do not readily generalize to the multidimensional case and that many important issues associated with multidimen-

    6

  • 6 Multidimensional Signals and Systems Chap. 1

    sional signals and systems do not appear in the one-dimensional special case. [n these cases, our treatments will necessarily be more complete.

    1.1 TWO-DIMENSIONAL DISCRETE SIGNALS

    A two-dimensional (2-0) discrete signal (also referred to as a sequence or array) is a function defined over the set of ordered pairs of integers. Thus

    x = {x{n1 , 112.), -oo < 11 1 ,112 < oo} (1.1) A single element from the sequence will be referred to as a sample. Thus x(11 1, n2.) represents the sample of the sequence x at the point (11 1, n;a). Sample values can be real or complex. On occasion, if 111 and 11 2 are interpreted as variables, a reference to x(n 1, 11 2) will be interpreted as a reference to the entire sequence. Although this convention is abusive, it has become commonplace in the engineering literature and it should cause no confusion.

    It may be _helpful, on occasion, to regard the signal x as the coll~tion of its samples rather than simply as a function that is evaluated at integer values of its arguments. With this interpretation, there is no temptation to define x for values of 11 1 and 112 other than integers. A 2-D sequence is depicted graphically in Figure l.l.

    Figure 1.1 Graphical representation of a two-dimensional sequ~

    Two-dimensional sequences, as we have defined them, extend infinitely far since 11 1 and n2 may take on any integer values. In practice, however, most 2-D sequences have sample values which are known only over a finite region in the (rat> 112)-plane. For example, when a black-and-white photograph is scanned, samples ar.e not taken beyond the edges of the photograph. Rather than restrict the domain of definition of the resulting 2-D sequence, we simply assume that the values of the samples outside the finite region are all equal to zero.

  • . .

    Sec. 1.1 Two-Dimensional Discrete Signals 7

    1.1.1 Solllfl Special Sequences

    Some sequences ~re sufficiently important to warrant special names and symbols. One of these is the 2-D unit impulse, t5(n 1 , nz), also called the unit sample. The unit

    impulse is defined by

    . . .

    . . .

    " . . . . ____.,.._ Ill

    tf(nt, nz.) = {1' 0, n1 = n2.= 0 otherwise

    (1.2)

    If the one-dimensional (1-D) unit impulse is defined as !S{n) = {1,

    0, n=O n#=O

    (1.3)

    then the 2-D unit impulse can be written as the product of two 1-D unit impulses.

    (1.4) Figure l.Z Two-dimensional unit sample sequence .S(n 1, n:~. ). The circle represents a sample of value I. The dots represent samples of value 0.

    In Figure 1.2 we show a stylized graphical representation of the 2-D unit impulse.

    A 2-D line impulse is a sequence that is uniform in one variable and impulsive in the other. The sequences

    x(nu nz.) = 6(n1) (l.5a) and

    (1.5b) which are represented in Figure 1.3, are examples of line impulses. In the M-dimen-sional case, we can define n

  • . .

    -nl

    . . .

    Multidimensional Signals and Systems Chap. 1

    Another special sequence is the 2-D unit step sequence u(nt> n2), which is shown in Figure 1.4. The step is defined by

    n1 > 0 and n2 ? 0 otherwise

    We can also interpret u(n~o n2 ) as the product u(n 1, n2 ) = u(n 1)u(n,.)

    where

    u(n) = {1, 0,

    n>O n < 0-

    ( 1.6)

    (I. 7)

    ( 1.8) Figure 1.4 Two-dimensional unit step sequence u(n1o nz). is the one-dimensional unit step. The 2-D unit step is non-

    zero over one quadrant of the (n 1, n2)-plane. Exponential sequences are defined by

    x(n n ) = a"'/:"' I> 2 > ( 1.9) where a and bare complex numbers. When a and b have unity magnitude, they may be written as

    In this case, the exponential sequence becomes the complex sinusoidal sequence

    x(n1, n2) = exp (jcc>1n1 + j(JJ,_n,) = cos ((JJ 1n1 + (JJ2n1,) + j sin (ro1n1 + C02n2) (1.10)

    Exponential sequences are particularly important because, as we shall see later, they are eigenfunctions M 2-D linear shift-invariant systems.

    1.1.2 Separable Sequences

    All the special sequences that have been defined to this point can be expressed in the form

    (!.II) Any sequence that can be expressed as the product of 1-D sequences in this form is said to be separable.

    Although very few signals encountered in practice are separable, any 2-D array with a finite number of nonzero samples can be written as the sum of a finite number of separable sequences:

    N x(n1, n2) = ~ x11 (nl)x11(n2)

    1~1 (1.12)

    where N is the number of nonzero rows or columns. One of the simplest such repre-sentations can be obtained by Jetting x be expressed as the sum of its isolated rows.

  • Sec. 1.1 Two-Dimensional Discrete Signals

    This is done by choosing x11 (n 1) = x(n 1 , i) X 12(n2) = &.,n2 - i)

    9

    (l.l3a) (l.I3b)

    Other sum-of-separable decompositions are possible and, on occasion, they can be quite useful.

    Separable sequences can be quite valuable when used as test inputs for evaluat-ing and debugging experimental systems.

    1.1.3 Finite-Extent Sequences

    Finite-extent 2-D sequences are another important class of discrete signals. The modifier "finite-extent" implies that these signals are zero outside a region of finite

    extent (or area) in the (n 1 , n2)-plane. This region is called the signal's region of support. One typical finite-extent sequence,

    n 2 shown in Figure 1.5, is nonzero only inside the rectangle

    0

    (1.14) Although rectangular and square shapes are used most often for the region of support of finite-extent sequences, it is also possible to consider regions with other shapes as well.

    o---.-.~~--~-------

    The alert reader will recognize that there is an ambiguity in the definition of the region of support for a 2-D finite-extent sequence. Clearly, if a sequence is zero outside a region R, it is also zero outside any larger region that contains R. By embedding a sequence with an irregularly shaped region of support into a larger rectangular region, the repre-sentation of that sequence and operations perfor~ed on it can often be simplified.

    N 1 , n 1 0

    Figure 1.5 Finite-extent sequence with a rectangular region of sup-port.

    1.1.4 Periodic Sequences

    Periodic discrete signals form another important class of 2-D sequences. Like its 1-D counterpart, a periodic 2-D sequence can be thought of as a waveform that repeats itself at regularly spaced intervals. Because a 2-D signal must repeat in two different directions at once, however, the furmal definition of a periodic 2-D sequence is ntore complex than that of a periodic 1-D sequence. We shall build to the general definition with a special case.

    Consider a 2-D sequence i(n" n2) which satisfies the following constraints: x(n 1 , n2 + N 2) = x(n 1 , n2) i(n 1 + N,, n2) = x(n 1, n2)

    ( J.l5a) ( 1.15b)

  • 0

    10

    . . . . . .

    N I I'll

    Multidimensional Signals and Systems Chap. 1

    This sequence is doubly periodic; its values are repeated if the variable n 1 is incremented by N 1 or if the variable nz is incremented by N 1 Figure 1.6 shows a sketch of such a sequence. We shall call N 1 and N,. the horizontal and vertical periods of x if they are the smallest positive integer values for which equations (1.15) are true .

    Only N 1N2 samples of x are independent; the remain-ing samples are determined by the periodicity condition. Any connected region of the (n 1, n2)-plane containing exactly N 1N2 samples will be called a period of x if those sample values are independent. Often the most convenient

    Figure 1.6 2-D periodic sequence shape for the period is the rectangle ((n 1, n 2), 0 :S: n1 :S: with N1 = N2 = 3. N 1 - 1, 0:::;;; n2 :::;;; N2 - 1} but this is not the only possi-

    bility. The shaded region shown in Figure 1.7, for example, can also be used to represent one period of a periodic sequence.

    Figure 1.7 2-D periodic sequence with an irregularly shaped period.

    Now consider a 2-D sequence .X(n 1, n,.) that satisfies the more general periodicit) constraints

    where

    x(n1 + Nu, n2 + N2,) = x(n1, nl) x(n 1 + Nu, n2 + N,.1,) = x(nl> n~)

    (l.16at (l.l6bt

    (1.17) The ordered pairs (N11 , N11 )' and (N12 , Nu)' can be interpreted as vectors N 1 and N 2 which represent the displacements from any sample to the corresponding samples of two other periods. (The prime denotes the transposition operation, which converts

  • Sec. 1.1 Two-Dimensional Discrete Signals

    the ordered pair into a column vector.) One period of such a sequence~urJ)Mift in the parallelogram-shaped region whose two adjacent sides are formed b)' rq 1 and N2 We leave it to the reader to show that the number of samples in this region is 1 D I Figure 1.8 depicts a general 2-D periodic sequence with N 1 = (7, 2)' and N 1 = (-2, 4)'.

    . .

    . .

    . . . .

    ..

    ..

    .

    .

    .

    .

    . . . . . . .

    . . . .

    .

    . . . . . . . . .

    "z

    Figure 1.8 Periodic sequence with periodicity vectors (7, 2)' and ( -2, 4)'.

    This idea of periodicity is readily generalized to M-dimensional signals. For notational convenience, we shall let n denote the ordered M-tuple of integer variables (n 1 , n2o, . .. , nM)'. Then xtn) is an M-dimensional periodic sequence if there exist M linearly independent Mdimensional integer vectors, N 1, , NM such that

    x(n + N,) = x(n), i= I, . .. ,M (1..18) The vectors N1 are called periodicity vectors and they can be arranged to form the columns of an M X M matrix N called the periodicity matrix.

    (1.19) The requirement that the periodicity vectors be linearly independent is.equivalent to requiring that N have a nonzero determinant. In the special case that N is a diagonal matrix, we will say that x(n) is rectangularly periodic. This is the special case we considered earlier.

  • 12 Multidimensional Signals and Systems

    If x(n) is periodic with periodicity matrix N, then x(n + Nr) = x(n)

    Chap. 1

    ( 1.20) for any integer v~ctor r. Using this fact, we see that if P is any integer matrix, then NP will also be a periodicity matrix for x(n). Thus the periodicity matrix is not unique for ahy periodic sequence. As an aside, we can note that the absolute value of the determinant of the periodicity matrix gives the number of samples of x(n) contained in one period. This fact will be exploited in Chapter 2, where we define an M-dimensional discrete Fourier transform.

    1.2 MULTIDIMENSIONAL SYSTEMS

    T[J

    Systems transform signals. Formally, a system is an operator that maps one signal (the input) into another (the output). Figure 1.9 illustrates this simple point by showing a system that maps x into y. The operator embodied in this system is represented by T[. ], so we

    Figure 1.9 Pictorial representation of a may write system. Here x represents the input sequence y = T[x] (1.2!) and y represents the output sequence.

    The operator T[ ] can represent a rule or set of rules for mappmg an input signal into an output signal, or even a list of output signals that correspond to various input signals.

    In this section we explore some simple, yet useful, multidimensional systems. In particular, we focus our attention on linear shift-invariant systeins and their characterizations. Before we get that far, however, we shall discuss some simple operations that can be performed on multidimensional discrete signals.

    1.2.1 Fundamentlll 0/Hirlltiona on Mu/tldlme/UIIonal SigiJIIIII

    Signals may be combined or altered by a variety of operations. Here we describe some_ of the basic operations on signals which will serve as building blocks for the development of more complicated systems.

    Let w and x represent 2-D discrete signals. These signals can be added to yield a third signal, y. The addition is performed sample by sample so that a particular sample value y(n 1 , n 2) is obtained by adding the two corresponding sample values w(n 1, n2) and x(n~o n2).

    ( 1.22) Two-dimensional sequences may also be multiplied by a constant to form a

    new sequence. If we let c represent a constant, we can form the 2-D sequence y from the scalar c and the 2-D sequence x by multiplying each sample value of x by c.

    (1.23)

  • Sec; 1.2 Multidimensional Systems 13

    A 2-D sequence x may also be linearly shifted to form a new sequence y. The operation of shjfting simply slides the entire sequence x to a new position in the (n 1 , n1)-plane. The sample values of yare related to the sample values of x by

    y(n 1, n2) = x(n1 - m1, n2 - m:) (1.24) where (m1, m2 ), is the amount of the shift. An example of shifting a 2-D sequence appears in Figure 1.10.

    . . .

    ...

    (b)

    Figure 1.10 Operation of shifting the 20 sequence x(n1, n:~;).

    Using the fundamental operations of addition, scalar multiplication, and shifting, it is possible to decompose any 2-D sequence into a sum of weighted and shifted 2-D unit impulses.

    (1.25)

    Here J(n 1 - k 1 , n2 - k2) represents a unit impulse that bas been shifted so that its nonzero sample is at (k 10 k 2); the values x(k 1, k2) can be interpreted as scalar multi-pliers for the corresponding unit impulses.

    Two other fundamental operations on 2-D sequences are worth mentioning. The first, which we call a spatially varying gain, can be viewed as a generalization of scalar multiplication. Each sample value of a 2-D sequence .""t is multiplied by a number c(n 11 n2) whose value depends on the position of the corresponding sample.

    y(n 1, n2) = c(n1 , n2 )x(n1, n2 ) (1.26) The collection of numbers c(n 1 , n2 ) may also be regarded as a 2-D sequence.

    Thus equation (1.26) can also be interpreted as the sample-by-sample multiplication of two sequences.

    Two-dimensional sequences may also be subjected to nonlinear operators. One important type of nonlinear operator, called a memoryless nonlinearity, op

  • 14 Multidimensional Signals and Systems Chap. 1

    The squaring operation is a rnemoryless nonlinearity, since the computation of the output value at (n 1, n2) depends only on the single input value at (n 1 , n2).

    1.2.2 Linear Systema

    A system is said to be linear if and only if it satisfies two conditions: if the input signal is the sum of two sequences, the output signal is the sum of the two corre-sponding output sequences, and scaling the input signal produces a scaled output signal. Therefore, if L[ ] represents a linear system, and

    Y1 = L[xt); Y2 = L[x2J then

    (1.28) for aU input signals x 1 and x 2 and all complex constants a and b.

    Linear systems obey the principle of superposition. The response of a linear system to a weighted sum of input signals is equal to the weighted sum of the responses to the individual input signals. In equation (1.25), an arbitrary 2-D sequence was represented as a linear combination of shifted unit impulses. If we use this sequence as the input to a 2-D discrete linear system L[ ], we will get the output sequence

    y(nhn2) = LL,~-k~~~ 'c(k~>k 2)t5(n 1 - k.,n 2 - k 2)] By exploiting the fact that the system is linear, this can be rewritten as

    y(n.,nz) = i; E x(khk 2)L[t5(n1 - k1on2.- k 2 )] k1-oo ktao-cc

    00 00

    = I; I; x(k ~o k 2)hk,k,(n 1, n2) ,t,.,._oo kt-oo

    (1.29)

    where hk,k. is the response of the system to a unit impulse located at (k 1, k 2). If the spatially varyhtg impulse response hk,1 .(n 1, n2) is known for each (k1 , k2), the response of the linear' system to any input can be found by superposition.

    1.2.3 ShHt-lm~arlllllf Systems

    A shift-invariant system is one for which a shift in the input sequence implies a corre-sponding shift in the output sequence. If

    y(n~> n2 ) = T[x(n 1, n:~.)] the. system T[ ] is shift invariant if and only if

    T[x(n 1 - m 1, nz- rn2)] = y(n 1 - rn~o n1 - rn2) (1.30) for all sequences x and all integer shifts (rn 1, m2).

    Linearity and shift invariance are independent properties of a system; neither property implies the presence of the other. For example, the spatially varying gain,

    L{x(nt, n2)] = c(n 1, n2).x(n~o n2.) {1.31) which multiplies the input sequence by c(n]! n2), is linear but it is not shift invariant. On the other hand, the system

  • Sec. 1.2 Multidtmen&ional Systems

    T{x(n 1 , 112.)] = [x(n1, n~W is shift invariant, but it is not linear.

    1.2.4 LiiHIIII' Sltlft-lnvlll'illnt Systems

    15

    (1.32)

    To study multidimensional systems productively, it is necessary to restrict our investi-gations to certain classes of operators which have properties in common. Linear

    sbift~invariant (LSI) discrete systems !lre the most frequently studied class of systems for procesaing di$Crete signals of any dimensionality. T.ftese systems are both ea8y to design and analyze, yet they are sufficiently powerful to solve many practical problems. The behavior of these systems can also, in many cases, be studied without regard to the specific input to the system. The class of linear shift-invariant systems is certainly not the most general class of systems which can be studied, but it does represent a good starting point.

    In (1.29) we derived an expression for the output sequence of a linear system to the input x. If this system is also shift invariant, further simplifications can be made. The spatially varying impulse response is defined by

    h~r,,..(nu n2) A L{c5(nt - k1, n2- k2)J Fot the special QlSe where k 1 = k 1 = 0, we have

    (1.33)

    h0o(n1, n,.) = L{~n 1 , n2.)J (1.34) Applying the principle of shift invariance embodied by equation (1.30), we get

    h~~:,~~:.(nu n2) = h00(n 1 - k 1, n2 - k,.) (1.35) The spatially varying impulse response becomes a shifted replica of a spatially invariant impulse response. Defining h(n1 , n2) ~ h00(n 10 n2), we can then write the output sequence as

    .. 00

    y(nh n2) = l: l: x(k 1, k 2 )h(n1 - kh n2 - k 2 ) kt-aa k;z::::-oo

    (1.36) This relation is known as the 2-D convolr:ttion sum. ConceptualJy, the input sequence x(n1, n2) is decomposed into a weighted sum of shifted impulses according to equation (1.25). Each impulse is transformed by the LSI system into a shifted copy of the impulse response h(n .. n2). Superposition of these weighted, shifted impulse responses forms the output sequence, with the weighting coefficients given by the sample values of the input sequence x(n,, n2). Equation (1.36) impJies that an LSI system is com-pletely characterized by its spatially invariant impulse response h(n 1, n2).

    If we make the substitution of variables n 1 - k1 = 11 and n 2 - k, = 12 , equa-tion (1.36) can be written in the alternative form

    00 00

    y(nt. n2 ) = I; I; h(l., lz)x(n, - /" n2 - / 2 ) lt=-co 1=-04

    (1.37) Thus we see that convolution is a commutative operation. As a notational device, we shall use the double asterisk () to denote 2-D convolution. [A single asterisk () will denote 1-D convolution.] Equations (1.36) and (1.37) can be written using this shorthand notation as

  • 16 Multidimensional Signals and Systems Chap. 1

    y = X** h = h **X ( 1.38) By using vector notation, the output sequence of an M~dimensional LSI system

    can be expressed as the M~dimensional convolution of the output sequence and the impulse response

    y(n) = f: x(k)h(n - k) ( 1.39) Two~dimensional convolution is not substantially different from its 1~1):-cqunter

    part. As in the 1-D case, there is a computational interpretation for the convolution sum. Consider x(k1, k,) and h(n 1 - k 1, n:z.- k2) as functions of k1 and k:z.. To generate the sequence h(n1 - kto n2 - k 2) from h(k1 , k2), h is first reflected about both the k1 and k2 axes and then translated so that the sample h(O, 0) lies at the point (n 1, n2) as illustrated in Figure 1.11. The product sequence x(ku kJh(n 1 - k 1, n2 - k2) is

    .....

    . . .

    kl

    (a)

    ... ....

    . . .

    . .

    . . .

    (b)

    Figure 1.11 (a) The sequence h{k" k 2). (b) The sequence h(nt - k~, n2 - kz) for n, = 2, n2 = 3.

    formed, and the output sample value y(n 1, n2.) is computed by summing the nonzero sample values in the product sequence. As n1 and n:z. are varied, the sequence h{n 1 - k1 , n 2 - k2 ) is shifted to other positions in the (k" k:z.)~plane, leading to other product sequences and, consequently, other output sample values. If the alternative form of the convolution sum, equation (1.37), is used, the roles of x(nt> n 2) and h(n1, n1) are interchanged in describing the computation. Example 1

    . Let us consider a 2-D discrete LSI system whose output at the sample {11" 112) represents the accumulation of the input sample values over a region below and to the left of the point (nt. n2). Roughly speaking, this system is one type of 2-D digital integrator; its impulse response is the 2-D unit step sequence u(n19 112) described in Section 1.1.1.

    For the input sequence x(n., n2 ), we shall use a 2-D finite~xtent sequence whose sample values are equal to I inside the rectangular region 0 S n1 < N 1 ; 0:::; n2 < N1. and equal to 0 outside it.

  • Sec. 1.2 Multidimensional Systems 17

    To compute the output sample value y(n1, n1) using equation (1.36), we form the product sequence x(kt. k 1 )h(n1 - kt. n2 - k2). Depending on the particular value of (n1 , n2), the nonzero regions of x(kt. k2) and h(n1 - kt. n2 - k 1 ) overlap by different amounts. We can distinguish five cases which are illustrated in Figure l.l2.

    Case! Case 2

    Case 3 Case 4

    CaseS

    Figure I.U Convolution of a square pulse with a two-dimensional step se-quence. The nonzero regions of each sequence are crosshatched. The product sequence x(kt, kz)h(nt - k~,n2- k2) is nonzero only in the doubly cross-hatched areas.

  • 18 Multidimensional Signals and Systems

    in tt,is figure, the nonzero portions of each sequen;:e are crosshatched, and th;: ztro samples are not shown.

    Case 1. n1 < 0 or n2 < 0. Lookmg at Figure I .12, we see that for the~e vaLJes of (n~o n2), h(n 1 - kt. n2 - k 2) and x(k 1 , k 2 ) do not overlap. Hence their product wd th'! value of these samples of the convolution arc zero.

    Case 2. 0::;; n1 < N;, 0 ::=::: n2 < N 2 Here there is partial overlap. .,.he "~:curnc~lation c,f the nonzeiJ :;a"'rle value!' in the product sequence yields

    y(n~. nz) = L )~ 1 = (nl + l){.r.z + 1) k1=t 1 ka-V

    Case 3. n 1 > N,, 0 .:

  • Sec. 1.2 Multidimensional Systems 19

    For this particular example, it should be noted that x and hare both separahle sequences and that their convolution is also separable, since we can write

    where y(nl, n2) = y,(nl)Y2(n2) (1 .45)

    10,

    Y1(n1) = n1 + 1, N~o

    n 1 < 0

    {

    0, Yz.(n2) = n2 + 1, 0 < n2 < Nz.

    N 2 , n2 > N 2 This property is true in general; the convolution of two separable sequences is alway separable (see Problem 1.9). Example 2

    In some cases, one may only be interested in the extent of the nonzero region of the output of a convolution operation. For example, consider the convolution of the finite-extent signal x(nl> n2) shown in Figure I.l4(a) with the finite-extent 1mpulse

    n21

    x(n 1, n 2)

    ~ nl

    (a I (b) k2 112 x(n 1.n 2)h(" . '2)

    e

    " it

    . \ 0 .

    8 \ l3 f) ') t;\

    J.l II, id)

    (c)

    Figure 1.14 Pictorial representation of the convolution in Example 2. (a) The input sequence. (b) The impulse response. (c) The product sequence at (n~, n2) = (l, l). (d) The region of support of the convolution.

  • 20 Multidimensional Signals and Systems Chap. 1

    response h(nt. n2 ) shown in Figure 1.14(b). [For the moment, we shaJJ not be concerned with the values of the nonzero samples of x(nt. n2) and h(n1, n2).] It is obvious that the result of this convolution, which we shall call y(nt. n:~,), will also be a signal of finite extent. We want to sketch the region of support for this output signal.

    Proceeding as before, we form the 2-D sequence h(n1 - kto n2 - k2) as a function of (k~o k2). Starting with (nt. n2 ) = (0, 0), we slide h(n1 - kt. n2 - k 2) over the sequence x(k~. k2 ). When the two sequences overlap, we have a (potentially) nonzero point in the output sequence y(n~o n2). Figure 1.14(c) shows the overlap for (n11 n:~,) = (1, 1) and Figure l.t4(d) shows the region of support for y(n~o n2).

    Even within this region, some samples of y(n~o n3 ) may have a value of zero, because the terms in the summation on the right side of equation (1.36) may cancel each other for a particular value of (n 1 , n2). In general, however, y(n~o n1 ) will be nonzero in this region, and it will certainly be zero outside it.

    As an exercise, the reader can compute the values of the samples of y(n 1, n1) in its region of support for the simple case where x(nt. n:~.) and h(n~o n2 ) are both equal to one in their respective regions of support [Figure 1.14(a) and (b)].

    In this section we have presented two relatively simple examples of performing 2-D convolutions. You have undoubtedly noticed that some effort is involved in these calculations. Fortunately, we do not often perform such calculations by hand. Some familiarity with the basic operations, however, is necessary in order to write the required computer programs and to interpret their results. It is virtually impossible to perform a 2-D convolution correctly without identifying the relevant cases to consider. This should be the first step whenever a convolution is performed.

    1.2.5 Cll:SCildellnd Parlllle/ Connet:titHI8 of Systems

    One of the virtues of linear shift-invariant systems is the ease with which they can be ana)yzed when they are connected together. This is due in part to some properties of the convolution operator. We have already seen that convolution is commutative.

    X** h = h **X (1.46) Convolution is also associative. If the convolution of x with h is convolved with g, the result is the same as if x were convolved with the convolution of h and g.

    (x **h) """ g = x ** (h **g) (1.47) Because of the associative property, the parentheses can be omitted when talking about N-fold convolutions.

    Convolution also obeys the distributive law with respect to addition.

    x ** (h + g) = (x ** h) + (x **g) (1.48) The associative and distributive properties of the convolution operator are straight-forward to demonstrate. This is left as an exercise for the reader (see Problem 1.4).

    Two systems are said to be connected in cascade if the output of the first is the input to the) second, as illustrated in Figure 1.15. Jf the two systems are linear and shift invariant, their cascade connection can be shown to be linear and shift invariant.

  • Sec.1.2 Multidimensional Systems

    x(n 1, n2) h(11 1, 11 2} g(n 1, 11 2) y(n 1, n 2)

    x(n1, n2) g(11 1,n 2) h(n 1, n 2)

    y(n 1, n 2)

    Figure 1.15 Each figure represents a cascade connection of two systems. If both systems are linear and shift invariant, the order of the cascade is immaterial and from an input-output point of view the two cascades above are equivalent.

    If w denotes the output of the 1irst system in the cascade, it foJiows that W =X** h y = w g = (x h) g

    From the associative law, however, (t .49) can be rewritten as y = X** (h **g)

    and thus the equivalent impulse response for the cascade system is

    21

    (1.49)

    (1.50)

    hequlv = h ** g (1.51) Going one step further and applying the commutative law, we see that the equivalent impulse response is unchanged if the order of the two systems in the cascade is reversed. Thus two cascaded LSI systems which are identical except for the order of their subsystems are equivalent; they will produce the same output if they are excited by the same input. If N LSI systems are arranged as a cascade combination, the equivalent impulse response is the N-fold convolution of their respective impulse responses. Furthermore, these systems can be cascaded in any order without affecting the equivalent impulse response.

    Figure 1.16 illustrates two systems which are connected in parallel. They have a common input and their individual outputs are summed to produce a single output. It can be straightforwardly shown that if these two systems are linear and shift invariant, the overall system will be linear and shift invariant. To find the equivalent impulse response, we observe that

    y = (x **h) + (x g) (1.~2)

    Figure 1.16 Parallel connection of two systems.

  • 22 Multidimensional Signals and Systems Chap. 1

    Applying the distributive law, it follows that

    Y =X** (h +g) (1.53) from which it follows that

    (1.54) This rule generalizes in the obvious fashion to parallel connections of more than two LSI systems.

    Sometimes it is useful to decompose an impulse respoose into several com-ponents, particularly if the impulse response has a finite, but oddly shaped, region of support which can be represented as a collection of smaller, more regularly shaped regions. The input sequence can be convolved with each of the component impulse responses and the final output sequence formed by summation, thus implementing the system of interest as the parallel connection of simpler systems.

    1.2.6 Se/HfTable Systems

    A separable system is an LSI system whose impulse response is a separable sequence. The input signals processed by a separable system and the output signals produced by it need not be separable. As with any other LSI system, the output can be computed from the input using the convolution sum. but for separable systems. the convolution sum decomposes. As we shall see in Chapters 3 and 5, this property makes these systems very efficient to implement. To see how the convolution sum decomposes, let the impulse response of the system be denoted by

    h(n" n1) = h 1(n 1)h1(n,.) (1.55) The output of the S-ystem is then

    .. 00

    y(n~o n2 ) = I; I; x(n, - k~o nz - k 2)h,(k,)hz(kz) kt-OO kt~-oo

    00 00 (1.56)

    = :E h,(kt) :E x(n 1 - kh n1 - k 2)hz(k1 ) kt--oo kr- -oo

    The innermost sum represents a 2-D arrA)cof numbers. If we define 00

    g(n 1, n1) A I; x(n,, n2 - k1,)h2(k2) -.--(:10 (1.57)

    equatien (1.56) becomes "' y(n 1, n2) = I: h,(k,)g(n, - k,, n2)

    kt=-oo

    The array g(n 1 , n2 ) can be computed by performing a 1-D convolution between each column of x (n 1 =constant) and the 1-D sequence h2 The output array y can then be computed bJ convolving each row of g (n2 =constant) with the 1-D sequence h,. Alternatively, the row convolutions could be computed before the column convolu-tiuns; the same output signal will result in either case. The important point is that the outrtt can be obtained as a series of 1-D convolutions.

  • Sec. 1.2 Multidimensional Systems 23

    TheM-dimensional case is similar. A separable system can again be implemented using 1-D convolutions, but the number of such convolutions grows rapidly with the dimensionality of the signal. For example, consider the M-dimensional input sequence x(n 1, n2 , , nM) defined over the N x N x N x x N hypercube. If this signal is convolved with a separable sequence of the form h(n~> n2, .. . , nM) = h 1(n 1)h2(n 2 ) hM(nM), then MNM-t 1-D convolutions are required to obtain .the output sequence.

    1.2. 7 Stable Systems

    As in the 1-D case, the only truly useful systems are those which are stable. It is reasonable to require, for example, that a system's output sequence should remain bounded if its input sequence is bounded. To distinguish this type of stability from others, we will say that such systems are BIBO (bounded input, bounded output) stable. For a BIBO stable system, when I x{n 1, n2) I< B, there must exist a B' such that ly(n1 , n2) I< B' for all (n~o n2). A necessary and sufficient condition for an LSI system to be BIBO stable is that its impulse response be absolutely summable

    ~ ~

    L; L lhCnt,n2)1 = S 1 < oo (1.58) 111=-coo 111=-oo

    The proof of this fact is identical to the 1-D case [I]. A weaker form of stability is mean-square stability. An LSI system is mean-

    square stable if 00 00

    L: L: lh(nt,n2)1 2 = s2 < oo (1.59) lll=-e 111=-oo

    A BIBO stable system is mean-square stable, but the converse is not necessarily true. If we simply refer to a system as stable, we will be referring to a system which is BIBO stable.

    The definitions above would seem to imply that multidimensional stability is very similar to 1-D stability. As we shall see in Chapter 4, this is definitely not the case. Multidimensional stability is far more difficult both to understand and to test than 1-D stability.

    1.2.8 Regions of Support

    In studying 1-D systems, we found it useful to characterize systems as causal if their outputs could not precede their inputs. Such systems were useful for processing signals whose independent variable was time, both because the constraint made physical sense and because it yielded systems that could be implemented in real time.

    For most 2-D applications, the independent variables do not correspond to time, and causality is not a natural constraint for such systems. We are forced to consider generalizations of causal systems, however, when we consider system implementations.

    The impulse response h(n) of a causal l-D discrete LSI system is zero for n < 0.

  • 24 Multidimensional Signals and Systems Chap 1

    Consequently, one generalization of the concept of causality can be made by requiring that an impulse response be zero outside some region of support.

    Earlier we discussed the special case of sequences whose support is a region of finite extent. Sequences that are nonzero only in one quadrant of the (n 1 , n2)-plane form another important special case. They are said to possess quadrant support. The notion of quadrant support can be generalized to include regions of support that are wedge-shaped. We will say that a sequence ha!> support on a wedge if it is nonzero only at points inside (and on the edges) of a sector defined by two rays emanating from the origin, providing that the angle between the two rays is strictly less than 180 degrees. An example of a sequence with support on a wedge is shown in Figure Ll7(a).

    (a)

    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . - . 8 ..

    (b)

    Figure 1.17 (a) Sequence with support on a wedge. (b) Sequen" with support on one quadrant derived from the sequence in (a) under the linear transforma-tion of variables with (Ntt. N21) = (2, 1) and (N12, N22) = (1, 2). Small dots imply a sample of value zero.

    Any sequence with support on a wedge can be mapped into a sequence with support on the first quadrant through a linear transformation of variables [4]. For example, suppose that the vectors

    N, = (Ntt Nzt)' (1.60) Nz = (Ntz, Nu)'

    lie along the edges of a wedge-shaped region. (N11 , N21 , N 12 , and Nz2 are integers.'\ Furt.hermore, assume that there are no common factors between N 11 and N1 , or between N 12 and N22 Since N1 and N 2 are not collinear

    The change of variables m 1 = Nun 1 - N 12n2 m 2 = -N21n 1 +. N 11 n2

    (1.61

    (1.62)

  • Sec. 1.2 Multidimensinoal Systems 25

    will map the wedge in question onto the first quadrant. Such a transformation is not unique. In this case, N 1 is mapped onto (D, 0)' and N 2 is mapped onto (0, D)'. Figure 1.17(b) shows the result of mapping the wedge in Figure 1.17(a) onto the first quad-rant. Because D = 3 for this example, not every point in the first quadrant of Figure l.l7(b) lies in the range space of the linear transformation (1.62). There will be samples in the (m 1 , m 2)-plane to which no sample in the (n 1,n2)-plane will be mapped. This phenomenon is a result of using discrete signals; we insist that an integer vector (n 1, n2 )' be mapped into another integer vector (m 1, m 2)'. It can be shown that in order for every integer-valued order pair in the first quadrant of the (m 1, m2)-plane to lie in the range space of the linear transformation, it is necessary and sufficient that I Dj = 1.

    *1.2.9 Vectorlnput-Output Systems

    Systems with several inputs andjor several outputs are important in some practical applications. We shall take a very brief look at these systems and how they are related to multidimensional LSI systems.

Consider a system that processes signals received by an array of sensors equally spaced along a line. The ith sensor provides a 1-D discrete-time signal to the system, which produces a number of 1-D discrete-time output signals. We shall denote the ith input signal by x_i(n) and the jth output signal by y_j(n). For simplicity, we shall assume that the system under consideration is linear and time invariant (shift invariant in the discrete variable n). If the ith input signal is a 1-D unit impulse δ(n) and all the other input signals are zero, the jth output signal will be the impulse response h_ji(n). In general, of course, there will be arbitrary discrete-time signals at each input port of the system, so the jth output signal must be written

$$y_j(n) = \sum_i \sum_m h_{ji}(m)\, x_i(n - m)$$   (1.63)

We can relate equation (1.63) to the 2-D convolution sum discussed in Section 1.2.4 by defining the 2-D sequences

$$p(i, m) \triangleq x_i(m), \qquad q(j, n) \triangleq y_j(n)$$   (1.64)

    At this point we will naively assume that the sequences p and q are related by the 2-D convolution sum

$$q(j, n) = \sum_i \sum_m f(i, m)\, p(j - i, n - m)$$   (1.65)

which can be subjected to a change of variables to give
$$q(j, n) = \sum_i \sum_m f(j - i, m)\, p(i, n - m)$$   (1.66)

    By comparing equations (1.66) and (1.63), we see that the linear time-invariant vector input/output system can be regarded as a 2-D LSI system if

$$h_{ji}(m) = f(j - i, m)$$   (1.67)


    This requirement essentially imposes shift invariance in the variable corresponding to the input-output index. If equation (1.67) does not hold, then the vector input-output system can still be regarded as a 2-D linear system, but not a 2-D LSI system.
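The equivalence can be checked numerically. The sketch below is a minimal illustration, not taken from the text: it assumes NumPy, builds a small 2-D sequence f(i, m), forms the channel filters h_ji(m) = f(j − i, m) as in (1.67), and verifies that the multichannel output (1.63) matches the 2-D convolution (1.65).

```python
import numpy as np

# A small 2-D kernel f(i, m); shift invariance across channels means the
# response from input channel i to output channel j is h_ji(m) = f(j - i, m).
f = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 1.0]])          # f[i, m], i = 0..1, m = 0..2

x = np.random.randn(4, 8)                # x[i, n]: 4 input channels, 8 samples each

def multichannel_output(f, x):
    """Channel-by-channel evaluation of (1.63): y_j(n) = sum_i sum_m h_ji(m) x_i(n - m)."""
    I, N = x.shape
    J = I + f.shape[0] - 1
    y = np.zeros((J, N + f.shape[1] - 1))
    for j in range(J):
        for i in range(I):
            if 0 <= j - i < f.shape[0]:
                y[j] += np.convolve(f[j - i], x[i])   # 1-D convolution in n
    return y

def conv2d_full(f, p):
    """Direct 2-D convolution q = f ** p, as in (1.65), with p(i, n) = x_i(n)."""
    q = np.zeros((f.shape[0] + p.shape[0] - 1, f.shape[1] + p.shape[1] - 1))
    for i in range(f.shape[0]):
        for m in range(f.shape[1]):
            q[i:i + p.shape[0], m:m + p.shape[1]] += f[i, m] * p
    return q

print(np.allclose(multichannel_output(f, x), conv2d_full(f, x)))   # True
```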

1.3 FREQUENCY-DOMAIN CHARACTERIZATION OF SIGNALS AND SYSTEMS

    In the preceding section we saw that the response of a 2-D LSI system to an input signal could be obtained by convolving the input signal with the impulse response of the system. By representing the input signal as the superposition of shifted impulses, the output signal could be represented as the superposition of shifted impulse responses. Frequency-domain representations of LSI systems also exploit the super-position principle, but in this case the elemental sequences are complex sinusoids. Let us begin by considering the responses of LSI systems to sinusoidal inputs.

    1.3.1 Frequency Response of a 2-D LSI System

Consider a 2-D LSI system with unit impulse response h(n_1, n_2) and an input which is a complex sinusoid of the form

$$x(n_1, n_2) = \exp(j\omega_1 n_1 + j\omega_2 n_2)$$   (1.68)
where ω_1 and ω_2 are real numbers called the horizontal and vertical frequencies, respectively. We can determine the output signal by convolution:
$$y(n_1, n_2) = \sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} \exp[j\omega_1(n_1 - k_1) + j\omega_2(n_2 - k_2)]\, h(k_1, k_2)$$
$$= \exp(j\omega_1 n_1 + j\omega_2 n_2) \left[ \sum_{k_1=-\infty}^{\infty} \sum_{k_2=-\infty}^{\infty} h(k_1, k_2) \exp(-j\omega_1 k_1 - j\omega_2 k_2) \right]$$   (1.69)
$$= \exp(j\omega_1 n_1 + j\omega_2 n_2)\, H(\omega_1, \omega_2)$$

The output signal is a complex sinusoid with the same frequencies as the input signal, but its amplitude and phase have been altered by the complex gain H(ω_1, ω_2). This gain is called the system's frequency response, and it is given by

$$H(\omega_1, \omega_2) \triangleq \sum_{n_1} \sum_{n_2} h(n_1, n_2) \exp(-j\omega_1 n_1 - j\omega_2 n_2)$$   (1.70)

An LSI system is able to discriminate among sinusoidal signals on the basis of their frequencies. If |H(ω_1, ω_2)| is approximately equal to one for a particular value of the ordered pair (ω_1, ω_2), sinusoidal signals at that frequency will pass through the system without being attenuated. On the other hand, if |H(ω_1, ω_2)| is close to zero for some (ω_1, ω_2), sinusoids at that frequency will be rejected by the system.

It is straightforward to show that the frequency response H(ω_1, ω_2) is periodic in both the horizontal and vertical frequency variables with a period of 2π:

$$H(\omega_1 + 2\pi, \omega_2) = H(\omega_1, \omega_2), \qquad H(\omega_1, \omega_2 + 2\pi) = H(\omega_1, \omega_2)$$   (1.71)
The proof is left to the reader (see Problem 1.12).


Example 3  As a simple example, let us compute the frequency response of the system with the impulse response
$$h(n_1, n_2) = \delta(n_1 + 1, n_2) + \delta(n_1 - 1, n_2) + \delta(n_1, n_2 + 1) + \delta(n_1, n_2 - 1)$$   (1.72)
This sequence is drawn in Figure 1.18(a). The frequency response is given by
$$H(\omega_1, \omega_2) = \sum_{n_1=-\infty}^{\infty} \sum_{n_2=-\infty}^{\infty} h(n_1, n_2) \exp(-j\omega_1 n_1 - j\omega_2 n_2)$$
$$= \sum_{n_1=-\infty}^{\infty} \sum_{n_2=-\infty}^{\infty} [\delta(n_1 + 1, n_2) + \delta(n_1 - 1, n_2) + \delta(n_1, n_2 + 1) + \delta(n_1, n_2 - 1)] \exp(-j\omega_1 n_1 - j\omega_2 n_2)$$
$$= e^{j\omega_1} + e^{-j\omega_1} + e^{j\omega_2} + e^{-j\omega_2} = 2(\cos\omega_1 + \cos\omega_2)$$   (1.73)

This frequency response is shown as a perspective plot in Figure 1.18(b).

Figure 1.18 (a) Impulse response of Example 3. (b) Frequency response of Example 3.
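A quick numerical check of (1.70) and (1.73), assuming NumPy is available (the frequency values chosen below are arbitrary):

```python
import numpy as np

def freq_response(h, n1, n2, w1, w2):
    """Evaluate H(w1, w2) = sum h(n1, n2) exp(-j w1 n1 - j w2 n2) by direct summation.
    h, n1, n2 are 1-D arrays listing the nonzero samples and their coordinates."""
    return np.sum(h * np.exp(-1j * (w1 * n1 + w2 * n2)))

# Impulse response of Example 3: four unit samples at (-1,0), (1,0), (0,-1), (0,1).
h  = np.array([1.0, 1.0, 1.0, 1.0])
n1 = np.array([-1, 1, 0, 0])
n2 = np.array([0, 0, -1, 1])

w1, w2 = 0.3, -1.2
H_direct = freq_response(h, n1, n2, w1, w2)
H_closed = 2 * (np.cos(w1) + np.cos(w2))       # equation (1.73)
print(np.allclose(H_direct, H_closed))          # True
```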


    Example 4 Consider the system whose impulse response is given by

$$h(n_1, n_2) = \begin{cases} 0.5, & n_1 = 0,\ n_2 = 0 \\ 0.25, & n_1 = \pm 1,\ n_2 = 0 \\ 0.25, & n_1 = 0,\ n_2 = \pm 1 \\ 0.125, & n_1 = \pm 1,\ n_2 = \pm 1 \\ 0, & \text{otherwise} \end{cases}$$   (1.74)

Applying the definition of the frequency response, we get
$$H(\omega_1, \omega_2) = \sum_{n_1=-\infty}^{\infty} \sum_{n_2=-\infty}^{\infty} h(n_1, n_2) \exp(-j\omega_1 n_1 - j\omega_2 n_2)$$
$$= 0.5 + 0.25\,(e^{-j\omega_1} + e^{j\omega_1} + e^{-j\omega_2} + e^{j\omega_2}) + 0.125\,(e^{-j\omega_1}e^{-j\omega_2} + e^{-j\omega_1}e^{j\omega_2} + e^{j\omega_1}e^{-j\omega_2} + e^{j\omega_1}e^{j\omega_2})$$
$$= 0.5\,(1 + \cos\omega_1)(1 + \cos\omega_2)$$   (1.75)
which is shown in Figure 1.19. This is an example of a simple lowpass filter. The gain of the filter is approximately 2 near the origin, and it is approximately 0 when either ω_1 ≈ π or ω_2 ≈ π.

    Figure 1.19 Frequency response of the simple lowpass filter of Example 4.

    The system in Example 4 has a separable impulse response, and from (1.75) we see that its frequency response is also a separable function. This result is true in general. If

$$h(n_1, n_2) = h_1(n_1)\, h_2(n_2)$$   (1.76)
then
$$H(\omega_1, \omega_2) = H_1(\omega_1)\, H_2(\omega_2)$$
where
$$H_1(\omega_1) = \sum_{n_1} h_1(n_1)\, e^{-j\omega_1 n_1} \qquad \text{and} \qquad H_2(\omega_2) = \sum_{n_2} h_2(n_2)\, e^{-j\omega_2 n_2}$$   (1.77)

    The proof is left for the reader in Problem 1.13.
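As an illustration (not from the text), the following sketch factors the impulse response of Example 4 as h(n_1, n_2) = h_1(n_1) h_2(n_2), using one possible factorization h_1 = (0.5, 1.0, 0.5) and h_2 = (0.25, 0.5, 0.25) supported on n = −1, 0, 1, and verifies numerically that the 2-D frequency response is the product of the 1-D responses, as in (1.76) and (1.77). NumPy is assumed.

```python
import numpy as np

h1 = np.array([0.5, 1.0, 0.5])             # one factor of Example 4's separable filter
h2 = np.array([0.25, 0.5, 0.25])           # the other factor
n = np.array([-1, 0, 1])
h = np.outer(h1, h2)                       # h[n1, n2] = h1[n1] * h2[n2]

w1, w2 = 0.7, 2.1
# 2-D frequency response by direct double summation, equation (1.70)
H2d = sum(h[a, b] * np.exp(-1j * (w1 * n[a] + w2 * n[b]))
          for a in range(3) for b in range(3))
# Product of the 1-D frequency responses, as in (1.76)-(1.77)
H1 = np.sum(h1 * np.exp(-1j * w1 * n))
H2 = np.sum(h2 * np.exp(-1j * w2 * n))
print(np.allclose(H2d, H1 * H2))                                      # True
print(np.allclose(H2d, 0.5 * (1 + np.cos(w1)) * (1 + np.cos(w2))))    # matches (1.75)
```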


    If the input sequence to an M-dimensional LSI system is a complex sinusoid of the form

$$x(n_1, n_2, \ldots, n_M) = \prod_{i=1}^{M} \exp(j\omega_i n_i)$$   (1.78)

    its output is also a complex sinusoid of the same form multiplied by a complex gain. Using vector notation, we can rewrite equation (1.78) as

$$x(n) = \exp(j\omega' n)$$   (1.79)
where n = (n_1, n_2, ..., n_M)' and ω = (ω_1, ω_2, ..., ω_M)'. The output of the M-dimensional LSI system is given by

$$y(n) = H(\omega) \exp(j\omega' n)$$   (1.80)
where the M-dimensional frequency response H(ω) is given by

$$H(\omega) = \sum_{n} h(n) \exp(-j\omega' n)$$   (1.81)

1.3.2 Determining the Impulse Response from the Frequency Response


The frequency response of a discrete LSI system is generally a continuous 2-D periodic function which can be expressed as a linear combination of harmonically related complex sinusoids, as demonstrated by the definition of H(ω_1, ω_2) in equation (1.70). This relation not only defines H(ω_1, ω_2), it also serves as a 2-D Fourier series expansion of H(ω_1, ω_2). The coefficients of the expansion are the values of the impulse response samples h(n_1, n_2). It should not be surprising, therefore, that the impulse response of an LSI system can be obtained from the frequency response.

    The inverse relationship can be derived by multiplying both sides of equation (1.70) by a complex sinusoid and integrating over a square in the frequency plane. In detail, we form

$$\frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} H(\omega_1, \omega_2) \exp(j\omega_1 k_1 + j\omega_2 k_2)\, d\omega_1\, d\omega_2 = \sum_{n_1} \sum_{n_2} h(n_1, n_2) \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} \exp[-j\omega_1(n_1 - k_1)]\, d\omega_1 \right] \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} \exp[-j\omega_2(n_2 - k_2)]\, d\omega_2 \right]$$   (1.82)

    It is straightforward to demonstrate that

$$\frac{1}{2\pi} \int_{-\pi}^{\pi} \exp[-j\omega(n - k)]\, d\omega = \delta(n - k)$$   (1.83)




so that the right side of equation (1.82) becomes, upon evaluation of the double sum, simply h(k_1, k_2). This, then, provides a means of evaluating the value of the impulse response at (k_1, k_2).

Restating this result in terms of the more familiar integer variables (n_1, n_2), we get
$$h(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} H(\omega_1, \omega_2) \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$   (1.84)

The area of integration for the double integral in (1.84) extends over exactly one period of H(ω_1, ω_2). Although we chose to use the period centered at the origin in writing these expressions, in fact any period of H(ω_1, ω_2) could have been used.

Figure 1.20 Frequency response of the ideal rectangular lowpass filter used for Example 5.

Figure 1.21 Frequency response of the ideal circularly symmetric lowpass filter of Example 6.

    Example 5 Let us use this result to find the impulse response of the ideal lowpass filter specified by the frequency response

$$H(\omega_1, \omega_2) = \begin{cases} 1, & |\omega_1| \le a < \pi,\ |\omega_2| \le b < \pi \\ 0, & \text{otherwise} \end{cases}$$   (1.85)
which is shown in Figure 1.20. This example is quite simple because it is a separable system. Thus

$$h(n_1, n_2) = \frac{1}{4\pi^2} \int_{-a}^{a} \int_{-b}^{b} \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_2\, d\omega_1 = \left[ \frac{1}{2\pi} \int_{-a}^{a} \exp(j\omega_1 n_1)\, d\omega_1 \right] \left[ \frac{1}{2\pi} \int_{-b}^{b} \exp(j\omega_2 n_2)\, d\omega_2 \right]$$   (1.86)
$$= \frac{\sin a n_1}{\pi n_1} \cdot \frac{\sin b n_2}{\pi n_2}$$
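The closed form (1.86) can be checked by approximating the inverse transform (1.84) with a Riemann sum over one period. The sketch below is illustrative only; the cutoffs a and b and the grid size M are arbitrary choices, and NumPy is assumed.

```python
import numpy as np

a, b = 0.4 * np.pi, 0.6 * np.pi           # cutoff frequencies, a, b < pi
M = 4096                                   # frequency-grid points per axis
w = -np.pi + 2 * np.pi * (np.arange(M) + 0.5) / M   # midpoint rule over one period

def h_ideal(n1, n2):
    """Riemann-sum approximation of (1.84) for the ideal rectangular lowpass filter."""
    H1 = (np.abs(w) <= a).astype(float)    # separable frequency response factors
    H2 = (np.abs(w) <= b).astype(float)
    g1 = np.sum(H1 * np.exp(1j * w * n1)) / M
    g2 = np.sum(H2 * np.exp(1j * w * n2)) / M
    return (g1 * g2).real

def h_closed(n1, n2):
    """Closed form (1.86): sin(a n1)/(pi n1) * sin(b n2)/(pi n2), with the n = 0 limits."""
    s1 = a / np.pi if n1 == 0 else np.sin(a * n1) / (np.pi * n1)
    s2 = b / np.pi if n2 == 0 else np.sin(b * n2) / (np.pi * n2)
    return s1 * s2

for (n1, n2) in [(0, 0), (1, 0), (3, -2)]:
    print(n1, n2, h_ideal(n1, n2), h_closed(n1, n2))   # values agree to a few decimals
```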

Example 6  As a slightly more complex example, consider the problem of determining the impulse response of the ideal circular lowpass filter given by

$$H(\omega_1, \omega_2) = \begin{cases} 1, & \omega_1^2 + \omega_2^2 \le R^2 < \pi^2 \\ 0, & \text{otherwise} \end{cases}$$   (1.87)

    This frequency response, which is not separable, is shown in Figure 1.21. In this example,

$$h(n_1, n_2) = \frac{1}{4\pi^2} \iint_A \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$   (1.88)
The integral over the circular region A is more easily


performed if ω_1 and ω_2 are replaced by polar coordinate variables. Therefore, we define ω_1 = ω cos φ and ω_2 = ω sin φ, and let θ denote the angle of the point (n_1, n_2), so that ω_1 n_1 + ω_2 n_2 = ω√(n_1² + n_2²) cos(θ − φ).

With these definitions, (1.88) becomes
$$h(n_1, n_2) = \frac{1}{4\pi^2} \int_0^R \int_0^{2\pi} \omega \exp\!\left[ j\omega\sqrt{n_1^2 + n_2^2}\, \cos(\theta - \phi) \right] d\phi\, d\omega = \frac{1}{2\pi} \int_0^R \omega\, J_0\!\left(\omega\sqrt{n_1^2 + n_2^2}\right) d\omega = \frac{R}{2\pi} \frac{J_1\!\left(R\sqrt{n_1^2 + n_2^2}\right)}{\sqrt{n_1^2 + n_2^2}}$$   (1.89)

where J_0(x) and J_1(x) are the Bessel functions of the first kind of orders 0 and 1, respectively. This impulse response is a sampled, circularly symmetric function. Along the n_1-axis, it has the form

$$h(n_1, 0) = \frac{R}{2\pi} \frac{J_1(R n_1)}{n_1}$$   (1.90)
which is sketched in Figure 1.22.

Figure 1.22 Impulse response for the circularly symmetric lowpass filter of Example 6 evaluated along the n_1-axis.
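Evaluating (1.89) numerically requires a first-order Bessel function. The sketch below assumes SciPy's scipy.special.j1 is available and simply tabulates the impulse response along the n_1-axis, as in (1.90); the cutoff radius R is an arbitrary choice.

```python
import numpy as np
from scipy.special import j1   # Bessel function of the first kind, order 1

R = 0.5 * np.pi                # cutoff radius of the ideal circular lowpass filter, R < pi

def h_circular(n1, n2):
    """Impulse response (1.89) of the ideal circular lowpass filter."""
    r = np.hypot(n1, n2)
    if r == 0:
        # lim_{r->0} (R/2pi) J1(R r)/r = R^2/(4 pi), since J1(x) ~ x/2 for small x
        return R**2 / (4 * np.pi)
    return (R / (2 * np.pi)) * j1(R * r) / r

# Samples along the n1-axis, cf. equation (1.90) and Figure 1.22
for n1 in range(0, 6):
    print(n1, h_circular(n1, 0))
```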

1.3.3 Multidimensional Fourier Transform

In Section 1.2.1 we saw that an arbitrary 2-D sequence could be written as the sum of weighted and shifted impulses as in equation (1.25). An LSI system will respond to each impulse with its impulse response, appropriately weighted. Thus the output sequence can be interpreted as the superposition of the weighted and shifted impulse responses.

    In this section we shall demonstrate that a 2-D sequence can, in most practical cases, be written as a weighted sum of complex sinusoids using the multidimensional


    Fourier transform. Since we know the response of an LSI system to a sinusoidal input, we can write the output sequence as the superposition of the sinusoidal responses of the LSI system.

If we look carefully at the inverse frequency-response operator given by (1.84), we see that, in addition to providing a formula for h(n_1, n_2), it also represents the sequence h as a superposition of complex sinusoids. Let us use a similar representation for the input sequence x and write

$$x(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} X(\omega_1, \omega_2) \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$   (1.91)
The complex function X, which is called the 2-D Fourier transform of x, can be evaluated using

$$X(\omega_1, \omega_2) = \sum_{n_1=-\infty}^{\infty} \sum_{n_2=-\infty}^{\infty} x(n_1, n_2) \exp(-j\omega_1 n_1 - j\omega_2 n_2)$$   (1.92)

    With this definition, we see that the frequency response of an LSI system is the Fourier transform of the system's impulse response.

Now, suppose that we have a 2-D LSI system L[·] which possesses an impulse response h(n_1, n_2) and a frequency response H(ω_1, ω_2). We know that
$$L[\exp(j\omega_1 n_1 + j\omega_2 n_2)] = H(\omega_1, \omega_2) \exp(j\omega_1 n_1 + j\omega_2 n_2)$$   (1.93)
Using the property of linearity in conjunction with the representation of x(n_1, n_2) as the integral of weighted complex sinusoids, equation (1.91), we can write

$$y(n_1, n_2) = L[x(n_1, n_2)] = L\!\left[ \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} X(\omega_1, \omega_2) \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2 \right]$$
$$= \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} X(\omega_1, \omega_2)\, L[\exp(j\omega_1 n_1 + j\omega_2 n_2)]\, d\omega_1\, d\omega_2$$   (1.94)

    Finally, using equation (1.93), we get

$$y(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} H(\omega_1, \omega_2)\, X(\omega_1, \omega_2) \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$   (1.95)
We have tacitly assumed that X(ω_1, ω_2) and H(ω_1, ω_2) are well defined. This allowed us to interchange the order of the integration and the linear operator L[·] in equation (1.94).

Equation (1.95) gives us an alternative way of expressing the output of an LSI system. The relative weighting of the complex sinusoidal components comprising the input sequence has been altered by multiplication with the system's frequency response H(ω_1, ω_2). Naturally, the output sequence computed by equation (1.95) is identical to the output sequence computed by the convolution sum equations (1.36) and (1.37) (see Problem 1.18).


The output sequence y(n_1, n_2) may also be written in terms of its Fourier transform Y(ω_1, ω_2) as

$$y(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} Y(\omega_1, \omega_2) \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$   (1.96)
A comparison of equations (1.95) and (1.96) implies that

$$Y(\omega_1, \omega_2) = H(\omega_1, \omega_2)\, X(\omega_1, \omega_2)$$   (1.97)
if y = h ** x. This result, often referred to as the convolution theorem, is extremely important; the Fourier transform of the convolution of two 2-D sequences is the product of their Fourier transforms.
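For finite-extent sequences the convolution theorem can be observed on DFT samples of the Fourier transforms: if both sequences are zero-padded to the support of their linear convolution, the DFT of y = h ** x equals the product of the DFTs of h and x. A minimal NumPy sketch with arbitrary test arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 5))
x = rng.standard_normal((6, 7))

# Linear 2-D convolution y = h ** x by direct summation
P1, P2 = h.shape[0] + x.shape[0] - 1, h.shape[1] + x.shape[1] - 1
y = np.zeros((P1, P2))
for k1 in range(h.shape[0]):
    for k2 in range(h.shape[1]):
        y[k1:k1 + x.shape[0], k2:k2 + x.shape[1]] += h[k1, k2] * x

# The (P1 x P2)-point DFT samples H(w1, w2) and X(w1, w2) on a frequency grid,
# so Y = H * X there, in agreement with the convolution theorem (1.97).
Y  = np.fft.fft2(y)
HX = np.fft.fft2(h, (P1, P2)) * np.fft.fft2(x, (P1, P2))
print(np.allclose(Y, HX))   # True
```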

    The Fourier transform defined by (1.92) can be shown to exist whenever the sequence x(n 1, n2) is absolutely summable.

$$\sum_{n_1} \sum_{n_2} |x(n_1, n_2)| < \infty$$


Spatial shift. If
$$x(n_1, n_2) \leftrightarrow X(\omega_1, \omega_2)$$
then
$$x(n_1 - m_1, n_2 - m_2) \leftrightarrow X(\omega_1, \omega_2) \exp(-j\omega_1 m_1 - j\omega_2 m_2)$$   (1.103)
Shifting a sequence x(n_1, n_2) by an amount (m_1, m_2) corresponds to multiplying its Fourier transform X(ω_1, ω_2) by the linear-phase term exp(−jω_1 m_1 − jω_2 m_2).

Modulation.
$$x(n_1, n_2) \exp(j\theta_1 n_1 + j\theta_2 n_2) \leftrightarrow X(\omega_1 - \theta_1, \omega_2 - \theta_2)$$   (1.104)

    Multiplying a sequence by a complex sinusoidal sequence corresponds to shifting its Fourier transform.

Multiplication.
$$c(n_1, n_2)\, x(n_1, n_2) \leftrightarrow \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} X(\theta_1, \theta_2)\, C(\omega_1 - \theta_1, \omega_2 - \theta_2)\, d\theta_1\, d\theta_2 = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} X(\omega_1 - \theta_1, \omega_2 - \theta_2)\, C(\theta_1, \theta_2)\, d\theta_1\, d\theta_2$$   (1.105)

The multiplication of two sequences results in the convolution of their Fourier transforms, as indicated in (1.105). Note that the convolution integral has a special form; the integrand is doubly periodic and the integral extends over exactly one period of the integrand. The property of modulation (1.104) can be regarded as a special case of the multiplication of two sequences.

Differentiation of the Fourier transform.
$$-j n_1\, x(n_1, n_2) \leftrightarrow \frac{\partial X(\omega_1, \omega_2)}{\partial \omega_1}$$   (1.106a)
$$-j n_2\, x(n_1, n_2) \leftrightarrow \frac{\partial X(\omega_1, \omega_2)}{\partial \omega_2}$$   (1.106b)
$$-n_1 n_2\, x(n_1, n_2) \leftrightarrow \frac{\partial^2 X(\omega_1, \omega_2)}{\partial \omega_1\, \partial \omega_2}$$   (1.106c)

Transposition.
$$x(n_2, n_1) \leftrightarrow X(\omega_2, \omega_1)$$   (1.107)

Reflection.
$$x(-n_1, n_2) \leftrightarrow X(-\omega_1, \omega_2)$$   (1.108a)
$$x(n_1, -n_2) \leftrightarrow X(\omega_1, -\omega_2)$$   (1.108b)
$$x(-n_1, -n_2) \leftrightarrow X(-\omega_1, -\omega_2)$$   (1.108c)


Complex conjugation.
$$x^*(n_1, n_2) \leftrightarrow X^*(-\omega_1, -\omega_2)$$   (1.109)

Real and imaginary parts.
$$\mathrm{Re}[x(n_1, n_2)] \leftrightarrow \tfrac{1}{2}[X(\omega_1, \omega_2) + X^*(-\omega_1, -\omega_2)]$$   (1.110a)
$$j\,\mathrm{Im}[x(n_1, n_2)] \leftrightarrow \tfrac{1}{2}[X(\omega_1, \omega_2) - X^*(-\omega_1, -\omega_2)]$$   (1.110b)
$$\tfrac{1}{2}[x(n_1, n_2) + x^*(-n_1, -n_2)] \leftrightarrow \mathrm{Re}[X(\omega_1, \omega_2)]$$   (1.111a)
$$\tfrac{1}{2}[x(n_1, n_2) - x^*(-n_1, -n_2)] \leftrightarrow j\,\mathrm{Im}[X(\omega_1, \omega_2)]$$   (1.111b)

In the special case when x(n_1, n_2) is a real-valued sequence, these relationships imply that

$$X(\omega_1, \omega_2) = X^*(-\omega_1, -\omega_2)$$   (1.112a)
$$\mathrm{Re}[X(\omega_1, \omega_2)] = \mathrm{Re}[X(-\omega_1, -\omega_2)]$$   (1.112b)
$$\mathrm{Im}[X(\omega_1, \omega_2)] = -\mathrm{Im}[X(-\omega_1, -\omega_2)]$$   (1.112c)

The real part of the Fourier transform possesses even symmetry with respect to the origin, and the imaginary part possesses odd symmetry with respect to the origin. When x(n_1, n_2) is real valued, the left sides of (1.111a) and (1.111b) become the even and odd parts of x(n_1, n_2), respectively.
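These symmetry relations can also be observed on DFT samples of X(ω_1, ω_2). The sketch below (an illustration assuming NumPy) verifies X(ω_1, ω_2) = X*(−ω_1, −ω_2) for a real-valued test array, where negating a frequency index k corresponds to the DFT index (−k) mod N.

```python
import numpy as np

x = np.random.default_rng(1).standard_normal((8, 8))   # real-valued sequence
X = np.fft.fft2(x)

# Build the array whose (k1, k2) entry is conj(X[(-k1) mod N, (-k2) mod N]).
X_reflected_conj = np.conj(np.roll(np.flip(X, axis=(0, 1)), 1, axis=(0, 1)))
print(np.allclose(X, X_reflected_conj))                 # True, as in (1.112a)
```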

Parseval's theorem. If
$$x(n_1, n_2) \leftrightarrow X(\omega_1, \omega_2) \qquad \text{and} \qquad w(n_1, n_2) \leftrightarrow W(\omega_1, \omega_2)$$
then
$$\sum_{n_1} \sum_{n_2} x(n_1, n_2)\, w^*(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} X(\omega_1, \omega_2)\, W^*(\omega_1, \omega_2)\, d\omega_1\, d\omega_2$$   (1.113)

This remarkable relationship can be interpreted and applied in a variety of ways. The left side of equation (1.113) defines an inner product between two 2-D sequences, while the right side defines an inner product between two 2-D Fourier transforms. Parseval's theorem says that inner products are preserved by the Fourier transform operation.

Equation (1.113) reduces to the convolution theorem when w(n_1, n_2) is chosen to be h*(m_1 − n_1, m_2 − n_2), as in Problem 1.19.

Another important special case occurs when w(n_1, n_2) = x(n_1, n_2), so that equation (1.113) becomes

$$\sum_{n_1} \sum_{n_2} |x(n_1, n_2)|^2 = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} |X(\omega_1, \omega_2)|^2\, d\omega_1\, d\omega_2$$   (1.114)

The left side of equation (1.114) can be interpreted as the total energy in the discrete signal x(n_1, n_2). The function |X(ω_1, ω_2)|² can be interpreted as the energy-density spectrum, since its integral is equal to the total energy in the signal.
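The discrete analog of (1.114) for an N_1 × N_2-point DFT is Σ|x(n_1, n_2)|² = (1/(N_1 N_2)) Σ|X(k_1, k_2)|², which is easy to confirm numerically; the sketch below assumes NumPy and an arbitrary test array.

```python
import numpy as np

x = np.random.default_rng(2).standard_normal((16, 12))
X = np.fft.fft2(x)

energy_space = np.sum(np.abs(x)**2)
energy_freq  = np.sum(np.abs(X)**2) / (x.shape[0] * x.shape[1])   # discrete analog of (1.114)
print(np.allclose(energy_space, energy_freq))                     # True
```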


1.4 SAMPLING CONTINUOUS 2-D SIGNALS

Nearly all discrete sequences are formed in an attempt to represent some underlying continuous signal. Many discrete representations of continuous signals are possible (Fourier series expansions, Taylor series expansions, and expansions in terms of nonsinusoidal orthogonal functions, for example), but periodic sampling is by far the representation used most often, partly due to the simplicity of its implementation. In this section we look at the relationships between continuous signals and the discrete sequences which are obtained from them by periodic sampling. We do this twice: first for the specific case of rectangular periodic sampling, and then for a more general case where different geometries are chosen for the sampling locations.

1.4.1 Periodic Sampling with Rectangular Geometry

Of the several ways to generalize 1-D periodic sampling to the 2-D case, the most straightforward is periodic sampling in rectangular coordinates, which we will simply call rectangular sampling. If x_a(t_1, t_2) is a 2-D continuous waveform, the discrete signal x(n_1, n_2) obtained from it by rectangular sampling is given by
$$x(n_1, n_2) = x_a(n_1 T_1, n_2 T_2)$$   (1.115)
where T_1 and T_2 are positive real constants known as the horizontal and vertical sampling intervals or periods. The sample locations in the (t_1, t_2)-plane are shown in Figure 1.23. For a sequence formed in this fashion, we would like to determine two

Figure 1.23 Sampling locations in the (t_1, t_2)-plane for rectangular sampling.

things: Can the waveform x_a(t_1, t_2) be recovered from x(n_1, n_2)? And how is the Fourier transform of x related to the Fourier transform of x_a?

    First, let us define the 2-D Fourier transform relations for continuous signals:

$$X_a(\Omega_1, \Omega_2) \triangleq \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_a(t_1, t_2) \exp(-j\Omega_1 t_1 - j\Omega_2 t_2)\, dt_1\, dt_2$$   (1.116)
$$x_a(t_1, t_2) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} X_a(\Omega_1, \Omega_2) \exp(j\Omega_1 t_1 + j\Omega_2 t_2)\, d\Omega_1\, d\Omega_2$$   (1.117)


Evaluating this inverse transform at the sampling locations gives
$$x(n_1, n_2) = x_a(n_1 T_1, n_2 T_2) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} X_a(\Omega_1, \Omega_2) \exp(j\Omega_1 n_1 T_1 + j\Omega_2 n_2 T_2)\, d\Omega_1\, d\Omega_2$$   (1.118)

Next, we will manipulate this expression into the form of an inverse Fourier transform for discrete signals. We can begin by making the substitutions ω_1 = Ω_1 T_1 and ω_2 = Ω_2 T_2 to get the exponential terms into the correct form. This yields

$$x(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{1}{T_1 T_2}\, X_a\!\left(\frac{\omega_1}{T_1}, \frac{\omega_2}{T_2}\right) \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$   (1.119)
The double integral over the entire (ω_1, ω_2)-plane can be broken into an infinite series of integrals, each of which is over a square of area 4π². Let SQ(k_1, k_2) represent the square −π + 2πk_1 < ω_1 < π + 2πk_1, −π + 2πk_2 < ω_2 < π + 2πk_2. Then equation (1.119) can be written as

$$x(n_1, n_2) = \frac{1}{4\pi^2} \sum_{k_1} \sum_{k_2} \iint_{SQ(k_1, k_2)} \frac{1}{T_1 T_2}\, X_a\!\left(\frac{\omega_1}{T_1}, \frac{\omega_2}{T_2}\right) \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$

Replacing ω_1 by ω_1 − 2πk_1 and ω_2 by ω_2 − 2πk_2, we can remove the dependence of the limits of integration on k_1 and k_2, giving

$$x(n_1, n_2) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \int_{-\pi}^{\pi} \left[ \frac{1}{T_1 T_2} \sum_{k_1} \sum_{k_2} X_a\!\left(\frac{\omega_1 - 2\pi k_1}{T_1}, \frac{\omega_2 - 2\pi k_2}{T_2}\right) \exp(-j2\pi k_1 n_1 - j2\pi k_2 n_2) \right] \exp(j\omega_1 n_1 + j\omega_2 n_2)\, d\omega_1\, d\omega_2$$   (1.120)

The second exponential factor in equation (1.120) is seen to be equal to one for all values of the integer variables n_1, k_1, n_2, and k_2. Equation (1.120) now has the same form as an inverse Fourier transform, so we conclude that

$$X(\omega_1, \omega_2) = \frac{1}{T_1 T_2} \sum_{k_1} \sum_{k_2} X_a\!\left(\frac{\omega_1 - 2\pi k_1}{T_1}, \frac{\omega_2 - 2\pi k_2}{T_2}\right)$$   (1.121)

    or alternatively,

$$X(\Omega_1 T_1, \Omega_2 T_2) = \frac{1}{T_1 T_2} \sum_{k_1} \sum_{k_2} X_a\!\left(\Omega_1 - \frac{2\pi k_1}{T_1}, \Omega_2 - \frac{2\pi k_2}{T_2}\right)$$   (1.122)
Equation (1.122) gives us the relation we seek between the Fourier transforms of the continuous and discrete signals. The right side of this expression can be interpreted as a periodic extension of X_a(Ω_1, Ω_2), which yields the periodic function X(Ω_1 T_1, Ω_2 T_2).

Equation (1.122) can be further simplified in the case where the continuous signal x_a(t_1, t_2) is bandlimited. The Fourier transform X_a(Ω_1, Ω_2) of a bandlimited signal is equal to zero outside some region of finite extent in the (Ω_1, Ω_2)-plane. For simplicity, let us assume that the sampling periods T_1 and T_2 are chosen small enough so that
$$X_a(\Omega_1, \Omega_2) = 0 \qquad \text{for } |\Omega_1| \ge \frac{\pi}{T_1} \text{ or } |\Omega_2| \ge \frac{\pi}{T_2}$$   (1.123)
Then equation (1.122) becomes simply


$$X(\Omega_1 T_1, \Omega_2 T_2) = \frac{1}{T_1 T_2}\, X_a(\Omega_1, \Omega_2)$$   (1.124)
for |Ω_1| < π/T_1 and |Ω_2| < π/T_2.

The values of X(Ω_1 T_1, Ω_2 T_2) outside this region are given by the periodicity of X(Ω_1 T_1, Ω_2 T_2).

In Figure 1.24(a), we see a sketch of the Fourier transform of a continuous bandlimited signal. Forming the periodic extension of this transform gives us the periodic function pictured in Figure 1.24(b).

Figure 1.24 (a) Fourier transform of a continuous bandlimited signal. (b) Periodic extension of the transform.

As long as X_a(Ω_1, Ω_2) satisfies equation (1.123), X_a(Ω_1, Ω_2) can be recovered from X(Ω_1 T_1, Ω_2 T_2) by inverting equation (1.124) to get
$$X_a(\Omega_1, \Omega_2) = \begin{cases} T_1 T_2\, X(\Omega_1 T_1, \Omega_2 T_2), & |\Omega_1| < \pi/T_1 \text{ and } |\Omega_2| < \pi/T_2 \\ 0, & \text{otherwise} \end{cases}$$   (1.125)


For notational convenience, we have defined W_1 ≜ π/T_1 and W_2 ≜ π/T_2. Now, we proceed by expressing X(Ω_1 T_1, Ω_2 T_2) in terms of x(n_1, n_2):

$$x_a(t_1, t_2) = \frac{T_1 T_2}{4\pi^2} \sum_{n_1} \sum_{n_2} x(n_1, n_2) \int_{-W_1}^{W_1} \int_{-W_2}^{W_2} \exp[j\Omega_1(t_1 - n_1 T_1) + j\Omega_2(t_2 - n_2 T_2)]\, d\Omega_2\, d\Omega_1$$
$$= \sum_{n_1} \sum_{n_2} x(n_1, n_2)\, \frac{\sin[W_1(t_1 - n_1 T_1)]}{W_1(t_1 - n_1 T_1)}\, \frac{\sin[W_2(t_2 - n_2 T_2)]}{W_2(t_2 - n_2 T_2)}$$   (1.127)

Equations (1.115), (1.125), and (1.127), taken together, form the basis of the 2-D sampling theorem. It states that a bandlimited continuous signal may be reconstructed from its sample values. The sampling periods T_1 and T_2 must be small enough, or equivalently the sampling frequencies 2W_1 and 2W_2 must be large enough, to ensure that condition (1.123) is true.
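A minimal sketch of the rectangular sampling theorem, assuming NumPy: a bandlimited test signal of my own choosing (a sum of sinusoids whose frequencies lie inside the baseband) is sampled according to (1.115) and then evaluated at an off-grid point with the interpolation formula (1.127). The sample window is finite, so the reconstruction is exact only up to a small truncation error.

```python
import numpy as np

T1, T2 = 0.5, 0.25                       # sampling periods; baseband |O1| < pi/T1, |O2| < pi/T2
W1, W2 = np.pi / T1, np.pi / T2

def xa(t1, t2):
    # bandlimited test signal: all frequencies lie well inside the baseband
    return np.cos(3 * t1 + 7 * t2) + 0.5 * np.sin(2 * t1 - 5 * t2)

# Rectangular sampling (1.115), over a window large enough for the sinc tails to decay
n1 = np.arange(-60, 61)
n2 = np.arange(-60, 61)
x = xa(T1 * n1[:, None], T2 * n2[None, :])

def reconstruct(t1, t2):
    """Evaluate the interpolation formula (1.127) at an arbitrary point (t1, t2)."""
    # np.sinc(u) = sin(pi u)/(pi u), so np.sinc((t - n T)/T) = sin(W(t - n T))/(W(t - n T))
    s1 = np.sinc((t1 - n1 * T1) / T1)
    s2 = np.sinc((t2 - n2 * T2) / T2)
    return s1 @ x @ s2

t1, t2 = 0.37, -0.81                      # an off-grid point
print(xa(t1, t2), reconstruct(t1, t2))    # values agree up to a small truncation error
```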

A continuous signal that is not bandlimited may still be sampled, of course, but in this case equations (1.124) and (1.125) will not be true, since contributions from other replicas of X_a(Ω_1, Ω_2) in the periodic extension (1.122) will fold into the region |Ω_1 T_1| < π, |Ω_2 T_2| < π. As in one-dimensional signal processing, this condition is called aliasing, since high-frequency components of X_a(Ω_1, Ω_2) will masquerade as low-frequency components in X(Ω_1 T_1, Ω_2 T_2).

    1.4.2 Periodic Sampling with Arbitrary Sampling Geometries

The concept of rectangular sampling can be generalized in a straightforward manner. If we define two linearly independent vectors v_1 = (v_11, v_21)' and v_2 = (v_12, v_22)', we can write the locations of a doubly periodic set of samples in the (t_1, t_2)-plane as

$$t_1 = v_{11} n_1 + v_{12} n_2$$   (1.128a)
$$t_2 = v_{21} n_1 + v_{22} n_2$$   (1.128b)
Using vector notation, we can express these relations as
$$t = Vn$$   (1.129)
where t = (t_1, t_2)', n = (n_1, n_2)', and V is a matrix made up of the sampling vectors v_1 and v_2:

$$V = [v_1 \ \ v_2] = \begin{bmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{bmatrix}$$   (1.130)
Because v_1 and v_2 were chosen to be linearly independent, the determinant of V is nonzero. We shall refer to V as the sampling matrix.

Sampling a continuous signal x_a(t) produces the discrete signal
$$x(n) = x_a(Vn)$$   (1.131)


Figure 1.25 Sample locations in the (t_1, t_2)-plane determined by the vectors v_1 and v_2 which comprise the sampling matrix V.

The sampling locations are shown in Figure 1.25. Again, we can ask the questions: How are the Fourier transforms of x(n) and x_a(t) related? Under what circumstances can we reconstruct x_a(t) from its samples x(n)? We proceed as before, by first defining the two-dimensional Fourier transform

$$X_a(\Omega) \triangleq \int_{-\infty}^{\infty} x_a(t) \exp(-j\Omega' t)\, dt$$   (1.132a)
and observing that
$$x_a(t) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} X_a(\Omega) \exp(j\Omega' t)\, d\Omega$$   (1.132b)
where the frequency vector Ω is given by (Ω_1, Ω_2)'. [Note that the integrals in (1.132) are really double integrals because the differential quantities dt and dΩ are vectors.]

The Fourier transform of x(n) can be written in vector notation as
$$X(\omega) \triangleq \sum_{n} x(n) \exp(-j\omega' n)$$   (1.133a)
where ω = (ω_1, ω_2)'. Then
$$x(n) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} X(\omega) \exp(j\omega' n)\, d\omega$$   (1.133b)

Since x(n) is obtained from x_a(t) by sampling, we can write
$$x(n) = x_a(Vn) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} X_a(\Omega) \exp(j\Omega' Vn)\, d\Omega$$
Making the substitution ω = V'Ω yields

$$x(n) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \frac{1}{|\det V|}\, X_a\!\left((V')^{-1}\omega\right) \exp(j\omega' n)\, d\omega$$   (1.134)
As before, we perform the integration over the ω-plane as an infinite series of integrations over square areas. The result is analogous to equation (1.120).


$$x(n) = \frac{1}{4\pi^2} \int_{-\pi}^{\pi} \left[ \frac{1}{|\det V|} \sum_{k} X_a\!\left((V')^{-1}(\omega - 2\pi k)\right) \exp(-j2\pi k' n) \right] \exp(j\omega' n)\, d\omega$$   (1.135)

where k is an integer-valued vector. Again the second exponential factor is always equal to 1, so that a comparison of equations (1.135) and (1.133b) implies that

$$X(\omega) = \frac{1}{|\det V|} \sum_{k} X_a\!\left((V')^{-1}(\omega - 2\pi k)\right)$$   (1.136)
or equivalently,

$$X(V'\Omega) = \frac{1}{|\det V|} \sum_{k} X_a(\Omega - Uk)$$   (1.137)

where U is a matrix that satisfies
$$U'V = 2\pi I$$   (1.138)

and I is the 2 × 2 identity matrix. Equation (1.137) provides the desired relation between the Fourier transforms of x(n) and x_a(t).

When rectangular sampling is used, the matrices V and U become
$$V = \begin{bmatrix} T_1 & 0 \\ 0 & T_2 \end{bmatrix}, \qquad \det V = T_1 T_2, \qquad U = \begin{bmatrix} \dfrac{2\pi}{T_1} & 0 \\ 0 & \dfrac{2\pi}{T_2} \end{bmatrix} = \begin{bmatrix} 2W_1 & 0 \\ 0 & 2W_2 \end{bmatrix}$$

and equation (1.137) reduces to equation (1.122). X(V'Ω) can again be interpreted as a periodic extension of X_a(Ω), but now the periodicity is described by the general matrix U, which can be thought of as a set of two periodicity vectors u_1 and u_2:
$$U = [u_1 \ \ u_2]$$   (1.139)
Since X(ω) is periodic in both ω_1 and ω_2 with a period of 2π, it follows that

X(V'Ω) is periodic in Ω with the periodicity matrix U:
$$X(V'(\Omega + Uk)) = X(V'\Omega + 2\pi k) = X(V'\Omega)$$

    Consider the continuous signal xa


Figure 1.26 Periodic function X(V'Ω), plotted as a function of Ω for the case where the sampling matrix V is given by equation (1.140).

At this point, we again consider the important case where the continuous signal x_a(t) is bandlimited. The Fourier transform X_a(Ω) is identically zero outside a region of finite extent B, which we shall call the baseband. By varying the sampling matrix V, we can adjust the periodicity matrix U so that there is no overlap among the periodically repeated versions of X_a(Ω) on the right side of equation (1.137).

By varying U in this manner, we ensure that there is no aliasing. Then equation (1.137) becomes simply

$$X(V'\Omega) = \frac{1}{|\det V|}\, X_a(\Omega)$$   (1.142)

for values of V'Ω lying in the square centered on the origin with sides of length 2π. Consequently, X_a(Ω) can be recovered from X(V'Ω), and therefore the continuous signal x_a(t) can be recovered from the sequence x(n). We can write

$$X_a(\Omega) = \begin{cases} |\det V|\, X(V'\Omega), & \Omega \in B \\ 0, & \text{otherwise} \end{cases}$$   (1.143)

By taking the inverse Fourier transform of both sides and expressing X(V'Ω) in terms of the sample values x(n), we get an equation analogous to (1.127): namely,

$$x_a(t) = \frac{|\det V|}{4\pi^2} \sum_{n} x(n) \int_B \exp[j\Omega'(t - Vn)]\, d\Omega$$   (1.144)

    where the integral is over the baseband B in the frequency plane. For notational ease, let us rewrite (1.144) as

$$x_a(t) = \sum_{n} x(n)\, f(t - Vn)$$   (1.145)

where
$$f(t) = \frac{|\det V|}{4\pi^2} \int_B \exp(j\Omega' t)\, d\Omega$$   (1.146)


The interpolating function f(t) allows us to construct the values of x_a(t) at points in between the sample locations given by t = Vn.

Let us summarize this more general derivation. We have a bandlimited, continuous signal x_a(t). Its Fourier transform X_a(Ω) is zero outside the region B in the Ω frequency plane. We want to represent x_a(t) by a sequence of sample values x(n). To do this, we must find an appropriate sampling matrix V that will allow us to recover x_a(t) from x(n) using equation (1.145).

From equation (1.137) we see that the Fourier transform of the discrete signal x(n) is equal to a scaled and periodically repeated version of X_a(Ω). The periodicity matrix U represents the two linearly independent directions in which X_a(Ω) will be replicated. To achieve our goal, we must choose U so that there is no overlap among the replicated versions of X_a(Ω), thus avoiding aliasing. In this case, X_a(Ω) satisfies equation (1.143).

The choice of the periodicity matrix U determines the sampling matrix V, since U and V are related by equation (1.138). The choice of U is not unique, in general; an adequate density of samples in the t-plane will allow us to represent any bandlimited signal with several sampling geometries. However, it is often desirable to represent x_a(t) with as few samples as possible. It can be shown that the density of samples per unit area is given by 1/|det V|. Minimizing this quantity is equivalent to minimizing |det U|. Therefore, to provide an efficient sampling scheme for a bandlimited signal, we choose the periodicity matrix U which has the smallest value of |det U| and which avoids aliasing for the particular shape of the signal's baseband B.
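As a numerical illustration (the specific matrices below are my own choices, in the spirit of the rectangular/hexagonal comparison taken up in Section 1.4.3): for a signal whose baseband is the circle |Ω| < W, both a rectangular and a hexagonal periodicity matrix U avoid aliasing, but the hexagonal choice has the smaller |det U| and therefore the lower sample density 1/|det V|. The sketch assumes NumPy and uses (1.138) to recover V from U.

```python
import numpy as np

W = 1.0                                    # circular baseband: Xa(O) = 0 for |O| >= W

# Periodicity matrices U whose columns give the replication directions of Xa(O).
# Rectangular packing of the spectral circles:
U_rect = 2 * W * np.array([[1.0, 0.0],
                           [0.0, 1.0]])
# Hexagonal packing (adjacent replicas are still 2W apart, so no overlap/aliasing):
U_hex = 2 * W * np.array([[1.0, 0.5],
                          [0.0, np.sqrt(3) / 2]])

for name, U in [("rectangular", U_rect), ("hexagonal", U_hex)]:
    V = 2 * np.pi * np.linalg.inv(U.T)     # sampling matrix from U'V = 2*pi*I, eq. (1.138)
    assert np.allclose(U.T @ V, 2 * np.pi * np.eye(2))
    density = 1.0 / abs(np.linalg.det(V))  # samples per unit area in the t-plane
    print(name, density)

# The hexagonal density is sqrt(3)/2 of the rectangular one (about a 13% saving).
```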

The derivation of the generalized sampling theorem is easily extended to M-dimensional signals. Since we have used vector notation, the only significant change in the equations would be the replacement of the constant 4π² by the more general constant (2π)^M.

Example 7


    The preparation of marine seismic maps can result in data which is periodically sampled in an unusual fashion. In Figure 1.27, a boat towing a line of sensors moves with a speed B knots in a direction perpendicular to an ocean current of speed C knots. The sensors are uniformly spaced along the line with a spacing D and all are

Figure 1.27 (a) Scenario for Example 7. (b) Resulting sampling raster, with R = D cos θ and S = D sin θ.


    periodically sampled and digitally recorded. Denote the temporal sampling period by T. How is the underlying spatial process sampled?

    To a first approximation, the sensors remain in a straight line, but that line is directed away from the direction of the boat movement, at an angle

$$\theta = \tan^{-1}\frac{C}{B}$$
The resulting sampling grid is shown in Figure 1.27(b).