
BIDIRECTIONAL REFLECTANCE MODELING OF FOREST CANOPIES USING BOOLEAN MODELS AND GEOMETRIC OPTICS

Alan H. Strahler
Center for Remote Sensing, Boston University
725 Commonwealth Avenue, Boston, MA 02215 USA

David L. B. Jupp
Division of Water Resources, C.S.I.R.O.
Box 1666, Canberra, ACT 2601 Australia

ABSTRACT

Principles of geometric optics and Boolean models for random sets in a three-dimensional space provide the mathematical basis for a model of the bidirectional radiance or reflectance of a forest or woodland. The model may be defined at two levels: whole-canopy and individual-leaf. At the whole-canopy level, the scene includes four components: sunlit and shadowed canopy, and sunlit and shadowed background. The reflectance of the scene is modeled as the sum of the reflectances of the individual components as weighted by their areal proportions in the field of view. At the leaf level, the canopy envelope is an assemblage of leaves, and thus the reflectance is a function of the areal proportions of sunlit and shadowed leaf, and sunlit and shadowed background. Because the proportions of scene components are dependent upon the directions of irradiance and exitance, the model accounts for the “hotspot” that is well-known in leaf and tree canopies.

Keywords: Directional reflectance; hotspot; BRDF; forest canopy.

INTRODUCTION

Mathematical modeling of the interaction of electromagnetic radiation with vegetation canopies is a research field that has been highly active in recent years. Modeling a canopy as a “turbid medium” of leaf and foliage elements in a slab geometry, the general equations of radiative transfer [1] have been approximated and solved in various ways using various descriptions of the canopy. During the past several years, we have pursued the development of a series of vegetation canopy models that treat the vegetation cover as a complex assemblage of large, three-dimensional, vegetation-filled objects (e.g., trees) that are largely separated and distinct from one another and are placed on a contrasting background [2]. This type of model is most appropriate to the case of forests and woodlands, in which the vegetation cover is not at all homogeneous in the plane.

With such a premise, it is possible to use optical principles and parallel-ray geometry in connection with Boolean models of random sets in a three-dimensional space to model the bidirectional radiance or reflectance of forests and woodlands. In this model, the forest is treated as a collection of regular geometric shapes that cast shadows on a background and are viewed and illuminated from different directions within the hemisphere [3], [4].


The directional radiance of the forest or woodland is then dependent on the mixture of four components (sunlit and shaded tree crown, and sunlit and shaded background) that is seen from a given viewing position. The areal proportions of these four components, for given illumination and viewing directions, will be a function of the sizes, shapes, orientations, and placements of the objects (i.e., individual tree crowns) within the scene. Moreover, if size, shape, and orientation are fixed or characterized by distributions with known parameters, and object centers are distributed randomly, then the proportions may be estimated using the Boolean model of Serra [5]. This model accounts for the changes in proportions that occur with random overlapping of objects as the density of objects increases.

The principles of Serra’s Boolean models and geometric optics are easily extended to the case of leaves as objects in successive layers above a background, and thus the bidirectional reflectance or radiance of leaf canopies may be modeled using this approach as well. Again, the scene is modeled as consisting of four components: sunlit leaf, shaded leaf, sunlit background, and shaded background. As in the case of canopy envelopes as objects, shape, size, orientation, and spacing of leaves are the parameters that drive the estimation of bidirectional radiance or reflectance. This extension leads naturally to the formulation of two-stage models, in which leaves are objects inside canopy envelopes.

MODELING FOREST SIGNATURES

In developing models for the radiance or reflectance of forests as imaged by aircraft or spacecraft sensors, the simplest is the areal-proportion model. It is structured around two assumptions. First, there are only four kinds of ground covers, referred to as scene components: sunlit canopy (symbol C), shaded canopy (T), sunlit background (G), and shaded background (Z). Each is regarded as having a characteristic radiance. Second, the radiance of the scene, $I_S$, is a linear combination of the radiance or reflectance of each component weighted by its areal proportion as seen from a given viewpoint. That is,

$$I_S = K_C I_C + K_T I_T + K_G I_G + K_Z I_Z \qquad (1)$$

where $I_C$, $I_T$, $I_G$, and $I_Z$ indicate the radiances of the four components as named above, and $K$ indicates the proportion of each component within the scene.


Since there are only four components, $K_C + K_T + K_G + K_Z = 1$. Note that if the view is off-nadir, or the objects are located on a sloping background, then it will be necessary to include this geometry in modeling the areal proportions of the four components [6].
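To make the bookkeeping of Eq. (1) concrete, the short Python sketch below evaluates the linear mixture for a set of illustrative component radiances and proportions (the numbers are invented for the example, not measured values) and checks the unit-sum constraint on the proportions.

```python
# Minimal sketch of the areal-proportion model, Eq. (1).
# The radiance and proportion values below are illustrative only.

components = {
    "C": {"K": 0.30, "I": 120.0},  # sunlit canopy
    "T": {"K": 0.15, "I": 40.0},   # shaded canopy
    "G": {"K": 0.40, "I": 180.0},  # sunlit background
    "Z": {"K": 0.15, "I": 60.0},   # shaded background
}

# The proportions must sum to one: K_C + K_T + K_G + K_Z = 1.
assert abs(sum(c["K"] for c in components.values()) - 1.0) < 1e-9

I_S = sum(c["K"] * c["I"] for c in components.values())  # scene radiance
print(f"Scene radiance I_S = {I_S:.1f}")
```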

PROPORTIONS OF SCENE COMPONENTS

The mean and variance of areal proportions of the four scene components within an IFOV are determined by four factors: (1) the geometric size and shape of the objects in the scene; (2) the illumination and viewing angles; (3) the degree of overlapping of objects and components, which is a function of the density of objects as conditioned by (1) and (2); and (4) the size and shape of the instrument’s field of view.

Considering the case of randomly located, discrete objects of arbitrary shape within a layer of a three-dimensional medium, a Boolean model shows that the gap probability $q(\theta, \phi)$ between objects within a layer is $q(\theta, \phi) = e^{-\lambda \bar{A}(\theta, \phi)}$, where $\lambda$ is the density of object centers on a plane at the base of the layer, $\theta$ is the viewing zenith angle, $\phi$ is the viewing azimuth, and $\bar{A}(\theta, \phi)$ is the average area of an object projected at angle $\theta, \phi$ onto the base of the layer. The complement of this proportion, $1 - e^{-\lambda \bar{A}(\theta_v, \phi_v)}$, will be the proportion of the area within the field of view that is covered by trees. Thus, we may write the crown-covered proportion as

$$K_C + K_T = 1 - e^{-\lambda \bar{A}(\theta_v, \phi_v)}\;.$$

Let us now take the position of the sun as $\theta_s$, with azimuth angle $\phi_s$. Sunlight will illuminate a proportion of the background $q(\theta_s, \phi_s) = e^{-\lambda \bar{A}(\theta_s, \phi_s)}$; meanwhile, the proportion of the background visible to the viewer will be $q(\theta_v, \phi_v) = e^{-\lambda \bar{A}(\theta_v, \phi_v)}$. What proportion of the scene will be background that is both sunlit and viewed (i.e., $K_G$)? Serra’s Boolean model shows the sunlit and visible background proportion $K_G$ to be

$$K_G = \exp\big\{-\lambda\,[\bar{A}(\theta_s, \phi_s) + \bar{A}(\theta_v, \phi_v) - \bar{O}(\theta_s, \phi_s, \theta_v, \phi_v)]\big\} \qquad (4)$$

where $\bar{O}(\theta_s, \phi_s, \theta_v, \phi_v)$ is the mean area of overlap between the projected area of each object onto the background at angle $\theta_s, \phi_s$ and its projected area at angle $\theta_v, \phi_v$.

From this expression for $K_G$, the proportion of the scene in viewed, sunlit background area, we can immediately obtain the proportion in viewed but shadowed background area, since both sum to the visible gap fraction, $q(\theta_v, \phi_v)$. That is,

$$K_Z = q(\theta_v, \phi_v) - K_G\;. \qquad (5)$$
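Eqs. (4) and (5) translate directly into code. The sketch below assumes the mean projected areas $\bar{A}(\theta_s, \phi_s)$, $\bar{A}(\theta_v, \phi_v)$ and the mean overlap $\bar{O}$ have already been computed for the given geometry; the function names are ours, not part of any published implementation.

```python
import math

def gap_probability(lam, A_bar):
    """Boolean-model gap probability q = exp(-lambda * A_bar)."""
    return math.exp(-lam * A_bar)

def K_G(lam, A_s, A_v, O_sv):
    """Sunlit, viewed background proportion, Eq. (4)."""
    return math.exp(-lam * (A_s + A_v - O_sv))

def K_Z(lam, A_s, A_v, O_sv):
    """Shaded, viewed background proportion, Eq. (5): q(theta_v) - K_G."""
    return gap_probability(lam, A_v) - K_G(lam, A_s, A_v, O_sv)
```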

To specify the remaining proportions $K_C$ and $K_T$, we must assume some knowledge of the shape of the objects. Here we assume that the trees are spheroids-on-sticks, with known ratios of $b/r$ and $b/h$, where $r$ is the horizontal radius of the spheroid, $b$ is the vertical radius, and $h$ is the height of the stick (Figure 1). Note also that if the objects are taken to be spheroids-on-sticks, then their projected areas will be dependent only on the zenith angles $\theta$; further, the overlap areas will be dependent only on the zenith angles and the relative azimuth between illumination and viewing positions.

We may thus simplify the notation a bit by letting $\phi = \phi_v - \phi_s$ and changing the explicit specification of the viewing and illumination positions to $\theta_v$, $\theta_s$, and $\phi$ (unsubscripted).

Let the function $\tan\theta' = (b/r)\tan\theta$ define angles $\theta_s'$ and $\theta_v'$. For the nadir view, $A(0) = \pi r^2$, and for the off-nadir view, $A(\theta) = A(0)/\cos\theta'$. The mean area, $\bar{A}(\theta)$, can be determined without difficulty provided that the $r$-distribution is known, as will be the case in the forward application of the model.
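The projected shadow area of a single spheroid-on-a-stick follows directly from the two relations just given; a minimal Python version (the function and argument names are ours) is:

```python
import math

def shadow_area(r, b, theta):
    """Area of the shadow cast by a spheroid with horizontal radius r and
    vertical radius b, projected at zenith angle theta (radians):
    tan(theta') = (b / r) * tan(theta),  A(theta) = pi * r**2 / cos(theta')."""
    theta_prime = math.atan((b / r) * math.tan(theta))
    return math.pi * r * r / math.cos(theta_prime)
```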

The radiance of a crown can be modeled in a variety of ways. In the simplest mode, the composite crown is separated into sunlit and shaded fractions. One model for this separation is to use the visible areas of lit and shaded surface of a solid spheroid congruent with the canopy envelope (Figure 2). Although this is an initial approximation, it captures the observed phenomenon that trees have shaded areas on the side away from the sun and brighter areas on the side facing the sun. Considering the geometry of a spheroid viewed and illuminated at different angles, we may determine the relative proportion of the viewed spheroid in sunlit condition as $\frac{1}{2}(1 + \cos\psi)$, where $\psi$ is the phase angle between the direction vectors with zenith angles $\theta_s'$ and $\theta_v'$ and relative azimuth $\phi$. From this, it simply follows that

$$K_C = \tfrac{1}{2}(1 + \cos\psi)\,[1 - q(\theta_v)]$$

and

$$K_T = \tfrac{1}{2}(1 - \cos\psi)\,[1 - q(\theta_v)]\;.$$
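As a numerical sketch of this partition, the fragment below computes $\cos\psi$ from the transformed zenith angles and the relative azimuth using the spherical law of cosines (an assumed but standard form, since the original expression is not reproduced here) and then splits the crown cover $1 - q(\theta_v)$ into sunlit and shaded parts.

```python
import math

def cos_phase_angle(theta_s_p, theta_v_p, phi):
    """cos(psi) between the transformed sun and view directions
    (spherical law of cosines; assumed form, see lead-in)."""
    return (math.cos(theta_s_p) * math.cos(theta_v_p)
            + math.sin(theta_s_p) * math.sin(theta_v_p) * math.cos(phi))

def crown_fractions(cos_psi, q_v):
    """Split the crown cover 1 - q(theta_v) into sunlit (K_C) and shaded (K_T)."""
    cover = 1.0 - q_v
    K_C = 0.5 * (1.0 + cos_psi) * cover
    K_T = 0.5 * (1.0 - cos_psi) * cover
    return K_C, K_T
```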

These formulations ignore the mutual shading of spheroids by their neighbors, which is probably not unreasonable for a single layer of discrete canopies unless illumination and/or viewing positions are far from nadir.

Figure 1. Geometry of a spheroid-on-a-stick. $r$, $b$, $h$, shape parameters; $A(\theta)$, area of shadow of spheroid.


Figure 2. Nadir and off-nadir areas of illumination and viewing shadows as projected onto the ground plane. $A_C$, area of sunlit crown; $A_T$, area of shaded crown; $A_Z$, area of shaded background. A, nadir view; B, off-nadir view.

OVERLAP FUNCTION

There remains only to provide the function $O(\theta_s, \theta_v, \phi, r)$ for the overlap of a spheroid-on-a-stick’s shadow at angle $\theta_s$ with its shadow at $\theta_v, \phi$ and radius $r$. In general, this overlap, involving the area common to two ellipses at arbitrary orientation on a plane, will have a tedious formula. Jupp et al. have provided an approximation to this function that seems to work well. It relies upon the fact that it is easy to calculate the area of intersection of two circles of differing radii at a fixed distance. The approximation is to transform each ellipse to a disk of equivalent area, and to take the distance between the disk centers to be the same as the distance between the two ellipse centers.

Whether the exact elliptical overlap or the disk approximation is used, it is still necessary to obtain the mean overlap, $\bar{O}$. Here we consider the form to be constant ($b/h$, $b/r$ fixed). If the distribution of $r$ is not badly behaved, we may take $\bar{O}$ as the single value associated with $\bar{A}(\theta)$.
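A sketch of the equivalent-area-disk idea is given below: each projected ellipse is replaced by a disk of the same area, and the standard two-circle intersection formula is applied at the original distance between shadow centers. Computing that center distance from the crown height and the two angles is omitted here and assumed to be supplied by the caller.

```python
import math

def circle_overlap_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 and center distance d."""
    if d >= r1 + r2:
        return 0.0                               # disjoint circles
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2        # one circle inside the other
    a1 = math.acos((d * d + r1 * r1 - r2 * r2) / (2.0 * d * r1))
    a2 = math.acos((d * d + r2 * r2 - r1 * r1) / (2.0 * d * r2))
    kite = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                           * (d - r1 + r2) * (d + r1 + r2))
    return r1 * r1 * a1 + r2 * r2 * a2 - kite

def overlap_disk_approximation(A_s, A_v, center_distance):
    """Approximate the overlap of the two elliptical shadows (areas A_s, A_v)
    by the overlap of equal-area disks at the same center distance."""
    r_s = math.sqrt(A_s / math.pi)
    r_v = math.sqrt(A_v / math.pi)
    return circle_overlap_area(r_s, r_v, center_distance)
```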

INVERSION STRATEGY

In the practical case, we would like to use this geometric-optical model to reveal information about the objects in the scene from a digital image. This information will be the size (as $A(0)$ or $r$) and density ($\lambda$) of the objects in the case of spatial variance, and the shape ($b/r$, $b/h$ ratios) in the case of angular variance. In earlier work, we have shown how the size and density of objects can be obtained from the variance of successive radiometric measurements that are regularized by a sensor with a finite field of view [6]. However, the recovery of shape information has not yet been demonstrated, but should be possible.

As shown in Eq. (4), $K_G$ is sensitive to the shape of the tree crown, since the shape parameters $b/r$ and $b/h$ determine how the functions $\bar{A}$ and $\bar{O}$ change with angle. Consider the case of a tall, narrow, spheroidal crown viewed at the hotspot. As the viewing angle increases or decreases within the principal plane ($\phi = 0$), the crown shadow will be revealed gradually because the object is tall. On the other hand, as the viewing angle diverges from the principal plane in the azimuthal direction, the narrow shadow will be revealed much more quickly. Thus, the hotspot will be asymmetric in these two directions in a fashion that is sensitive to $b$ and $r$. The height of the spheroid on the stick will also influence the shape of the hotspot function. For a spheroid of a given size, the higher the position above the background, the more quickly solar and viewing shadows will separate with diverging view angle, and thus the more rapidly the hotspot will fall off. Although we have not yet applied an inversion strategy for determining $b/r$ and $b/h$ from these phenomena, such a strategy offers one of the most interesting opportunities for the more flexible angular-imaging instrumentation that will become available in the coming decade.

TWO-STAGE MODELS

In the two-stage model, one geometric-optical model is nested within another [7]. Our one-stage model may be regarded as an “envelope” model, in that the tree is taken to be a compact and distinct object that intercepts and scatters light as a unit. In the two-stage model, the envelope is a boundary within which the objects are now leaves, again described by parameters of size, shape, and density. Light penetrates the envelope and interacts with the leaves, but follows the same geometric-optical principles that apply to trees as objects. The only difference is that leaf orientation is now important, whereas with trees the assumption is that all shapes are vertically oriented.

THE DISCRETE-LEAF CANOPY

The leaf canopy consists of multiple objects (leaves) scattered throughout the canopy volume. This raises the possibility that the density of leaf centers varies with height $z$ inside the canopy, where $z = 0$ at the top of the canopy and $z = H$ at the bottom. Thus, we have $\lambda(z)$, the Poisson density for the Boolean model, which is now a density of leaf centers per unit volume at depth $z$ in the canopy.


Figure 3. Section of a canopy of spheroids-on-sticks. A, perspective view; B, cross-section. $A_1$, area of one crown in section at $z$.

Then $N(z)$, the number of leaves with centers in a section of the canopy between $z$ and $z + dz$, will be $N(z) = \lambda(z)\,dz$. Note that the interaction between illumination and viewing shadows is manifest only within a local depth for each leaf. At this “decorrelation depth” and beyond, the gap fractions for illumination and viewing are independent.

With these considerations, we may express $L_G(z)$, the proportion of the section at depth $z$ that is both sunlit and viewed (i.e., sunlit gap), as

$$L_G(z) = \exp\Big(-\int_0^z \lambda(t)\,\big[\bar{A}(\theta_s, t) + \bar{A}(\theta_v, t) - \bar{O}(\theta_s, \theta_v, \phi, z - t)\big]\,dt\Big)\;.$$

Here, $\bar{A}(\theta_s, t)$ and $\bar{A}(\theta_v, t)$ are functions describing the average projected areas of leaves at depth $t$ within the canopy, and $\bar{O}(\theta_s, \theta_v, \phi, z - t)$ is the mean overlap area, which is now also a function of the distance between $z$ and $t$, the depth variable of integration. This expression is quite tractable, and produces analytical results for simple cases. Following similar logic to that of our earlier sections, we note that

$$L_G(z) + L_Z(z) = q(\theta_v, z) = \exp\Big[-\int_0^z \lambda(t)\,\bar{A}(\theta_v, t)\,dt\Big]\;,$$

and thus $L_Z(z)$, the proportion of shaded gap at depth $z$, is

$$L_Z(z) = q(\theta_v, z) - L_G(z)\;.$$

The fraction of foliage which is visible and sunlit down to depth $z$ is the summation of the foliage proportions in the sunlit and viewed proportion of each section above $z$. That is, for a section between $u$ and $u + du$ which is above $z$, assume the density in the thin section is small enough that there is no overlap between leaves. Then, the number of leaves will be $\lambda(u)\,du$ and the cover will be $\lambda(u)\,\bar{A}(\theta_v, u)\,du$, where $\bar{A}(\theta_v, u)$ is the mean projected area of leaves in the section at $u$. Hence, the contribution to the fraction of visible sunlit canopy ($L_C(u)$) from the section between $u$ and $u + du$ will be

$$dL_C(u) = L_G(u)\,\lambda(u)\,\bar{A}(\theta_v, u)\,du\;,$$

and

$$L_C(z) = \int_0^z L_G(u)\,\lambda(u)\,\bar{A}(\theta_v, u)\,du\;.$$

For the remaining component $L_T(z)$, we have simply

$$L_T(z) = 1 - q(\theta_v, z) - L_C(z)\;.$$
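The four layer quantities above can be evaluated numerically with a simple discretization of the depth axis. The sketch below uses a constant leaf density and constant mean projected areas (purely illustrative values), and for simplicity treats the mean overlap as independent of $z - t$; all symbols here are placeholders rather than calibrated parameters.

```python
import numpy as np

# Illustrative constants: canopy depth, layer count, leaf density, mean
# projected leaf areas for sun and view directions, and mean overlap.
H, n = 5.0, 500
z = np.linspace(0.0, H, n)
dz = z[1] - z[0]
lam, A_s, A_v, O_sv = 0.8, 0.02, 0.02, 0.01

# L_G(z): sunlit, viewed gap down to depth z (the z - t dependence of the
# overlap term is ignored in this toy example).
LG = np.exp(-np.cumsum(lam * (A_s + A_v - O_sv) * np.ones(n)) * dz)
qv = np.exp(-np.cumsum(lam * A_v * np.ones(n)) * dz)   # viewed gap q(theta_v, z)
LZ = qv - LG                                           # shaded, viewed gap

# L_C(z): visible, sunlit foliage accumulated from the top of the canopy.
LC = np.cumsum(LG * lam * A_v) * dz
LT = 1.0 - qv - LC                                     # visible, shaded foliage

print(f"At z = H: LG={LG[-1]:.3f}, LZ={LZ[-1]:.3f}, LC={LC[-1]:.3f}, LT={LT[-1]:.3f}")
```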

UNEQUAL ILLUMINATION AND VIEWING CANOPY DEPTHS

This treatment can be easily generalized to the case of a geometric envelope enclosing the leaf canopy. Consider a forest canopy composed of (spheroidal) crown envelopes. Let $z$ denote depth within the entire canopy, ranging from 0 at the top of the tallest crown to $H$ at the background surface. A section between $z$ and $z + dz$ will provide a set of (circular) cross-sections of canopy envelopes (Figure 3). The area of these cross-sections will provide an estimate of the crown cover due to foliage at depth $z$.

Considering only one crown in the section $z$, $z + dz$ with cross-sectional area $A_1$, the fraction of the cross-section of the crown through which sunlight passes and which is also visible is the mean of the probabilities for each point $(x, y)$ over the cross-sectional area $A_1$. That is,

$$L_{G,1}(z) = \frac{1}{A_1} \iint_{A_1} L_{G,1}(x, y, z)\,dx\,dy\;.$$

Consider now the illumination and viewing paths associated with each point $(x, y)$ in the cross-section (Figure 4). Let $u$ and $w$ denote the depths at which the illumination path enters and the viewing path exits the canopy envelope, respectively.


Figure 4. Illumination and viewing paths for point $(x, y)$ in section of crown at depth $z$. $u$, depth at which illumination path enters canopy envelope; $w$, depth at which viewing path exits canopy envelope.


Then, assuming the leaves in the interior of the crown are distributed in a similar way to the previously considered leaf canopy,

$$L_{G,1}(x, y, z) = \exp\Big(-\int_u^z \lambda(t)\,\bar{A}(\theta_s, t)\,dt - \int_w^z \lambda(t)\,\bar{A}(\theta_v, t)\,dt + \int_{\max(u, w)}^z \lambda(t)\,\bar{O}(\theta_s, \theta_v, \phi, z - t)\,dt\Big)\;.$$

By computing $u$ and $w$ over the points $(x, y)$ of each cross-section, $L_G(z)$ may be computed. Using the formula for $L_C(z)$ from the previous section, a single-scattering approximation for the bidirectional reflectance of a single crown may thus be characterized.
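Under the simplifying assumption of a constant leaf density and constant mean projected areas inside the crown, the integrals in the expression above collapse to products of path length and attenuation coefficient. The sketch below averages the resulting per-point probability over sample points of the cross-section; computing the entry and exit depths $u$ and $w$ from the spheroid geometry is assumed to have been done beforehand.

```python
import numpy as np

def crown_sunlit_viewed_fraction(u, w, z, lam, A_s, A_v, O_sv):
    """Mean of L_{G,1}(x, y, z) over sample points of a crown cross-section at
    depth z, assuming constant lam, A_s, A_v and a z-independent mean overlap.
    u, w: arrays giving, for each sample point (x, y), the depths at which the
    illumination path enters and the viewing path exits the crown envelope."""
    path_s = lam * A_s * (z - u)                  # ~ integral of lam*A_s from u to z
    path_v = lam * A_v * (z - w)                  # ~ integral of lam*A_v from w to z
    joint = lam * O_sv * (z - np.maximum(u, w))   # overlap term from max(u, w) to z
    return np.exp(-path_s - path_v + joint).mean()
```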

However, there is no guarantee that the illumination or viewing path will pass through only one canopy envelope. If illumination or viewing angles are large, then the path may pass through several envelopes. This adds considerable complexity to the problem, but in an earlier paper we have shown how the distributions of such compound path lengths may be derived [7]. A full and proper treatment of the two nested geometric-optical models will require use of these path length calculations.

CONCLUSIONS

Geometric optics presents an approach to modeling vegetation canopies that contrasts with more conventional models derived from radiative transfer theory. The geometric-optical approach is particularly well suited to describing the bidirectional reflectance of forest and woodland canopies, where the concentration of leaf material within crowns and the resulting between-tree gaps make plane-parallel, radiative-transfer models inappropriate, and where tree and shadowed-background interactions account for a large proportion of the variance in images. Further, it leads to invertible formulations, in which spatial and directional variance provide the means for remote estimation of tree crown size, shape, and total cover from remotely sensed imagery. The geometric-optical approach can also be extended to plane-parallel canopies, where it presents a viable alternative to such radiative transfer models. And, when tree and leaf models are combined, a single, nested model results that has the potential to carry the geometric-optical approach to a full description of the bidirectional reflectance of forest canopies.

ACKNOWLEDGEMENTS

This research was supported in part by NASA grant NAGW-1474. We are particularly indebted to Professor Xiaowen Li, of the Institute of Remote Sensing Application, Academia Sinica, for developing and executing much of the modeling work that led to full development of the geometric-optical approach.

REFERENCES

[1] S. Chandrasekhar, Radiative Transfer, London: Oxford University Press, 1950.

[2] X. Li and A. H. Strahler, “Geometric-Optical Modeling of a Conifer Forest Canopy,” IEEE Trans. Geosci. Remote Sensing, vol. GE-23, pp. 705-721, 1985.

[3] X. Li and A. H. Strahler, “Geometric-Optical Bidirectional Reflectance Modeling of a Coniferous Forest Canopy,” IEEE Trans. Geosci. Remote Sensing, vol. GE-24, pp. 906-919, 1986.

[4] X. Li and A. H. Strahler, “Modeling the Gap Probability of a Discontinuous Vegetation Canopy,” IEEE Trans. Geosci. Remote Sensing, vol. 26, pp. 161-170, 1988.

[5] J. Serra, Image Analysis and Mathematical Morphology, London: Academic Press, 1982.

[6] A. H. Strahler, Y. Wu, and J. Franklin, “Remote Estimation of Tree Size and Density from Satellite Imagery by Inversion of a Geometric-Optical Canopy Model,” in Proc. 22nd Int. Symp. on Remote Sensing of Environ., Abidjan, Cote d’Ivoire, Oct. 20-26, 1988, pp. 377-348.

[7] A. H. Strahler, C. E. Woodcock, and J. A. Smith, “On the Nature of Models in Remote Sensing,” Remote Sens. Environ., vol. 20, pp. 121-139, 1986.
