
Real-time High Dynamic Range Image-based Lighting

César Palomo
Department of Computer Science
PUC-Rio, Rio de Janeiro, Brazil

ABSTRACT
In this work we present a real-time method for lighting virtual objects using measured scene radiance. To compute the illumination of CG objects inserted in a virtual environment, the method uses previously generated high dynamic range (HDR) image-based lighting information of the scene, and performs all calculations in graphics hardware to achieve real-time performance. This work is not intended as a breakthrough in real-time processing or in HDR image-based lighting (IBL) of CG objects; rather, we aim to provide the reader with up-to-date knowledge of the IBL technique, an in-depth understanding of the HDR rendering (HDRR) pipeline on current graphics hardware, and interesting image post-effects that deliver higher realism to the final viewer of the scene.

Keywords
Image-based lighting, High Dynamic Range, Real-time Rendering, Shading, Graphics Hardware Programming

1. INTRODUCTION
We describe the use of a previously captured HDR map containing lighting information of a scene to illuminate computer-generated virtual objects as if they were seamlessly placed in the captured scene. All computations are performed on graphics hardware, through extensive use of vertex and fragment shaders, to enable real-time interaction and display. We provide effects such as reflection, refraction, the Fresnel effect and chromatic dispersion [29] as options for the lighting equation. Furthermore, post-effects such as bloom [12] are applied to give the final scene a more realistic appearance. To achieve a high frame rate, we make the simplification of not computing inter-reflection among the computer-generated objects.

Video games in particular can benefit from this technique, as it creates more realistic scenes than standard lighting and has allowed developers and artists to produce impressive effects in games such as Half-Life 2: Lost Coast (Valve Software®), Oblivion (Bethesda Softworks®) and Far Cry (Ubisoft®), to name a few.

The rest of this paper is organized as follows. In the next section we discuss related work relevant to this paper. Section 3 provides background on the image-based lighting technique and how lighting calculations are performed, since this is the core of our method for lighting the virtual objects. Section 4 presents the concepts involved in HDR rendering: acquiring HDR images, storage, rendering, tone mapping and commonly used post-effects. Section 5 describes in depth the general method used in this work, combining graphics hardware with the IBL and HDRR techniques described in the previous sections to produce real-time, realistic scenes. Section 6 presents the results obtained, and we conclude in Section 7.

2. RELATED WORK
Debevec [5] uses high dynamic range images of incident illumination to render synthetic objects into real-world scenes. However, that work employed non-interactive global illumination rendering to perform the lighting calculations. Significant work has since been done to approximate these full illumination calculations in real time using graphics hardware. As particular examples, [17] and [18] use multi-pass rendering methods to simulate arbitrary surface reflectance properties. In this work we use a multi-pass rendering method to perform all the illumination and post-effect calculations, rendering the final scene at a highly interactive frame rate, similar to Kawase's 2003 work [19].

3. IMAGE-BASED LIGHTING TECHNIQUE
Image-based lighting can be summarized as the use of real-world images of a scene to build a model representing a surrounding surface, and the later use of this model's lighting characteristics to correctly illuminate subjects added to the 3D scene. From this explanation we can identify two main decisions that need to be made when using IBL: how to represent the surrounding scene as a model, and how to perform the illumination calculations for subjects added to the scene. Subsection 3.1 discusses the main kinds of environment maps commonly used with IBL and briefly compares them, while subsection 3.2 lists the lighting effects used in this work.

3.1 Environment Mapping Techniques

In short, environment mapping (EM) simulates objects reflecting their surroundings. This technique assumes that an object's environment is infinitely far from the object, and that there is no self-reflection. If those assumptions hold, the environment surrounding the subject can be encoded and modeled as an omnidirectional image known as an environment map. The method was introduced by Blinn and Newell [3]. All EM methods start with a ray from the viewer to a point on the reflector. This ray is then reflected or refracted with respect to the normal at that point. The resulting direction is used as an index into an image containing the environment, to determine the color of the point on the surface.

3.1.1 Spherical Environment Mapping
The early method described in 1976 by Blinn and Newell [3] is known as Spherical Environment Mapping. For each environment-mapped pixel, the reflection vector is transformed into spherical coordinates, which in turn are mapped to the range [0, 1] and used as (u, v) coordinates to access the environment texture. Despite being easy to implement, this technique has several disadvantages, as described in [1], such as view-point dependency, distortions at the environment map's poles and the required computational time.

3.1.2 Cubic Environment Mapping
Ten years later, in 1986, Greene [8] introduced the EM technique that is by far the most popular method implemented in modern graphics hardware, due to its speed and flexibility. The cubic environment map is obtained by taking 6 projected faces of the scene surrounding the object. The cube shape allows linear mapping in all directions to six planar texture maps. For that reason, the resulting reflection does not suffer the warping or damaging singularities associated with a sphere map, particularly at the edges of the reflection. Because cube maps are view-independent, present no singularities and are commonly implemented in current graphics hardware, they have been our choice in this work.

3.2 Lighting calculations
Having decided how to represent the model of the surrounding scene, the illumination model for the IBL technique still needs to be chosen. Below we briefly present the physical basis for the illumination models available in this work. A deeper explanation can be found in [29].

3.2.1 Reflection

When an incident vector I from the viewer's position reaches the object's surface at a point P, the reflection vector R is calculated taking into account the normal N at point P and the incident angle θI, as depicted in figure 3.2.1. This vector R is used to access the correct face of the cube map texture. If we assume that the object is a perfect reflector, such as a mirror, the vector R can be computed in terms of the vectors I and N with Equation 1.

R = I − 2N(N · I)    (1)
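As a minimal illustration of how this lookup might appear in a GLSL fragment shader (uniform and varying names such as envMap and worldNormal are our own, not from the paper):

    uniform samplerCube envMap;     // HDR cube map of the environment
    uniform vec3 eyeWorldPos;       // viewer position in world space
    varying vec3 worldPos;          // surface point P, from the vertex shader
    varying vec3 worldNormal;       // normal N at P, from the vertex shader

    void main()
    {
        vec3 I = normalize(worldPos - eyeWorldPos);  // incident vector, viewer to P
        vec3 N = normalize(worldNormal);
        vec3 R = I - 2.0 * N * dot(N, I);            // Equation 1; same as reflect(I, N)
        gl_FragColor = textureCube(envMap, R);       // index the cube map with R
    }

The built-in function reflect(I, N) computes exactly Equation 1 and would normally be used instead of the explicit form.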

3.2.2 Refraction

When light passes through a boundary between two materials of different density (air and glass, for instance), the light's direction changes, since light travels more slowly in denser materials. Snell's Law describes what happens at this boundary with Equation 2, where η1 and η2 are the refraction indices of media 1 and 2, respectively.

η1 sin θI = η2 sin θT (2)

In practice, we use the built-in GLSL function refract to compute the vector T used to look up the environment map. Vector T is calculated in terms of the vectors I and N and the ratio of the indices of refraction η1/η2.
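A minimal sketch of this lookup as a GLSL fragment shader (the uniform name etaRatio and the other identifiers are our own):

    uniform samplerCube envMap;
    uniform vec3 eyeWorldPos;
    uniform float etaRatio;          // ratio eta1/eta2, e.g. about 0.66 for air to glass
    varying vec3 worldPos;
    varying vec3 worldNormal;

    void main()
    {
        vec3 I = normalize(worldPos - eyeWorldPos);
        vec3 N = normalize(worldNormal);
        vec3 T = refract(I, N, etaRatio);   // transmitted direction from Snell's Law
        gl_FragColor = textureCube(envMap, T);
    }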

3.2.3 Fresnel effect
In real scenes, when light hits a boundary between two materials, some light reflects off the surface and some refracts through it. The Fresnel equations describe this phenomenon precisely, but since they are complex, it is common to use the simplified version shown in Equation 3. In this case, both the reflection and refraction vectors are calculated and used to look up the environment map, and the resulting color at the incident point is computed as shown in Equation 4.

reflCoef = max(0, min(1, bias + scale × (1 + I · N)^power))    (3)

finalColor = reflCoef × reflCol + (1 − reflCoef) × refracCol    (4)
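A sketch of the corresponding fragment shader, following Equations 3 and 4; the uniform names for the empirical parameters are hypothetical:

    uniform samplerCube envMap;
    uniform vec3 eyeWorldPos;
    uniform float etaRatio;
    uniform float fresnelBias;       // bias term of Equation 3
    uniform float fresnelScale;      // scale term of Equation 3
    uniform float fresnelPower;      // power term of Equation 3
    varying vec3 worldPos;
    varying vec3 worldNormal;

    void main()
    {
        vec3 I = normalize(worldPos - eyeWorldPos);
        vec3 N = normalize(worldNormal);
        vec4 reflCol   = textureCube(envMap, reflect(I, N));
        vec4 refracCol = textureCube(envMap, refract(I, N, etaRatio));
        // Equation 3: empirical approximation of the Fresnel reflection coefficient
        float reflCoef = clamp(fresnelBias + fresnelScale * pow(1.0 + dot(I, N), fresnelPower),
                               0.0, 1.0);
        // Equation 4: blend the reflected and refracted environment colors
        gl_FragColor = reflCoef * reflCol + (1.0 - reflCoef) * refracCol;
    }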

3.2.4 Chromatic dispersion

The assumption that refraction depends only on the surface normal, the incident angle and the ratio of indices of refraction is in fact a simplification of what happens in reality. In addition to these factors, the wavelength of the incident light also affects the refraction. This phenomenon is known as chromatic dispersion, and it is what happens when white light enters a prism and emerges as a rainbow.

Figure 3.2.4 illustrates chromatic dispersion conceptually. The incident illumination (assumed to be white) is split into several refracted rays. This effect can be simulated by making the simplification that the incident vector I is split into 3 refracted vectors, one for each RGB channel, namely TRed, TGreen and TBlue. To calculate the resulting illumination, 3 ratios of indices of refraction are provided and used, one per channel.
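A sketch of this per-channel refraction in GLSL (the per-channel ratios packed into etaRatioRGB are illustrative values chosen by the application):

    uniform samplerCube envMap;
    uniform vec3 eyeWorldPos;
    uniform vec3 etaRatioRGB;        // one eta1/eta2 ratio per color channel
    varying vec3 worldPos;
    varying vec3 worldNormal;

    void main()
    {
        vec3 I = normalize(worldPos - eyeWorldPos);
        vec3 N = normalize(worldNormal);
        vec3 TRed   = refract(I, N, etaRatioRGB.r);
        vec3 TGreen = refract(I, N, etaRatioRGB.g);
        vec3 TBlue  = refract(I, N, etaRatioRGB.b);
        // Each channel of the result comes from its own refracted lookup
        float r = textureCube(envMap, TRed).r;
        float g = textureCube(envMap, TGreen).g;
        float b = textureCube(envMap, TBlue).b;
        gl_FragColor = vec4(r, g, b, 1.0);
    }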

4. HIGH DYNAMIC RANGE IMAGING AND RENDERING

Ordinary digital images are created to be displayed on monitors that support up to 16.7 million colors (24 bits). For that reason, it is logical to store images that match the color range of the display. For instance, image formats like BMP and JPEG traditionally use 16, 24 or 32 bits per pixel. These image formats are said to have a low dynamic range (LDR).

HDR images use a wider range of values, allowing pixels to represent much larger contrast and dynamic range. When a picture is taken with a conventional camera, the exposure time chosen by the photographer determines which areas are captured by the picture, while the other areas are lost. For instance, if the photographer chooses a long exposure time, bright areas will not be correctly captured, while details in dark areas will be visible in the final image. In contrast, if a short exposure time is set, only bright areas will register on the camera's sensor, and so only they will show detail in the final image.

In short, HDR rendering avoids any clipping of values, and everything is calculated more accurately. The result is an image that resembles reality more closely, because it can contain both extremely dark and extremely bright areas. It also allows the user to set a specific exposure and simulate photographs as if they had been taken of the real scene.

4.1 Acquiring and Storage Format
An early source of high dynamic range images in computer graphics were the renderings created by global illumination and radiosity algorithms. For instance, the RADIANCE synthetic imaging system by Greg Ward [27] outputs its renderings in Ward's "Real Pixels" format [26]. Later, Debevec presented in his seminal paper [4] a method for acquiring real-world radiance using standard digital camera photography. We use HDR images acquired with this technique, stored in the RGBE format [28], [10].

4.2 Tone Mapping

Since current conventional monitors have limited contrast and dynamic range, with common contrast ratios between 500:1 and 3000:1, and HDR images have pixel values that far exceed those limits, we need a method called tone mapping to display such images on conventional monitors. Tone mapping operators are simply functions that map values from [0, ∞) to [0, 1). Several operators have been developed, with different results, as can be seen in [7], [20], [22], [23] and [2]. Some are simple and efficient enough for real-time use, while others are very complex and customizable for image editing. The tone mapping operator used in this work is described in Equation 5, where Exp is the exposure level for the scene and Bright Level is a maximum luminance value for the scene. This operator allows the user to control exposure: with a low exposure, the image is dark and hidden details in bright areas appear; conversely, with a high exposure, the image is very bright and shows details in dark areas.

Y = (Exp × (Exp / Bright Level + 1)) / (Exp + 1)    (5)
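As an illustration, a GLSL fragment shader implementing Equation 5 might look as follows. Interpreting Exp as the exposure-scaled pixel value and applying the operator per color channel rather than to a separate luminance is our simplification, and the uniform names are assumptions:

    uniform sampler2D hdrTex;        // HDR color texture, e.g. in GL_RGBA16F format
    uniform float exposure;          // user-controlled exposure level
    uniform float brightLevel;       // Bright Level in Equation 5
    varying vec2 texCoord;

    void main()
    {
        vec3 e = texture2D(hdrTex, texCoord).rgb * exposure;    // exposed color
        vec3 ldr = e * (e / brightLevel + 1.0) / (e + 1.0);     // Equation 5, per channel
        gl_FragColor = vec4(ldr, 1.0);
    }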

4.3 Blooming
A very common effect applied to HDR images is called bloom [12]. It enhances bright areas and highlights, giving a vivid impression to an image.

To achieve a good blooming result, as explained in [15], successive Gaussian blur filters [13] with different kernel sizes must be applied. However, applying filters with big kernels does not allow real-time performance, so an alternative approach is usually taken.

With bilinear interpolation enabled, a Gaussian filter is applied to the original image. The image is then downsized (usually by powers of 2), and the Gaussian filter is applied again to the result. This process continues until a tiny image is obtained and no further downsizing improves the effect. Then all these images are blended together, delivering a convincing blooming result.

The trick of the technique is that, thanks to the image downsampling, different Gaussian kernel sizes can be achieved at little expense. In fact, if an original 128x128 texture is downsized to 64x64, then to 32x32 and finally to 16x16, and a 3x3 kernel is used to blur all images, the tiniest texture, at 16x16, blurs as if a 24x24 kernel were applied to the original texture, but with far fewer computations, and therefore better performance.
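A sketch of one blur pass as a GLSL fragment shader. We show a separable 1D pass (run once horizontally and once vertically per downsized level); the 3-tap kernel is a small binomial approximation of a Gaussian and the uniform names are ours:

    uniform sampler2D srcTex;        // texture to blur (one of the downsized levels)
    uniform vec2 texelStep;          // (1/width, 0) for the horizontal pass,
                                     // (0, 1/height) for the vertical pass
    varying vec2 texCoord;

    void main()
    {
        // 3-tap kernel [0.25, 0.5, 0.25]
        gl_FragColor = 0.25 * texture2D(srcTex, texCoord - texelStep)
                     + 0.50 * texture2D(srcTex, texCoord)
                     + 0.25 * texture2D(srcTex, texCoord + texelStep);
    }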

4.4 Rendering Pipeline
As already stated, HDR images have values beyond the [0, 1] range. For that reason, the normal 3D graphics pipeline cannot be used to render those images, since the pixel values would be clamped to the [0, 1] range.

With support for floating-point textures, arithmetic and render targets in graphics hardware, HDRR became possible, making workarounds such as those previously suggested by Debevec et al. [6] unnecessary.

There are many ways to implement HDR rendering on current graphics hardware, but the following pipeline, which includes the application of the bloom effect [12], is very commonly used with the OpenGL 2.0 API [16]:

• The HDR image is sent to the graphics card using the GL_RGBA16F internal storage format, so that pixel values do not get clamped to the [0, 1] range.

• To apply the bloom effect [12], the very bright areas of the image are first extracted into an HDR Frame Buffer Object (FBO). This texture is then downsampled, with bilinear filtering active, into other textures. These downsampled textures are blurred with a Gaussian blur shader [13].

• The original HDR texture is composited with all the downsampled blurred textures using additive blending. At this phase we obtain an HDR Frame Buffer Object with the desired bloom effect, but its associated color texture still contains high dynamic range values.

• Tone mapping is applied to the resulting HDR framebuffer to convert it to an LDR image, allowing display in the conventional framebuffer. The operator described in Equation 5 was implemented in this work using the GLSL shading language [24].

• Finally, a quad [16] is drawn with the LDR texture generated in the previous step active, so that the final image can be displayed to the viewer.

5. GENERAL METHOD
To achieve real-time HDR IBL in this work, we apply a rendering pipeline consisting of the steps listed below. For the sake of clarity, some of them are detailed in the following subsections.

• An HDR cross-format RGBE image file [28], representing the environment scene, is loaded into memory.

• The 6 faces of this cube map image are extracted into 6 separate images, corresponding to the positive and negative directions of the x, y and z axes [8].

• A cube map texture is created using OpenGL 2.0's GL_RGBA16F_ARB internal format [16], and each extracted face is sent to graphics hardware into its corresponding face of the cube map texture.

• The shaders for performing the lighting, applying the bloom effect and tone mapping the final HDR image are created. More details are provided in subsection 5.1.

• With one framebuffer object [16], FBO1, active, the cube map texture is applied to a big cube that works as a skybox for the scene [14], representing the environment around the virtual objects.

• The virtual objects are positioned appropriately in the scene and rendered, still with FBO1 active, with the shader for the selected lighting effect (reflection, refraction, Fresnel effect or chromatic dispersion) active, and with the cube map texture active and bound. At the end of this phase, FBO1 contains the rendered HDR scene in its color texture [16], with the virtual objects already illuminated according to the environment.

• Another framebuffer object, FBO2, and the shader responsible for extracting bright areas from an HDR image are activated, and with FBO1's color texture active, a quad is drawn [16]. At this point FBO2's color texture contains an HDR image holding only the bright areas.

• To apply the Gaussian blur filter, FBO2's color texture is downsampled into several other framebuffer objects, FBO_DS, to allow better blooming results [15]. At the end of this phase the collection of framebuffer objects FBO_DS contains blurred HDR images, downsized from the original one.

• The FBO_DS color textures are composited with blending enabled into framebuffer object FBO2. This produces the bloom effect that will be added to the final image.

• Finally, FBO2 is composited with FBO1 into FBO1 itself, using the tone mapping shader to perform the HDR to LDR conversion.

• To display the final rendered scene, FBO1's color texture is activated and a quad is drawn.

5.1 Shaders
In this work, seven shaders (seven pairs of vertex and fragment shaders) perform all the illumination calculations, apply the bloom effect and tone map the final HDR image. All of them were developed using the GLSL shading language [24].

Four shaders perform the illumination calculations: one each for reflection, refraction, the Fresnel effect and chromatic dispersion [29].

Two shaders are part of the bloom effect [12]: one extracts the bright areas of an HDR image, and the other applies the Gaussian blur filter to an HDR image, outputting a blurred HDR image.
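As an illustration, the bright-area extraction pass might look like the following sketch; the luminance threshold and the uniform names are our assumptions, not values from the paper:

    uniform sampler2D hdrTex;
    uniform float threshold;         // luminance above which a pixel counts as bright
    varying vec2 texCoord;

    void main()
    {
        vec4 color = texture2D(hdrTex, texCoord);
        // Rec. 601 luma weights used as a cheap luminance estimate
        float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));
        // Pixels below the threshold contribute nothing to the bloom
        gl_FragColor = (luma > threshold) ? color : vec4(0.0, 0.0, 0.0, 1.0);
    }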

To apply the tone mapping operator presented in Equation 5, there is one final shader whose output is an LDR image that can be displayed in OpenGL's conventional framebuffer [16].

6. RESULTS
On an Intel Core Duo® 1.83 GHz with an NVIDIA GeForce® Go 7600 graphics card, the technique as implemented here achieved a frame rate of 60 FPS at 640x480 screen resolution, lighting a model with 4500 triangles. Even at 1400x900 screen resolution it runs at 20 FPS illuminating the same subject, with any of the available illumination effects, clearly maintaining a highly interactive rate.

This work achieves a higher frame rate than Kawase's [19], which runs at 45 FPS at 640x480 screen resolution on the same machine, but includes other effects such as depth of field [11] and glare [25]. At 1400x900 its performance drops to 15 FPS, clearly showing that the addition of further effects noticeably impairs the final performance of scenes rendered with HDR.

The main difficulties in the course of this work were finding good examples of OpenGL FBO implementations and debugging the written GLSL shaders. After extensive research, good code samples for FBO implementation in OpenGL were found. To address the shader issues, the GLIntercept function call interceptor was used. It can show the status of textures at each frame, allowing the results of each phase of the proposed HDRR pipeline to be inspected, which eased the development process.

At the end of this article we show some snapshots of the results of our work. They illustrate how the bloom effect provides a more realistic impression of the scene, and how each of the illumination effects applied to the virtual subject contributes differently to the resulting scene.

7. CONCLUSION
In this work we presented a consolidated method for illuminating virtual objects inserted into a scene whose lighting characteristics are captured in an HDR image, using this environment lighting information to correctly calculate the shading of the CG subjects in real time with the computational power of graphics hardware. We achieved real-time performance as intended, and the final images presented to the viewer accomplished the aim of compositing virtual objects into a real-scene environment without the viewer noticing what is virtual and what is real, thanks to the accurate illumination calculations made possible by the HDR values from the environment.

The assumption that there is no inter-reflection among virtual subjects or with the surrounding scene is a limitation necessary to achieve real-time performance in our work. As future work, this assumption could be mitigated by using the method proposed in Hakura and Snyder's work [9], where a combination of minimal ray tracing for local objects and layered environment maps produces reflections and refractions that closely match fully ray-traced solutions. Combining their technique with graphics hardware programming through shaders would probably yield good visual results and improved performance compared to their approach.

Further improvements to this work, not yet done due to lack of time, could include other effects besides blooming, such as depth of field [11] and glare [25], which imitate the physical response of camera lenses to light and could bring even more realism to the final composited scene.

8. REFERENCES
[1] AKENINE-MÖLLER, T., AND HAINES, E. Real-Time Rendering, 2nd edition (2002), pp. 153-161. ISBN 1-56881-182-9.

[2] ASHIKHMIN, M. A Tone Mapping Algorithm for High Contrast Images. In 13th Eurographics Workshop on Rendering (2002).

[3] BLINN, J. F., AND NEWELL, M. E. Texture and reflection in computer generated images. Communications of the ACM (October 1976), vol. 19, no. 10, pp. 542-547.

[4] DEBEVEC, P. E., AND MALIK, J. Recovering high dynamic range radiance maps from photographs. In SIGGRAPH '97 (August 1997), pp. 369-378.

[5] DEBEVEC, P. Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. In SIGGRAPH '98 (July 1998).

[6] COHEN, J., TCHOU, C., HAWKINS, T., AND DEBEVEC, P. Real-time high dynamic range texture mapping. In Proceedings of the Eurographics Rendering Workshop (2001).

[7] DRAGO, F., MYSZKOWSKI, K., ANNEN, T., AND CHIBA, N. Adaptive Logarithmic Mapping for Displaying High Contrast Scenes. In Eurographics 2003.

[8] GREENE, N. Environment Mapping and Other Applications of World Projections. IEEE Computer Graphics and Applications (November 1986), vol. 6, no. 11, pp. 21-29.

[9] HAKURA, Z. S., AND SNYDER, J. M. Realistic Reflections and Refractions on Graphics Hardware with Hybrid Rendering and Layered Environment Maps. In 12th Eurographics Workshop on Rendering (2001), pp. 286-297.

[10] HOLZER, B. High Dynamic Range Image Formats (2006).

[11] http://www.cambridgeincolour.com/tutorials/depth-of-field.htm

[12] http://en.wikipedia.org/wiki/Bloom_(shader_effect)

[13] http://en.wikipedia.org/wiki/Gaussian_blur

[14] http://en.wikipedia.org/wiki/Skybox_(video_games)

[15] http://harkal.sylphis3d.com/2006/05/20/how-to-do-good-bloom-for-hdr-rendering/

[16] http://www.opengl.org/

[17] KAUTZ, J., AND MCCOOL, M. D. Approximation of glossy reflection with prefiltered environment maps. Graphics Interface (2000), pp. 119-126. ISBN 1-55860-632-7.

[18] KAUTZ, J., AND MCCOOL, M. D. Interactive rendering with arbitrary BRDFs using separable approximations. Eurographics Rendering Workshop 1999 (June 1999).

[19] KAWASE, M. Real-Time High Dynamic Range Image-Based Lighting. http://www.daionet.gr.jp/~masa/rthdribl/

[20] PATTANAIK, S. N., TUMBLIN, J., YEE, H., AND GREENBERG, D. P. Time-Dependent Visual Adaptation for Realistic Image Display. In Proceedings of ACM SIGGRAPH (2000).

[21] FERNANDO, R., AND KILGARD, M. J. The Cg Tutorial: The Definitive Guide to Programming Real-Time Graphics. ISBN-10: 0321194969. ISBN-13: 978-0321194961.

[22] REINHARD, E., AND DEVLIN, K. Dynamic Range Reduction Inspired by Photoreceptor Physiology. In IEEE Transactions on Visualization and Computer Graphics (2004).

[23] REINHARD, E., STARK, M., SHIRLEY, P., AND FERWERDA, J. Photographic Tone Reproduction for Digital Images. In ACM Transactions on Graphics (2002).

[24] ROST, R. J. OpenGL Shading Language, 2nd Edition. ISBN-10: 0-321-33489-2. ISBN-13: 978-0-321-33489-3.

[25] SPENCER, G., SHIRLEY, P., ZIMMERMAN, K., AND GREENBERG, D. P. Physically-Based Glare Effects for Digital Images. In SIGGRAPH (1995).

[26] WARD, G. Real Pixels. Graphics Gems II (1991), pp. 80-83.

[27] WARD, G. J. The RADIANCE lighting simulation and rendering system. In SIGGRAPH '94 (July 1994), pp. 459-472.

[28] WARD, G. J. High Dynamic Range Image Encodings. http://www.anyhere.com/gward/hdrenc/Encodings.pdf

[29] FERNANDO, R., AND KILGARD, M. J. Environment Mapping Techniques. Chapter 7 of The Cg Tutorial (Addison-Wesley). http://www.developer.com/lang/other/article.php/10942_2169281_1

Figure 1: Effect illustrations of this work: reflection (top left), refraction (top right), Fresnel effect (two middle images), and chromatic dispersion (two bottom images)