
Silencing the noise on Elysium

Luke Goddard∗

Image Engine

Figure 1: Closeup of Neill Blomkamp's Elysium. © 2013 CTMG.

During Neill Blomkamp's sci-fi epic Elysium (Fig. 1), we experienced severe noise issues when ray-tracing the glossy reflections and heavy indirect lighting on the colossal habitat, within a tight production schedule. We present a new solution to the common issue of ray-traced noise, drastically reducing render times by expanding the use of a temporal filter to attenuate the weighting of a non-local means algorithm, which selectively refines its result.

1 Overview

Noise is a common occurrence in production rendering and an issue which affects all ray-tracers. A common workflow to efficiently reduce noise is to iteratively refine each render pass that contributes to the final image by increasing the number of ray samples until the result converges to a clean image. However, as iteration cost increases, removing the last of the noise becomes exponentially less efficient. We decided to clamp our render-time iterations and clean up our image with a post-processing solution.
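The diminishing return described above follows from Monte Carlo statistics: the standard deviation of an averaged estimate falls as the square root of the sample count, so each 4× increase in samples only halves the noise. A minimal sketch (hypothetical toy estimator, not the production renderer) makes the rate concrete:

```python
import random

def noisy_estimate(n_samples, rng):
    """Toy Monte Carlo estimate of a pixel value (true value 0.5):
    the average of n uniform samples. Noise shrinks as 1/sqrt(n)."""
    return sum(rng.random() for _ in range(n_samples)) / n_samples

def noise_level(n_samples, trials=2000, seed=0):
    """Standard deviation of the estimator over many trials."""
    rng = random.Random(seed)
    estimates = [noisy_estimate(n_samples, rng) for _ in range(trials)]
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / trials
    return var ** 0.5

# Each 4x increase in samples only halves the noise, so scrubbing out
# the last of the grain costs far more than the visible benefit.
for n in (16, 64, 256, 1024):
    print(n, round(noise_level(n), 4))
```

This is why clamping render-time iterations and finishing the job in post can be far cheaper than brute-force sampling.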

Designed to process single images, nonparametric kernel-based filters such as Data-adaptive Kernel Regression and Optimal Spatial Adaptation are a general solution to grain removal, but tend to soften some characteristics of CG renders, such as high-frequency texture detail and irregular noise patterns [Seo et al. 2007]. However, if motion reference is available, a temporal filter will produce better results on any slow-moving sequence by sampling across multiple frames, with the results only diminishing under extreme motion or dynamic lighting. Our solution combines the best features of both approaches by expanding a temporal filter to selectively apply kernel-based filtering in areas of excessive temporal variation.

2 Our Approach

We begin with an initial 3D render, which is iteratively refined until it is deemed too costly to reduce its noise further. We then output two additional images: a map of unique identifiers for each pixel (such as a point-reference pass), and forward motion vectors. We generate backward motion vectors by adding pixel-space coordinates to the forward vectors, converting them to a sparse point cloud, and querying each pixel by averaging the nearest-neighbour values weighted by their proximity [Franke and Nielson 1980].

∗e-mail: [email protected]
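The inversion step above can be sketched as follows. This is a hypothetical illustration, not the production code: each forward vector is splatted at its destination pixel carrying its negation, and the resulting scattered point cloud is queried per pixel with a simple inverse-distance average standing in for Franke and Nielson's scattered-data interpolation. The function name and `k` parameter are our own.

```python
import math

def backward_from_forward(forward, width, height, k=4):
    """Invert a forward motion-vector field. `forward[y][x]` is an
    (fx, fy) tuple. Each source pixel p with vector f(p) contributes a
    scattered sample at p + f(p) carrying -f(p); every destination pixel
    then averages its k nearest samples, weighted by inverse distance."""
    # Scatter: the point p + f(p) in the next frame carries -f(p).
    cloud = []
    for y in range(height):
        for x in range(width):
            fx, fy = forward[y][x]
            cloud.append((x + fx, y + fy, -fx, -fy))

    backward = [[(0.0, 0.0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Gather: k nearest scattered samples to this pixel centre.
            near = sorted(cloud,
                          key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)[:k]
            wsum = bx = by = 0.0
            for sx, sy, vx, vy in near:
                w = 1.0 / (math.hypot(sx - x, sy - y) + 1e-6)
                wsum += w
                bx += w * vx
                by += w * vy
            backward[y][x] = (bx / wsum, by / wsum)
    return backward
```

A real implementation would use a spatial index (e.g. a k-d tree) rather than a full sort per pixel, but the scatter-then-gather structure is the same.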

We implement the temporal component of our filter using the motion passes to trace each pixel's value across adjacent frames, discarding samples whose identifier deviates too far from the source. We use a median-weighted Gaussian filter, which reduces flickering visual anomalies, to blend each set of samples. Finally, using information on the samples' range and variance, we perform a process of weighted averages similar to the popular non-local means algorithm to further improve the result. However, unlike [Buades et al. 2005], we use the temporal samples of each pixel within a search window, rather than their neighborhoods, to calculate the weighted contributions for the denoising process. We derive a weight for each set of blended temporal samples, requiring that each sample have both a larger variance and a value range which overlaps that of the source. These weights are used to adaptively control the strength of the filter, limiting it to areas of high temporal inconsistency.
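The identifier rejection and blend for a single pixel might look like the sketch below. This is one plausible reading of the "median-weighted Gaussian filter", not the production implementation: the Gaussian falls off over frame offset, and is further attenuated for values far from the temporal median, which suppresses flickering outliers. All names, tolerances, and sigmas here are illustrative assumptions.

```python
import math

def blend_temporal(samples, source_id, id_tol=0.5, sigma_t=1.0, sigma_v=0.1):
    """Blend one pixel's values traced across adjacent frames.
    `samples` is a list of (frame_offset, value, identifier) tuples.
    Samples whose identifier deviates from source_id by more than id_tol
    are discarded (they belong to a different surface point); the rest
    are combined with a Gaussian weight over frame offset, attenuated by
    distance from the median value to damp flickering outliers."""
    kept = [(t, v) for (t, v, i) in samples if abs(i - source_id) <= id_tol]
    if not kept:
        return None  # nothing traceable; fall back to the source pixel

    vals = sorted(v for _, v in kept)
    median = vals[len(vals) // 2]

    num = den = 0.0
    for t, v in kept:
        # Gaussian in time, scaled down for values far from the median.
        w = math.exp(-t * t / (2.0 * sigma_t ** 2)) \
            * math.exp(-(v - median) ** 2 / (2.0 * sigma_v ** 2))
        num += w * v
        den += w
    return num / den
```

The per-pixel variance and value range of `kept` are what would then drive the non-local-means-style weighting described above, confining the spatial filter to temporally inconsistent regions.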

3 Discussion

Our solution enables lighters to work more efficiently by selectively denoising problematic passes as a post-process (Fig. 2). This greatly reduced render times and improved shot turnaround. By using information gathered from the temporal samples to attenuate the spatial filtering, we found that the result converges to a clean image without introducing any artifacts other than those associated with a temporal filter. We saw minor softening in areas of dynamic lighting, which we consider negligible compared to the improvement in visual continuity. This approach allowed us to produce images that we otherwise could not render within a production time frame.

Figure 2: Original self-illumination pass (L). Temporal filtering (C). Temporally informed spatial filtering (R). © 2013 CTMG.

References

BUADES, A., COLL, B., AND MOREL, J.-M. 2005. A non-local algorithm for image denoising. In Proceedings of CVPR '05, Volume 2, IEEE, 60–65.

FRANKE, R., AND NIELSON, G. 1980. Smooth interpolation of large sets of scattered data. International Journal for Numerical Methods in Engineering 15, 1691–1704.

SEO, H. J., CHATTERJEE, P., TAKEDA, H., AND MILANFAR, P.2007. A comparison of some state of the art image denoisingmethods. In Proceedings of the Forty-First Asilomar Conferenceon Signals, Systems and Computers, IEEE, 518–522.