
SPIE Proceedings [SPIE Defense, Security, and Sensing - Orlando, Florida (Monday 25 April 2011)] Technologies for Synthetic Environments: Hardware-in-the-Loop XVI

Real-time maritime scene simulation for ladar sensors

Chad L. Christie,1∗ Efthimios (Themie) Gouthas,1 Leszek Swierkowski1 and Owen M. Williams2

1Defence Science and Technology Organisation, PO Box 1500, Edinburgh, Australia 5111 2Daintree Systems Pty Ltd

ABSTRACT

Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

Keywords: Real-time maritime simulation, ladar, boat and ship dynamics, wakes and waves, Graphics Processing Units

1. INTRODUCTION

The continuing advancement in laser and sensor technology has stimulated a growing interest in ladar sensors for both military and commercial applications. Examples include terrain mapping, ground navigation, robotics, missile seekers and target recognition. The complex real-time signal processing used in many of the systems under test continues to motivate efforts to develop cost-effective real-time synthetic environments. An important tool is a ladar scene simulator capable of generating real-time ladar imagery. Advanced ladars combine range imagery with traditional infrared or visual imagery at unprecedented levels of speed and resolution, driven by continuing advances in focal plane array technology. Furthermore, advanced real-time algorithms are used for processing the available multi-dimensional information. Ladar scene simulators therefore need to be capable of simultaneously generating complex range and passive imagery in order to become useful tools for developing, characterizing and testing modern ladars within a real-time synthetic environment.

In this paper we report on the development of a ladar scene simulation system implemented on a commodity PC using a standard CPU and a programmable GPU (Graphics Processing Unit). The system is capable of generating simultaneous range and radiometric data in real time. The ladar scene generator reported here is an evolutionary extension to our VIRSuite infrared scene simulation software.1-3 It includes all previous features such as the simulation of ocean surfaces in different sea states, a physically-realistic representation of boat and ship dynamics, interaction between floating bodies and the water, wake generation and simulation of surface effects including whitecaps, spray, wake trails and foam. Terrain simulation is also under development.
Many factors that impact the performance of the system under test need to be taken into consideration when designing the simulation system, particularly when diverse roles are planned such as hardware-in-the-loop simulation, synthetic injection and sensor modelling. Our approach to addressing these factors is discussed in this paper. An example of the colour-coded range imagery that can be generated is shown in Figure 1.

∗ Email: [email protected]; [email protected]; [email protected]; [email protected]

Technologies for Synthetic Environments: Hardware-in-the-Loop XVI, edited by Scott B. Mobley, R. Lee Murrer, Jr., Proc. of SPIE Vol. 8015, 80150G · © 2011 SPIE · CCC code: 0277-786X/11/$18 · doi: 10.1117/12.883770

Proc. of SPIE Vol. 8015 80150G-1


Figure 1: Example of a simulated infrared image (left panel) and a colour-coded range image derived from the central box (right panel)

In Section 2 we present an overview of VIRSuite, emphasising our maritime simulation application. In Section 3, the ladar scene generation extension is described in detail and in Section 4, terrain simulation.

2. VIRSUITE OVERVIEW

VIRSuite is a suite of software tools developed at DSTO to support real-time scene generation for the test and evaluation of visual and infrared sensors. Initially developed for aircraft scenarios4 in support of air-to-air combat simulation requirements, VIRSuite was later significantly extended to open-ocean maritime scenarios,1-3 the latter demanding the inclusion of a number of complex effects to accurately simulate the dynamic interaction between boats and water. The main VIRSuite components are illustrated in Figure 2. Our new extensions are the subject of this paper. Readers interested in radiometric aspects are referred to reference 3.

2.1 Core tools

The real-time generation of radiometrically-scaled infrared imagery is supported by five distinct software tools:

VIRScene is the central application that runs the simulation in real-time, generating MWIR, LWIR or visual scenery.

VIRPaint is an off-line support tool for assigning radiometric data to the objects in the simulated scene. It allows 3D geometry to be loaded and supports the efficient application of radiometrically-coded false colours to either facets or vertices. The tool (in extended form) is also used for configuring boat hull geometry, as required for the evaluation of boat and water interactions within maritime scenario simulations.

VIRParticle allows the user to generate dynamically-varying translucent emitters such as plumes and flares from a sequence of billboard particles.

VIRPolar is a validation tool used off-line to generate both 2D and 3D radiant intensity polar plots of a target object that has been radiometrically-coded within VIRPaint. The polar plots are compared with measured trials data or calculated data from physics-based models, allowing the radiometric scaling within VIRPaint to be adjusted to match.

VIRCubemap assists the user in creating cubemaps for simulating distant radiation sources such as a sky background, sun and clouds. This module is used in both the aircraft and maritime scenarios.

2.2 Maritime scenario tools

Our maritime simulation module comprises three main elements: 1) ocean surface simulation; 2) boat and ship dynamics simulation, and 3) simulation of local water disturbances such as boat wakes and water spray. A simulated maritime scene is illustrated in Figure 3.


Figure 2: VIRSuite components

The maritime simulation is supported by two additional software components:

VIROcean is a GUI-based ocean configuration and validation tool that enables fast and easy parametric management, handling the considerable number of parameters required to reproduce a dynamic ocean surface under specific environmental conditions such as wind speed and direction. VIROcean also provides additional statistical analysis capabilities that assist in correlating the simulated wave effects against real-world still and video sea surface imagery. VIROcean is currently limited to fully developed, unbounded off-shore seas. It employs the semi-empirical Phillips wave spectrum in the frequency domain for modelling the ocean surface at a single instant in time. The simulation then propagates the waves from frame to frame by use of a Fast Fourier transform, using a realistic dispersion relation for deep water waves.1, 2, 5 Several techniques have been applied to ensure that a high degree of realism is achieved. The main features include:

• Tiling of a simulated finite patch of ocean water to produce a continuous ocean surface over a large domain.

• Wave animation at dual resolution covering both a low and high spatial frequency height field using bump-mapping for increased fidelity.

• The use of view-dependent Level-of-Detail (LOD) approaches.

• Trochoidal wave shape for emulating wave choppiness in windy conditions.

• Simulation of foam and whitecaps in choppy conditions.

• Physics-based radiometry of reflected and emitted light for simulating infrared and visual imagery.

VIRMotion is a GUI-based tool for configuring parameters that influence the dynamics of ships and boats, as well as their interaction with the surrounding sea and related secondary effects such as wake, foam and spray. The boat dynamics are based on the major forces and moments acting on a faceted hull model as a result of its immersion and motion. These allow the interaction of the boat with water to be modelled realistically. Gravitation and hydrostatic buoyancy are responsible for the free floating of the boat. The propulsion force drives the boat and is applied to its stern, where vertical and horizontal deflections allow for steering. Drag and lift result from the relative motion of the water against the boat. The buoyancy, drag and lift forces and their moments are calculated separately for each hull facet. Buoyancy arises from the hydrostatic pressure exerted by water on immersed facets, while drag and lift are assumed to be proportional to the square of the facet relative velocity. With all forces specified, the total force and torque on the boat are calculated by vector summation across the facets and fed into dynamic equations that determine the full 6-DOF motion of the boat. It is worth mentioning that all forces exerted on the boat are calculated from the total instantaneous water surface, including waves and wakes from other boats or ships, thus allowing for realistic dynamic interactions between boats in close proximity.
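The per-facet force summation described above can be sketched as follows. This is an illustrative reconstruction, not VIRSuite code: the water density, drag coefficient and the omission of lift, propulsion and torque-reference details are all simplifying assumptions.

```python
import numpy as np

RHO_WATER = 1025.0  # kg/m^3, assumed seawater density
G = 9.81            # m/s^2

def facet_forces(centroids, normals, areas, velocities, water_height,
                 drag_coeff=1.0):
    """Sum buoyancy and quadratic drag over immersed hull facets (sketch).

    centroids, normals, velocities: (N, 3) arrays per facet; areas and
    water_height: (N,) arrays. Buoyancy follows hydrostatic pressure on
    immersed facets; drag is proportional to the square of each facet's
    velocity relative to the water. Torques are taken about the origin,
    assumed here to coincide with the boat's centre of mass.
    """
    depth = water_height - centroids[:, 2]           # immersion depth per facet
    immersed = depth > 0.0
    # Hydrostatic pressure acts opposite the outward facet normal
    pressure = RHO_WATER * G * np.where(immersed, depth, 0.0)
    f_buoy = -pressure[:, None] * normals * areas[:, None]
    # Quadratic drag opposing the facet's motion relative to the water
    speed = np.linalg.norm(velocities, axis=1, keepdims=True)
    f_drag = -0.5 * RHO_WATER * drag_coeff * areas[:, None] * speed * velocities
    f_drag[~immersed] = 0.0
    total = f_buoy + f_drag
    return total.sum(axis=0), np.cross(centroids, total).sum(axis=0)
```

A horizontal facet of area 2 m² held 1 m under the surface, for example, yields an upward buoyancy force of ρg × depth × area ≈ 20.1 kN.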


Figure 3: Example of a visual band maritime scene simulation

VIRMotion also provides configuration services for the wake generation subsystem, within which the wake resulting from the displacement of water generated by the boat motion is simulated. For real-time simulation, we currently adopt an approximate approach that models a complex water surface disturbance as a combination of a large number of individual local disturbances (wave particles).1, 2, 6 The starting point is the computation of the water volume displaced by each facet of the moving hull during a simulation time interval. Wave particles are launched around the hull, the amplitude of each being proportional to the local volume disturbance. Each particle spreads into an arc-shaped wavefront, allowing a smooth wave field to be generated by superposition, thus simulating the wake. The reader is referred to previous papers for a more detailed description.1, 2, 6 Configuration services are also provided for the wake trail, foam and spray subsystems responsible for simulating effects such as spray, foam and rooster tails. These phenomena play a significant role in adding to the realism of the boat and wave simulation. Since it is not currently feasible to calculate the effects in real time from physics-based models, we instead use the graphics techniques of billboard particles and decals,1 and match their dynamic properties to the boat and wake dynamics.
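A minimal sketch of the wave-particle superposition idea follows. The cosine-shaped bump, the fixed bump width and the constant propagation speed are assumptions for illustration; the actual particle shape and spreading law are given in the cited references.

```python
import numpy as np

def wake_height(grid_x, grid_y, particles, t, speed=1.5, width=1.0):
    """Superpose expanding wave particles into a smooth wake height field.

    Each particle is (x0, y0, amplitude, birth_time). A particle contributes
    a radially expanding cosine bump whose front radius grows at `speed`;
    summing the bumps over all particles approximates the wake surface.
    """
    h = np.zeros_like(grid_x, dtype=float)
    for x0, y0, amp, t0 in particles:
        r = np.hypot(grid_x - x0, grid_y - y0)   # distance from launch point
        d = r - speed * (t - t0)                 # offset from current wavefront
        mask = np.abs(d) < width
        h += np.where(mask, amp * 0.5 * (1.0 + np.cos(np.pi * d / width)), 0.0)
    return h
```

Evaluated on a grid, a particle launched at the origin at t = 0 produces a ring of raised water whose peak sits exactly at the current front radius.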

3. LADAR SYNTHETIC SCENE GENERATION

Ladar system performance depends on a number of factors, including the laser emitter and sensor/receiver characteristics, atmospheric propagation effects, target reflection properties and signal processing algorithms. Development of a high fidelity ladar simulation system to support engineering studies would have to address each factor in turn and would involve specific modelling of hardware and software components. Since our current focus is limited to scene generation, we do not explicitly address ladar hardware and software modelling. We are also limited by our requirement for real-time processing, which necessitates the adoption of simplified simulation methods. For the present purpose we assume that the system under test is a flash ladar based on a staring focal plane array. In such a system, the range information is collected simultaneously by an array of detectors, enabling a high resolution range image to be generated on a frame-by-frame basis. Examples of low resolution flash ladar images are shown in Figure 4. Our scene generation system is able to simulate such imagery in real time, together with the generation of synthetic infrared imagery. In principle, the system could also be used to simulate scanning ladar imagery. In this case, a composite range image is built up from the outputs of one or a few scanning detectors. A relatively long time then exists between frames, during which the scene can change. As a result, the number of scene rendering passes needed to generate one composite frame exceeds the capability of current real-time computer graphics hardware. Although special simplified techniques could be developed to encompass scanning ladars, such developments lie beyond the scope of this paper.

Figure 4: Examples of low resolution real images from a flash ladar. The left panels show colour-coded range images. In-band infrared images are shown in the right panels.

3.1 Light sources

Ladar imaging requires the presence of a laser scene illuminator. In our system, the laser is simulated as a point source situated at the position of the imaging detector array. The laser beam is characterised by its central wavelength and its spectral width is assumed to be of the order of nanometres. The laser beam profile is further assumed to be Gaussian around its centreline; i.e.,

I = I₀ exp(−φ²/2σ₁²),

where φ is the angular distance from the laser beam propagation direction and where the standard deviation σ₁ of the intensity distribution defines the beam divergence. In our scene generation system, the camera field of view (FOV) is set at approximately 6σ₁. This ensures that most of the light reflected from the target towards the camera is captured. For a choice larger than 6σ₁ the detectors near the edge of the array would not be illuminated. In the opposite case the illumination would be more uniform but not all of the laser return would be captured. Other light sources can be included in addition to the laser. Reflected light from sources such as the sun, sky and clouds, together with target and path radiance, contributes to the passive background. Matching our passive infrared scene generation process, we implement the passive light sources in the form of cubemaps, which determine the background scenery in all directions that need to be represented.3, 7

3.2 Material properties and target light reflection model

Ladar sensors capture the light reflected into the camera FOV from 3D objects in the scene, a result of illumination from the primary laser source and from secondary sources such as those mentioned above. The reflection properties of each object in the scene are characterised by a Bidirectional Reflectance Distribution Function (BRDF). In our current first generation system we simulate BRDFs by adopting a simple Blinn-Phong reflection model, allowing us to assign Blinn-Phong parameters and an in-band reflection coefficient to each vertex on each object model. More sophisticated BRDF models may be implemented in future but will need to be assessed in terms of their real-time processing requirements. For the present, the Blinn-Phong model is a good first choice.


In the particular case of infrared reflection from a water surface, specularity can be assumed to a good degree of approximation. The angular dependence of both the reflectance and the emissivity can be calculated from the Fresnel equations and tabularised or fitted to a simple functional form to aid fast real-time calculations.3

3.3 Light propagation

Light from the primary laser and from secondary sources is attenuated during propagation through the atmosphere, on its way to the target and after reflection towards the sensor. We use MODTRAN as a standard tool for calculating in-band transmission coefficient values. Results of offline MODTRAN calculations across a range of camera and target altitudes and slant ranges are tabularised and/or fitted to a parametric function.3 During real-time simulation, a single value of the transmission coefficient is extracted on each frame by interpolation of the tabulated data or by calculation from the fitting function. MODTRAN is also used for calculating path radiance. In general, the process of calculating infrared radiometry for real-time ladar scene generation is similar to infrared scene generation and readers are referred to our previous paper3 for an extended description. It is worth mentioning that for ladar scene simulation the spectral width involved is very narrow, of the order of nanometres, and therefore the contributions from all passive secondary sources, including target and water emission and path radiance, are small. The passive contributions are still important, however, since they constitute unwanted background and diminish the signal-to-noise ratio (or, rather, the signal-to-clutter ratio). Their effect is to add uncertainty to the range estimate, particularly for weak signal returns contaminated by clutter.
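The per-frame transmission lookup can be sketched as a simple interpolation over a precomputed table. The table values below are invented for illustration; real entries would come from offline MODTRAN runs for the relevant altitudes and band.

```python
import numpy as np

# Hypothetical table of in-band atmospheric transmission versus slant range,
# as might be produced by offline MODTRAN runs for one altitude pair.
RANGES_KM = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
TRANSMISSION = np.array([0.95, 0.90, 0.82, 0.60, 0.35])

def in_band_transmission(slant_range_km):
    """Per-frame lookup: linear interpolation of the tabulated MODTRAN data."""
    return np.interp(slant_range_km, RANGES_KM, TRANSMISSION)
```

Fitting a parametric function (e.g. an exponential in range) instead of interpolating is the alternative mentioned in the text; either way only one scalar is evaluated per frame.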
3.4 Signal noise

In ladars, the range estimation process depends on how much of the return laser light is captured by the detectors and whether the system can separate the useful signal from the unwanted noisy background. Stochastic contributions include atmospheric fluctuations, detector dark current and amplifier noise. For sensitive detectors capable of detecting weak signals (especially single photon detectors), signal-induced shot noise might be the dominant contribution. Instead of implementing a specific model for each possible source of noise, we simulate the overall signal noise by adding a stochastic irradiance δE to the detector irradiance E. δE is assumed to be a random variable governed by a Gaussian probability distribution P(δE):

P(δE) = exp(−δE²/2σ₂²) / (σ₂√(2π)),

where σ₂ is the standard deviation. For some systems, a Gaussian distribution may not be a good choice. In such cases, other more appropriate models can easily be adopted. For example, when shot noise is the dominant contribution a Poisson distribution would be appropriate. For most systems, however, where many sources contribute to the overall noise, a Gaussian noise model is a sensible first choice.

3.5 Range image generation

Within GPU-based scene generation, range information is extracted as the difference between the positions of the camera and the objects in the world coordinate system (interpolated from the positions of the vertices within the 3D wireframe models). The resulting range image is an idealisation. In real ladar systems the range information is derived only when sufficient laser light is reflected from an object to be detected by the sensor. In order to simulate this process, we consider the total radiance from all sources, including again the secondary passive sources and adding the random noise component described above. Only those pixels for which the overall detected signal is larger than a threshold value are valid and form the simulated range image. The threshold should be chosen sufficiently high to minimise false range estimation due to clutter but, at the same time, should be as low as possible to allow for the proper utilisation of weak return signals.
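The thresholding step can be sketched as follows; the noise level, threshold value and array shapes are invented for illustration, not figures from the paper.

```python
import numpy as np

def simulate_range_image(true_range, signal, rng, sigma_e=0.02, threshold=0.1):
    """Threshold noisy detector irradiance to form the valid range image.

    true_range, signal: per-pixel arrays (idealised range and detected
    irradiance, in arbitrary units). Gaussian irradiance noise of standard
    deviation sigma_e is added per pixel; pixels whose noisy signal falls
    below the detection threshold are marked invalid (NaN).
    """
    noisy = signal + rng.normal(0.0, sigma_e, size=signal.shape)
    return np.where(noisy > threshold, true_range, np.nan)
```

Raising `threshold` suppresses clutter-induced false ranges at the cost of discarding weak genuine returns, which is exactly the trade-off described above.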


As mentioned above, random signal noise and passive clutter can affect the derivation of range information. Even in the case of strong signal returns, the range estimation is contaminated by a small additive error. We assume here that the random error δR associated with the range measurement R is described by the normal probability distribution

P(δR) = exp(−δR²/2σ₃²) / (σ₃√(2π)),

where σ₃ is a system-dependent standard deviation. Errors randomly generated according to the above distribution are added to the final range image. Examples of simulated range images in the presence and absence of noise are shown in Figure 5. Colour-coded range images as viewed by the camera are shown in the upper panels, where the range noise is seen as random changes in pixel colour. For better visualisation of the three dimensionality of the image data, in the lower panels the same range scene is slightly rotated; that is, it is shown from a different angle from that of the camera line-of-sight. In this case, the pixels are laterally shifted, thus accentuating the noise, and shadows appear in regions where camera image data does not exist.
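The final noise step can be sketched as below; sigma_r stands in for the system-dependent σ₃, whose value must come from characterisation of the modelled ladar.

```python
import numpy as np

def add_range_noise(range_image, sigma_r, rng):
    """Add a Gaussian range error δR ~ N(0, sigma_r²) to every valid pixel.

    Invalid pixels (NaN, i.e. below the detection threshold) are left
    untouched so that they remain marked as having no range estimate.
    """
    noise = rng.normal(0.0, sigma_r, size=range_image.shape)
    return np.where(np.isnan(range_image), range_image, range_image + noise)
```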

Figure 5: Effect of noise on range images: left panels - noiseless range image, and right panels - with noise included


4. TERRAIN SIMULATION

4.1 Introduction

We are currently developing a terrain simulation capability in order to enable land to be integrated with our existing ocean scenery. With this development, VIRSuite can be used for the generation of near-shore and dry land scenes in addition to deep ocean scenes. In our scene simulation processing, ocean and land are considered essentially as height-fields that define a 3D surface, together with attribute maps that span the surface and encapsulate all data pertinent to the observational physics of the targeted sensor. Both domains share common elements but also present unique challenges. The challenges in implementing a land simulation in VIRSuite arise from differences in both the nature of the respective geometries and their surface properties.

4.1.1 Geometry challenges

The ocean is geometrically dynamic in nature but can be efficiently modelled computationally since it is a composition of simple periodic waves. Although physically expansive, the entire ocean can be modelled by constraining the modelling process to a small region and to wavelengths that are related to the region size. These constraints allow an extensive ocean surface to be composed by tiling smaller regions into a much larger surface mosaic. In the terrain case, although land is static, it can only be defined by highly variable and non-repetitive elevation and attribute data. The consequence is that the quantity of data required to adequately describe the land surface typically exceeds physical memory for all but the simplest of scenarios.

4.1.2 Surface attributes

Water bodies consist of a predominantly homogeneous material, allowing us the opportunity to dynamically synthesise the surface properties. Conversely, land may be covered by vastly varying materials characterised by a variety of properties. The surface material properties must be specified explicitly and tend to stress the available physical memory to an even greater extent than does the elevation data.
In order to circumvent physical memory limitations and reduce computational loads, Level of Detail (LOD) schemes may be employed that serve to minimise the complexity of the geometry, typically as a function of range from the viewpoint and viewing frustum. Their objective is to increase rendering performance whilst maintaining apparent detail at the eye plane. From the many published LOD schemes available, we have chosen to implement Geometry Clipmaps8 for our combined land and water simulation.

4.2 Geometry clipmaps

Geometry clipmaps are amenable to GPU implementation10 and are characterised by several features that can be exploited in implementing both land and water scenery, notably concentricity of resolution and toroidal data updates.

4.2.1 Concentricity of resolution

Geometry clipmaps can be viewed as concentric ring-like meshes (albeit square rings) of reducing resolution, centred along the viewpoint axis. The innermost ring is filled with a high resolution mesh. As the eyepoint distance increases, rings are removed from the centre outwards, as illustrated in Figure 6, leaving only rings of a size that meets a minimum resolution criterion, typically based on range and viewing angle.


Figure 6: Mesh structure

As illustrated in Figure 7, geometry clipmaps are realised by constructing a pyramid of downsampled data (elevation, land cover classification, texture, etc.) over a number of levels. The geometry clipmap caches a square window of the data at each level. Each clipmap covers the same number of vertices but represents a different geometrical extent as determined by its level in the hierarchy. The clipmaps are rendered in order from the most detailed level to the least. At each level, the internal regions covered by more detailed clipmaps are not re-rendered. In combination, the clipmaps therefore form concentric, reducing level-of-detail rings of terrain data. The clipmaps are reconstructed from frame to frame as the viewpoint and view direction are moved.

Figure 7: Clipmap level hierarchy. Each clipmap is identical in terms of data size but represents a different geometrical extent relating to its respective level in the pyramid.

Since the LOD evaluation is not directly determined by any dynamic land or water characteristics, the same mesh can be applied to both land and water. Typically, however, land data is more commonly available at coarse resolution (e.g., 10m or 30m) whereas in our ocean simulation the mesh size is chosen at 1m or less in order to allow the boat/water interaction dynamics to be properly calculated. Sampling a land mesh at the same level of resolution would be inherently wasteful since the input data would necessarily have to be interpolated, imposing a performance penalty without any quality improvement benefit. With geometry clipmaps this problem can be addressed by stripping away inner rings of the mesh until the mesh resolution matches that of the land data. The mesh itself and all of its vertex attributes remain unchanged. The payoff is that one LOD evaluation is shared by both the land and water render processes, thus reducing overall processing.
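The ring-stripping idea can be sketched as a simple heuristic. The constant k and the power-of-two spacing law are assumptions for illustration; the paper does not specify the criterion VIRSuite uses.

```python
import math

def finest_active_level(eye_distance, finest_spacing=1.0, k=0.01):
    """Pick the finest clipmap level worth rendering for a given eye distance.

    Inner rings whose sample spacing is finer than k * eye_distance are
    stripped away, so screen-space triangle density stays roughly constant.
    Level spacing is assumed to double per level, with level 0 at
    finest_spacing metres (1 m here, matching the ocean mesh).
    """
    target = eye_distance * k
    if target <= finest_spacing:
        return 0
    return int(math.floor(math.log2(target / finest_spacing)))
```

With these assumed constants, land data gridded at 10 m would simply never activate levels 0-3, while the 1 m water mesh uses them close to the viewpoint, so one LOD evaluation serves both surfaces.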

Proc. of SPIE Vol. 8015 80150G-9

Downloaded From: http://proceedings.spiedigitallibrary.org/ on 08/27/2013 Terms of Use: http://spiedl.org/terms


4.2.2 Toroidal data updates

The most significant hurdle to real-time land rendering is the low bandwidth at which data can be streamed from disk to CPU RAM and from CPU RAM to video RAM. Traditional terrain rendering approaches manage terrain data as a mosaic of tiles that are size-optimised and compressed for fast loading from disk, and they require complex caching systems and compression algorithms to minimise disk I/O further. Once in CPU RAM, the tiles must still be sent on demand to the video card across the relatively slow video bus. Toroidal updates, as demonstrated in Figure 8, together with the pyramidal LOD hierarchy of geometry clipmaps, can be used to simplify the process of loading and caching terrain, buying valuable performance gains and simplifying data management. A further benefit is that terrain data is cached directly in video RAM without the need for caching in CPU RAM.

Figure 8: a) data centred about the existing viewpoint; b) the viewpoint moves and the data origin is relocated with respect to the buffer origin; c) newly visible data is updated toroidally by wrapping the coordinate system with respect to the new data origin; d) the toroidal buffer is equivalent to a view-centred buffer and avoids the necessity of updating the entire buffer.
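The steps in Figure 8 can be sketched with a small toroidal buffer: the buffer never moves in memory; world coordinates are simply wrapped modulo the buffer size, so when the viewpoint shifts only the newly exposed rows or columns are rewritten. The class and method names below are illustrative, not from the paper:

```python
class ToroidalBuffer:
    """2-D cache addressed toroidally: world indices wrap modulo the
    buffer size, so a viewpoint shift rewrites only newly exposed data."""

    def __init__(self, size):
        self.size = size
        self.data = [[None] * size for _ in range(size)]

    def _wrap(self, world_index):
        return world_index % self.size

    def write(self, wx, wy, value):
        self.data[self._wrap(wy)][self._wrap(wx)] = value

    def read(self, wx, wy):
        return self.data[self._wrap(wy)][self._wrap(wx)]

    def shift_x(self, old_origin_x, new_origin_x, source):
        """Viewpoint moved in +x: refill only the columns that entered
        view; all other cells are left untouched."""
        for wx in range(old_origin_x + self.size, new_origin_x + self.size):
            for wy in range(self.size):
                self.write(wx, wy, source(wx, wy))
```

A one-column shift of an n x n buffer thus updates n cells rather than n squared, which is the performance gain referred to above. On the GPU the same wrapping is achieved with repeat-mode texture addressing and partial texture updates.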

4.3 Surface attribute representations

Since VIRSuite is being extended to provide a simulation environment for multiple infrared band sensors, ladar sensors and future imaging sensors, land surface attributes covering the multiplicity of options need to be accommodated. The obvious method by which this may be accomplished is to encode attributes into texture colour channels. Depending on the sensor type, these attributes can then be interpreted appropriately when the scene is rendered. However, different attribute sets must be built to cover all the supported sensors or, at the very least, the sensors of current interest. The task of generating the attribute data is laborious and acutely sensitive to the availability of detailed measured data. Note that VIRSuite has the capability to switch sensor modes on the fly (e.g., from visual to LWIR to MWIR and back to visual); each such switch requires attribute data fetched from a different database.

4.4 Surface attribute indexing by global land cover classification

In the infrared domain, it has been our preference to maintain surface attribute data in terms of apparent temperature and emissivity and to apply the Planck equation dynamically in converting to in-band radiance. The latter step is a one-off process on the initial switch to the desired waveband and works well for objects such as boats and aircraft. The approach is not, however, well suited to land surface simulation owing to the sheer quantity of data requiring processing. Given this limitation, and the fact that real high-resolution land surface data is difficult to obtain, we have decided to use land cover classifications (LCCs) as references to small representative regions of surface attribute data. A number of sources of global LCC data are listed in Table 1.
In support, we maintain a set of LCC representative attribute regions in RAM and ensure that they are tileable in order to allow for synthesizing more expansive surface regions by mosaic composition. Since attribute data is subsequently confined to a number of small manageable




regions, we can once again employ run-time transformation methods to further specialise the attributes (e.g., as in the infrared case, by transforming from temperature and emissivity to in-band radiance).
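The temperature-and-emissivity to in-band radiance conversion mentioned above amounts to integrating the Planck law over the sensor waveband. The sketch below uses the standard radiation constants; the band limits, step count and function names are illustrative assumptions rather than VIRSuite's actual implementation:

```python
import math

# Standard radiation constants for wavelength expressed in micrometres.
C1 = 1.191042e8   # first radiation constant, W um^4 / (m^2 sr)
C2 = 1.4387752e4  # second radiation constant, um K

def planck_spectral_radiance(wavelength_um, temperature_k):
    """Blackbody spectral radiance, W / (m^2 sr um)."""
    return C1 / (wavelength_um ** 5
                 * (math.exp(C2 / (wavelength_um * temperature_k)) - 1.0))

def in_band_radiance(temperature_k, emissivity, band=(8.0, 12.0), steps=200):
    """In-band radiance by trapezoidal integration of the Planck law
    over the waveband, scaled by a (grey-body) emissivity."""
    lo, hi = band
    dl = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        wl = lo + i * dl
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * planck_spectral_radiance(wl, temperature_k)
    return emissivity * total * dl
```

For a surface pixel this evaluation is performed once, on the initial switch to the desired waveband, which is why it suits sparse targets better than full land surfaces.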

Name of agency and/or data set                 Repository URL
USGS Global Land Cover Characterisation        http://edc2.usgs.gov/glcc/glcc.php
UMD Land Cover Classification                  http://esip.umiacs.umd.edu/data/landcover/
European Space Agency Ionia GlobCover          http://ionia1.esrin.esa.int/
European Commission, Global Land Cover 2000    http://bioval.jrc.ec.europa.eu/products/glc2000/glc2000.php

Table 1: Global land cover data sources

At run-time, a splatting technique is used, as illustrated in Figure 9: the attribute data referenced by the LCC at any given terrain location is the source of the attribute data used to produce the final image at that location. The end result is a composite surface that is a good representation of the actual land being simulated. This approach offers several benefits but also has a drawback. The benefits are that the total amount of data storage is dramatically reduced, the data preparation process is much simplified and, given the availability of LCC data both global and regional, sourcing surface data is much less of a problem. The drawback is that required fine-resolution characteristics, such as geographical anomalies, roads, rivers, etc., need to be incorporated by supplemental techniques. 3D culture such as trees and buildings also requires its own management mechanism; note, however, that this would be the case whichever land rendering approach was adopted. It is our intention to explore culture implementations in a later evolution of VIRSuite.
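The per-texel splatting step can be sketched as a weighted blend of the tileable attribute regions referenced by the local land cover classes, with wrapped lookups so that the small representative regions tile seamlessly. The function name, data layout and weights here are illustrative assumptions:

```python
def splat(x, y, class_weights, attribute_tiles):
    """Blend attribute values at terrain texel (x, y).

    class_weights:   {lcc_id: weight} at this texel (weights sum to 1).
    attribute_tiles: {lcc_id: 2-D list of attribute values}, each tile
                     assumed tileable so lookups wrap modulo its size.
    """
    value = 0.0
    for lcc, weight in class_weights.items():
        tile = attribute_tiles[lcc]
        rows, cols = len(tile), len(tile[0])
        value += weight * tile[y % rows][x % cols]  # wrapped lookup
    return value
```

On the GPU the same blend is typically performed in a fragment shader, with the class weights supplied as a blend-map texture.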

Figure 9: Splatting – composing the complete surface by blending imagery of unique classes of land surface. Image courtesy of http://blog.nostatic.org/2007/11/3d-landscape-rendering-with-texture.html




6. SUMMARY

In this paper we have described the development of a ladar scene simulator as an evolutionary extension of our VIRSuite scene simulation program. It allows the real-time synthetic generation of both range and infrared imagery, currently of maritime interest, within a standard CPU/GPU computing environment. The simulation has been constructed in such a manner that it can potentially be used in diverse roles such as hardware-in-the-loop simulation, synthetic injection and sensor modelling. A terrain simulation module is currently being developed, and it is expected that new scene capabilities and different sensor applications will be added in the future.

REFERENCES

[1] Swierkowski, L., Gouthas, E., Christie, C.L. and Williams, O.M., “Boat, wake and wave real-time simulation,” Technologies for Synthetic Environments: Hardware-in-the-loop Testing XIV, Proc. SPIE 7301, 73010A (2009).

[2] Gouthas, E., Christie, C.L., Swierkowski, L. and Williams, O.M., “Real-time maritime scene simulation,” Technologies for Synthetic Environments: Hardware-in-the-loop Testing XV, Proc. SPIE 7663, 76630M (2010).

[3] Williams, O.M., Christie, C.L., Shen, G., Gouthas, E. and Swierkowski, L., “Real-time scene generation infrared radiometry,” Technologies for Synthetic Environments: Hardware-in-the-loop Testing XV, Proc. SPIE 7663, 76630N (2010).

[4] Christie, C.L., Gouthas, E., and Williams, O.M., “Graphics Processing Unit (GPU) real-time infrared scene generation,” Technologies for Synthetic Environments: Hardware-in-the-loop Testing XII, Proc. SPIE Vol. 6544, 65440C (2007).

[5] Tessendorf, J., “Simulating ocean surfaces,” SIGGRAPH 2004 course notes, http://tessendorf.org/papers_files/coursenotes2004.pdf

[6] Yuksel, C., House, D.H. and Keyser, J., “Wave particles,” ACM Trans. Graph. 26(3), Article 99 (2007).

[7] “OpenGL cube map texturing,” http://developer.nvidia.com/object/cube_map_ogl_tutorial.html, NVIDIA Corporation (1999).

[8] Losasso, F. and Hoppe, H., “Geometry clipmaps: terrain rendering using nested regular grids,” ACM Transactions on Graphics (Proceedings of SIGGRAPH 2004) 23(3), 769–776 (2004).

[9] Asirvatham, A. and Hoppe, H., “Terrain rendering using GPU-based geometry clipmaps,” GPU Gems 2, Chapter 2, Addison-Wesley, ISBN 0-321-33559-2 (2005).

