

112

Realistic Perspective Projections for Virtual Objects and Environments

FRANK STEINICKE and GERD BRUDER
University of Würzburg
and
SCOTT KUHL
Michigan Technological University

Computer graphics systems provide sophisticated means to render virtual 3D space to 2D display surfaces by applying planar geometric projections. In a realistic viewing condition the perspective applied for rendering should appropriately account for the viewer's location relative to the image. As a result, an observer would not be able to distinguish between a rendering of a virtual environment on a computer screen and a view "through" the screen at an identical real-world scene. Until now, little effort has been made to identify perspective projections that human observers judge to be realistic.

In this article we analyze observers' awareness of perspective distortions of virtual scenes displayed on a computer screen. These distortions warp the virtual scene and make it differ significantly from how the scene would look in reality. We describe psychophysical experiments that explore the subject's ability to discriminate between different perspective projections and identify projections that most closely match an equivalent real scene. We found that the field of view used for perspective rendering should match the actual visual angle of the display to provide users with a realistic view. However, we found that slight changes of the field of view in the range of 10-20% for two classes of test environments did not cause a distorted mental image of the observed models.

Categories and Subject Descriptors: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

General Terms: Human Factors

Additional Key Words and Phrases: Perspective projection, scene perception, psychophysics

This work was partly funded by grants from the Deutsche Forschungsgemeinschaft (DFG) in the scope of the LOCUI project.
Authors' addresses: F. Steinicke (corresponding author) and G. Bruder, Immersive Media Group, University of Würzburg, Germany; email: [email protected]; S. Kuhl, Department of Computer Science, Michigan Technological University.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or [email protected].
© 2011 ACM 0730-0301/2011/10-ART112 $10.00

DOI 10.1145/2019627.2019631
http://doi.acm.org/10.1145/2019627.2019631

ACM Reference Format:

Steinicke, F., Bruder, G., and Kuhl, S. 2011. Realistic perspective projections for virtual objects and environments. ACM Trans. Graph. 30, 5, Article 112 (October 2011), 10 pages.
DOI = 10.1145/2019627.2019631
http://doi.acm.org/10.1145/2019627.2019631

1. INTRODUCTION

In computer graphics one is often concerned with representing spatial scenes and relationships on a flat display screen. When we view imagery on a typical desktop screen, the screen subtends a relatively small visual angle at the observer's eye compared to the effective visual field of humans (approximately 200 degrees horizontally and 150 degrees vertically [Warren and Wertheim 1990]). We sometimes refer to this angle as the Display Field Of View (DFOV) (see Figure 2). Most of today's computer screens have a screen size of roughly 13″ to 30″, thus the DFOV averages 28 to 60 degrees diagonally, assuming an ergonomic viewing distance of 65 cm.
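As a rough check of the numbers above, the diagonal DFOV of a screen viewed head-on can be computed from its diagonal size and the viewing distance. A minimal sketch (the helper name is ours, not from the paper):

```python
import math

def diagonal_dfov_deg(diagonal_cm: float, distance_cm: float) -> float:
    """Diagonal display field of view for a screen viewed head-on at its center."""
    return 2.0 * math.degrees(math.atan(diagonal_cm / (2.0 * distance_cm)))

# 13-inch and 30-inch diagonals (1 inch = 2.54 cm) at the ergonomic 65 cm distance
print(round(diagonal_dfov_deg(13 * 2.54, 65), 1))  # roughly 28 degrees
print(round(diagonal_dfov_deg(30 * 2.54, 65), 1))  # roughly 61 degrees
```

This reproduces the 28-60 degree diagonal range quoted in the text.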

In order for a virtual scene to be displayed on a computer screen, the computer graphics system must determine which part of the scene should be displayed where on the screen. In 3D computer graphics, planar geometric projections are typically applied, which make use of a straightforward mapping of graphical entities in a 3D "view" region—the so-called viewing or view frustum—to a 2D image plane. During the rendering process objects inside the view frustum are projected onto the 2D image plane; objects outside the view frustum are omitted. The exact shape of the view frustum will often be a symmetric or asymmetric truncated rectangular pyramid. The angle at the peak of the pyramid, often denoted as the Geometric Field Of View (GFOV) [McGreevy et al. 1985], should match the DFOV for the imagery to be projected in a geometrically correct way.

Perspective projections of virtual environments on a computer screen are affected by the interplay between the geometric field of view and the field of view provided by the display on which the virtual 3D environment is projected (see Figure 1). For instance, if the GFOV is greater than the DFOV, rendered objects appear smaller or "stretched" on the computer screen, with inverse effects caused by a smaller GFOV. Although the perceptual consequences of rendering virtual scenes with different GFOVs are not clear, graphics designers and developers often choose GFOVs that vary from the DFOV. The choice of a certain GFOV is often driven by aesthetics, by the necessity to display a certain amount of virtual space, or by a rule-of-thumb; this is confirmed by disclosures of developers as well as by a review of existing 3D modeling applications and games. Though correct perception of metric properties is not always intended or necessary in the field of entertainment,

ACM Transactions on Graphics, Vol. 30, No. 5, Article 112, Publication date: October 2011.


112:2 • F. Steinicke et al.

(a) GFOV = 10° (b) GFOV = 20° (c) GFOV = 30° (d) GFOV = 40° (e) GFOV = 50° (f) GFOV = 60°

Fig. 1. Which Utah teapot best depicts the physical counterpart? The teapot is rendered with different Geometric Fields Of View (GFOVs). The vertical fields of view gradually increase by 10° from left (10°) to right (60°); the horizontal field of view is adjusted such that the image ratio is maintained. The virtual camera is translated forward or backward to ensure that the projected teapot covers the same amount of screen space.


Fig. 2. Illustration of geometric, display, and object fields of view. The bottom-left inset shows a top view of the scene; the bottom-right inset shows a generic view frustum.

it is essential for application scenarios in which the virtual scene serves as a basis for physical reconstruction, as is often the case in the virtual prototyping, 3D modeling, architecture, and computer-aided design domains. Due to the known problems with perspective renderings, these applications often also provide orthographic projections, which conserve the relative size of virtual geometry during rendering on the image plane regardless of depth, but do not support linear perspective depth cues. Therefore, only a limited 3D impression can be conveyed by this kind of projection. As a result, switching between orthographic and perspective projections is often required to ensure a realistic spatial impression and accurate perception of scene metrics, emphasizing the importance of undistorted perspective projections.

Perspective projections with matching DFOV and GFOV present a viewer with a geometrically correct image, as if the user were viewing the virtual scene through the frame of the screen as a window. It is frequently assumed that, in such a setup, observers perceive the metric properties of a virtual scene almost in the same way as they would perceive the properties of a corresponding real-world scene. However, this assumption has not been confirmed in previous research, in particular for Virtual Reality (VR) display environments [Steinicke et al. 2009; Franke et al. 2008; Ries et al. 2009]. For instance, in immersive VR systems larger GFOVs can be used [Kuhl et al. 2009] to compensate for compression effects in the user's distance perception often experienced in VR systems [Loomis and Knapp 2003; Loomis et al. 1996; Willemsen et al. 2009; Witmer and Kline 1998; McGreevy et al. 2005]. In the context of desktop-based environments, little research has been conducted to identify perspective projections that appear most realistic to users, and support correct perception of computer-generated virtual scenes.

In this article we investigate how accurately a user is able to detect perspective distortions when she is asked to compare a real scene to an equivalent virtual world scene rendered with varying amounts of distortion. These experiments also allow us to measure the amount of distortion that people are reliably able to detect.

The article is organized as follows. Section 2 provides necessary background information and presents an overview of related work. Section 3 introduces an exposure study, which reveals problems of using arbitrary or default GFOVs in rendering environments. Section 4 describes psychophysical experiments in which we analyze realistic projections for virtual objects and scenes. Section 5 discusses the experimental results. Section 6 concludes the article and discusses future work.

2. BACKGROUND AND RELATED WORK

2.1 Natural Perspectives in Photography

It is a challenging task to project a 3D space onto a planar display surface in such a way that metric properties of scene objects are perceived correctly by human observers. In order to extract 3D depth, size, and shape cues about visual scenes from planar representations, the human visual system combines various different sources of information [Murray 1994]. Since 2D pictorial or monocular depth cues, for example, linear perspective or retinal image sizes, are interpreted by the visual system as 3D depth, the information they present may be ambiguous [Kjelldahl and Prime 1995]. These ambiguities make it difficult to extract correct depth cues and are intentionally introduced in many common optical illusions [Gillam 1980]. For instance, painters have long known that linear perspective may produce pictures that convey incorrect object properties. Leonardo da Vinci formulated a rule-of-thumb which states that the best distance for depicting an object is about twenty times the size of the object [Zorin and Barr 1995]. In photography, different lenses are used to convey different spatial impressions. For example, in images taken using a wide angle of view, an object close to the lens appears abnormally large relative to more distant objects, and in distant shots with a narrow angle of view, the viewer cannot discern relative distances between distant objects. Although such distortions are often unintended, perspective distortions are sometimes applied deliberately for artistic purposes in photography and cinematography [Hagen 1980; Pirenne 1970]. Extension (wide angle) distortion is often implemented to emphasize some element of the scene by making it appear larger and spatially removed from the other elements. Compression (telephoto) distortion is often used to




give the appearance of compressed distance between distant objects in order to convey a feeling of congestion. Conversely, a normal lens provides a natural perspective, which approximates that of the human eye. As a rule-of-thumb, for a camera with a 35 mm image size (43 mm diagonal) a normal lens has a focal length of about 50 mm, though focal lengths between 40 mm and 58 mm are still regarded as normal [Stroebel 1999]. In cinematography, focal lengths of about twice the diagonal of the image size are regarded as normal, which corresponds to the typically greater viewing distance of about twice the diagonal of the screen.
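For reference, the diagonal angle of view of a rectilinear lens follows from the image diagonal and the focal length. A short sketch under the usual focus-at-infinity assumption (the helper name is ours):

```python
import math

def angle_of_view_deg(image_diagonal_mm: float, focal_length_mm: float) -> float:
    """Diagonal angle of view of a rectilinear lens focused at infinity."""
    return 2.0 * math.degrees(math.atan(image_diagonal_mm / (2.0 * focal_length_mm)))

# A "normal" 50 mm lens on the 35 mm format (43 mm image diagonal)
print(round(angle_of_view_deg(43.0, 50.0), 1))  # ~46.5 degrees
```

The resulting angle of roughly 47 degrees is close to the diagonal DFOV of a typical desktop screen, which is why such a lens yields a natural perspective.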

Although the retinal image changes when viewing a projected 2D image from different viewpoints, humans are in principle able to perceive stable object properties because of Emmert's law [Vishwanath et al. 2005]. However, it has only been shown that the estimation of object properties does not change when changing the viewpoint; it is not clear whether the estimation was correct in the first place.

2.2 View Frustums

The perspective view frustum defines the 3D volume in space which is projected to the image plane. As illustrated in Figure 2, a view frustum can be specified in view space by the distances of the near (near ∈ R+) and far (far ∈ R+) clipping planes, and the extents of the projection area on the near plane (left, right, top, bottom ∈ R) [Shreiner 2009]. Alternatively, a symmetrical view frustum can be specified using only the vertical geometric field of view¹

    GFOV = 2 · atan(top / near) ∈ ]0°, 180°[,    (1)

and the image ratio (ratio ∈ R+).

The nominal DFOV of a computer screen can be derived from the width (w) and the height (h) of the screen specified by the manufacturer, and the distance (d) of the user's head to the screen. Assuming a symmetrical view frustum, the vertical display field of view can be calculated with the following equation:

    DFOV = 2 · atan(h / (2 · d)),    (2)

whereas the horizontal display field of view can be calculated by replacing h with w in Eq. (2).
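Equations (1) and (2) translate directly into code. The following sketch (function names are ours) computes both vertical fields of view in degrees:

```python
import math

def gfov_deg(top: float, near: float) -> float:
    """Vertical geometric field of view (Eq. (1)) of a symmetric frustum."""
    return 2.0 * math.degrees(math.atan(top / near))

def dfov_deg(h: float, d: float) -> float:
    """Vertical display field of view (Eq. (2)): screen height h, head distance d."""
    return 2.0 * math.degrees(math.atan(h / (2.0 * d)))

# The desktop setup used later in Section 3.2: 30 cm screen height, 65 cm distance
print(round(dfov_deg(30.0, 65.0), 2))  # ~25.99 degrees
```

This reproduces the DFOV of 25.99° reported for the experimental setup in Section 3.2.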

Ideally, changes of the user's head pose would be tracked and used to update the perspective projection in such a way that the view frustum is always aligned with the half lines extending from the user's eyes (for stereoscopic projections) to the corners of the physical screen (refer to Figure 2) [Burdea and Coiffet 2003; Holloway and Lastra 1995]. To implement such constantly adjusting projections, additional hardware, that is, stereoscopic viewing and tracking technology, is required, which is often not available in typical desktop-based environments.

2.3 Minification and Magnification

Most modern computer screens provide relatively narrow fields of view in comparison to the effective visual field of humans (refer to Section 1). As a result, if a virtual world is rendered such that the GFOV matches the DFOV, a user would see less of the virtual space than he would see in the real world with an unrestricted view. Therefore, many graphics applications use a GFOV that is

¹For simplicity, we refer to GFOV and DFOV as the vertical fields of view, if not stated otherwise.


Fig. 3. Illustration of the relationship between the DFOV and the GFOV used for perspective rendering. The left image shows a magnification, the right image a minification of the virtual object.

larger than the DFOV to provide a wider view of the virtual scene. When the GFOV and DFOV differ, the virtual imagery is mini- or magnified [Polys et al. 2005]. As illustrated in Figure 3 (left), if the GFOV is smaller than the DFOV, the displayed image will appear magnified on the computer screen because the image must fill a larger subtended angle in real space than in virtual space. Conversely, if the GFOV is larger than the DFOV, a larger portion of the virtual scene needs to be displayed on the same screen space, which will appear minified (see Figure 3 (right)). Differences between the visual fields also change the spatial sampling of a rendered image; more pixels will be devoted to different parts of the scene.

Mini- or magnification changes several visual cues that provide information about metric properties of objects such as distances, sizes, shapes, and angles [Kjelldahl and Prime 1995]. For example, minification changes the visual cues in a way that can potentially increase perceived distances to objects [Kuhl et al. 2009]. On the other hand, magnification changes these cues in the opposite direction and can potentially decrease perceived distance. The benefits of distorting the GFOV are not clear. A series of studies was conducted to evaluate the role of the GFOV in accurate spatial judgments. Relative azimuth and elevation judgments in a perspective projection were less accurate for GFOVs greater than the DFOV [McGreevy et al. 1985]. This effect has been noted in see-through stereoscopic displays that match real-world viewing with synthetic elements [Rolland et al. 1995]. In contrast, room size estimation and distance estimation tasks were aided by a larger GFOV [Neale 1996]. The sense of presence also appears to be linked to an increased GFOV [Hendrix and Barfield 1996]. For other tasks, such as estimating the relative skew of two lines, a disparity between DFOV and GFOV was less useful [Rosenberg and Barfield 1995].

Mini- or magnification of the graphics is caused by changing the extents of the near plane, for example, by increasing or decreasing the vertical and horizontal GFOVs (see Figures 2 and 3). The described mini- and magnification can be implemented by means of field of view gains. The gain gF ∈ R+ denotes the ratio between geometric and display fields of view: gF = GFOV / DFOV. If the GFOV is scaled by a gain gF (and the horizontal GFOV is modified accordingly using ratio), we can determine the mini-/magnification with the following equation:

    m = tan(GFOV / 2) / tan((gF · GFOV) / 2)    (3)

The mini-/magnification m denotes the amount of scaling that is required to map the viewport (rendered with a certain GFOV and ratio) to the display (defined by its DFOV and ratio). If this mini-/magnification equals 1.0, a person will perceive a spatially accurate




image. Figure 3 (left) illustrates magnification (m > 1) of an image when the GFOV is decreased (gF < 1), whereas Figure 3 (right) illustrates minification (m < 1) with an increased GFOV (gF > 1). Since we are interested in identifying the observer's ability to discriminate between different perspective projections rather than discriminating between minified or magnified environments, we compensate mini- and magnification by translating the camera along the view direction. The amount of camera translation was selected such that the ratio of the visual angle of the rendered objects, which we refer to as Object Field Of View (OFOV), and the GFOV are identical (refer to Figure 1). In a symmetric viewing condition the distance between the camera and the center of the virtual object can be calculated by top / tan(GFOV / 2) for a given GFOV. In cinematography this technique is referred to as the dolly-zoom or vertigo technique [Stroebel 1999]. However, the scenes are displayed with different amounts of distortion (see Figure 1), and can significantly vary from viewing a real-world replica.

Table I. Single Virtual Objects Used in the Exposure Study

    Object   C1 (GFOV = 25.99°)   C2 (GFOV = 45°)
    (a)      +0.35%               +2.47%
    (b)      −6.98%               −8.63%
    (c)      +1.19%               −2.37%
    (d)      +3.67%               +16.00%
    (e)      −5.57%               −5.09%
    (f)      +2.76%               +9.17%
    (g)      +0.63%               −6.73%
    (h)      +2.27%               +6.77%
    (i)      +2.65%               −0.44%

The values show the amount of distortion perceived by the subjects under the conditions (C1) GFOV = 25.99° and (C2) GFOV = 45°.
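The mini-/magnification of Eq. (3) and the compensating camera translation can be sketched as follows (function names are ours; gfov_deg denotes the vertical GFOV before the gain is applied):

```python
import math

def mini_magnification(gfov_deg: float, gain: float) -> float:
    """Mini-/magnification m (Eq. (3)) when the vertical GFOV is scaled by gain gF."""
    half = math.radians(gfov_deg) / 2.0
    return math.tan(half) / math.tan(gain * half)

def compensated_distance(top: float, gfov_deg: float) -> float:
    """Camera-to-object distance top / tan(GFOV/2) that keeps the ratio of
    OFOV and GFOV constant (the dolly-zoom compensation described above)."""
    return top / math.tan(math.radians(gfov_deg) / 2.0)

print(mini_magnification(25.99, 1.0))        # 1.0: no distortion
print(mini_magnification(25.99, 0.8) > 1.0)  # True: smaller GFOV magnifies
print(mini_magnification(25.99, 1.6) < 1.0)  # True: larger GFOV minifies
```

Note that a gain below 1 yields m > 1 (magnification) and a gain above 1 yields m < 1 (minification), matching Figure 3.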

3. EXPOSURE STUDY

In an exposure study we analyzed how far discrepancies between GFOV and DFOV affect the perception of metric properties of virtual scenes. For this purpose, we displayed nine virtual 3D objects (see Table I) with the 3D modeling application 3D Studio Max 2010 SP1 developed by Autodesk.²

3.1 Participants

For the experiment we recruited nine male and four female subjects (aged 23-56, mean 28), experts in the domains of computer graphics, architectural design, 3D modeling, and CAD, each with at least two years of professional experience. Prior to the experiment subjects had to fill out questionnaires to rate their levels of expertise. All had normal or corrected-to-normal vision; six wore glasses or contact lenses. Six subjects rated themselves as experts in computer games, nine as experts in 3D modeling, and four as experts in CAD. All subjects were naïve to the experimental conditions. The total time per subject, including prequestionnaire, instructions, training, experiment, breaks, and debriefing, was approximately one hour for each experiment (refer to Sections 3.2, 4.1.1, and 4.2.1). Subjects were allowed to take breaks at any time, and were informed that they should focus on accuracy rather than performance.

²Autodesk's 3D Studio Max 2010 SP1 serves as one example. The default GFOV of the 3D preview within 3D Studio Max is 45°. Similar default GFOVs can be found in Cinema4D, AutoCAD, SketchUp, etc.




3.2 Material and Methods

Following the ergonomic guidelines described by Ankrum [1999], we positioned subjects in front of a computer screen. Each subject's head was fixed on a chin-rest in such a way that subjects assumed a posture that is both visually and posturally comfortable. We adjusted the chin-rest such that the distance from the eyes to the computer screen was 65 cm for each subject. We used a Fujitsu Siemens Scenicview P20-2 with a resolution of 1600 × 1200 and a screen size of 40 cm × 30 cm. The maximum vertical sync rate was 76 Hz and the maximum horizontal sync rate 82 kHz, with a response time of 8 ms. The image was displayed with a contrast ratio of 800:1, an image color temperature of 9300 K, and a brightness of 300 cd/m². The viewing angle to the monitor was 32.5° below horizontal eye level, and the monitor tilt was 32.5°. This setup resulted in a symmetric viewing condition with a DFOV of 25.99°, whereas 3D Studio Max 2010 SP1 provided a default geometric field of view of 45° for the 3D preview. Considering the DFOV provided by our setup (DFOV = 25.99°), the default GFOV provided by 3D Studio Max corresponds to a field of view gain of gF = 1.66 applied to the GFOV that matches the DFOV.

In order to determine the amount of perceived distortion, subjects had to estimate relations within the tested 3D objects. For instance, when viewing the tire (see Table I, object (b)), a subject's task was to estimate the ratio between the inner and outer circle; when viewing the fan (see Table I, object (g)), the task was to estimate the length of the fan blades relative to the stand. For the remaining objects subjects estimated the ratio of similar radii or side lengths.

Subjects were allowed to change their virtual viewpoint by using the trackball metaphor provided by the modeling application.

We tested similar relations for all objects under two different conditions: (C1) GFOV = 25.99° (i.e., the GFOV that matches the DFOV), and (C2) GFOV = 45° (i.e., the default GFOV provided by the application). The independent variables were the field of view used, that is, conditions C1 and C2, as well as the considered objects (see Table I (a)-(i)). As the dependent variable we measured the relative deviation of the estimation from the correct metric. We hypothesized that subjects would extract distorted metric properties of a 3D object when the DFOV varies from the GFOV.

H0. Estimation of metric properties improves under condition C1 in comparison to C2.

3.3 Results

Table I shows the discrepancy between the subjects' estimations and the real metric relations of the nine objects (a)-(i) under conditions C1 and C2. We analyzed the metric estimation with a two-way analysis of variance (ANOVA), testing the within-subjects effects of the field of view and the virtual object. The analysis revealed a significant main effect of the different fields of view in general (F(1, 495) = 4.24, p < 0.05) as well as of the fields of view for individual virtual objects (F(8, 740) = 6.34, p < 0.01). Post-hoc analysis with the Tukey test showed that subjects made significantly smaller errors in estimating the object relations for objects (d), (e), and (f) (p < 0.01) under viewing condition C1 compared to viewing condition C2. We found no significant difference for the other objects between conditions C1 and C2.

These results support hypothesis H0 that the subjects' ability to estimate object metrics improves under condition C1 in comparison to C2. On average, subjects were about 4% better at estimating metrics of a 3D object displayed with a GFOV that matches the DFOV (C1) than of an object displayed with the default of the modeling application (C2). For 7 of the 9 virtual objects, subjects were better at judging metric properties under condition C1. The relative errors made by subjects under condition C1 average 2.90%, whereas under condition C2 they average 6.41%.

3.4 Discussion

The findings of the exposure experiment suggest that subjects have difficulty estimating object metrics correctly when objects are rendered with default geometric fields of view as used in current modeling applications such as 3D Studio Max, which often differ from the DFOV. The results suggest that the GFOV should be adjusted to the physical viewing condition to provide an undistorted mental impression of displayed objects. However, the results do not reveal whether the estimation of object metrics is optimal for a GFOV that matches the DFOV (gF = 1) or for some other gain (considering perceptual biases [Loomis and Knapp 2003; Willemsen et al. 2009; Steinicke et al. 2009]), nor how aware humans are of perspective distortions of virtual environments displayed on a computer screen. The latter closely relates to the question of how an observer's mental impression of a model is affected by slight field of view mismatches, such as those caused by head movements in front of a display. In particular, if there is a steep perceptual drop-off from the optimal GFOV towards greatly increased or decreased GFOVs, tracking the user's head in front of the display becomes a necessity; otherwise, precalibration of the field of view can provide an accurate mental impression unaffected by slight head movements. We address these questions in the following experiments.

4. PSYCHOPHYSICAL EXPERIMENTS: REALISTIC PROJECTIONS

We performed two psychophysical experiments to identify the optimal GFOV for rendering, and to determine human awareness of perspective distortions of virtual environments displayed on a computer screen. The subject's task was to compare a real-world view of a physical scene to two renderings (with different GFOVs) of a corresponding virtual replica of the same physical scene, and to identify the perspective rendering that matches the real-world view more accurately. In order to manipulate the GFOV we applied a field of view gain gF ∈ R+ to the virtual camera frustum by replacing the vertical angle fovy of the view frustum by gF · fovy. The horizontal angle fovx is manipulated accordingly with respect to the aspect ratio of the viewport.
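This manipulation can be sketched in a few lines. The tangent-based coupling of fovx to fovy via the viewport aspect ratio is an assumption on our part (it matches how gluPerspective-style projections derive the horizontal angle); the paper only states that fovx follows the ratio of the viewport, and the function name is ours:

```python
import math

def apply_fov_gain(fovy_deg, aspect, g_f):
    """Scale the vertical field of view by gain g_f and derive the
    horizontal angle from the viewport aspect ratio (width/height)."""
    fovy = math.radians(fovy_deg) * g_f
    # Assumed coupling: tan(fovx/2) = aspect * tan(fovy/2), as in
    # gluPerspective-style projections.
    fovx = 2.0 * math.atan(aspect * math.tan(fovy / 2.0))
    return math.degrees(fovy), math.degrees(fovx)

# Example: the display field of view used in the experiments was
# approximately 25.99 degrees (vertical); a gain of 1.2 widens it.
fovy, fovx = apply_fov_gain(25.99, 4.0 / 3.0, 1.2)
```

A gain of 1.0 leaves the frustum untouched, so GFOV equals DFOV in that condition.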

Since it is rather difficult to visually estimate whether a perspective distortion is the result of a field of view that is either too large or too small, we decided to use a two-interval forced-choice task for the experiment. We used the method of constant stimuli, in which the applied gains are not related from one trial to the next, but presented randomly and uniformly distributed [Ferwerda 2008]. Alternately, we presented the subjects two projections of a virtual scene, both rendered with different gains while preserving the ratio of OFOV and DFOV as described earlier (refer to Figure 1). The two visual stimuli we tested represented typical CAD, 3D modeling, or architecture applications. Since the goal of our experiments is to analyze whether subjects can discriminate different perspective projections of virtual scenes from physical counterparts, we required physical scenes which could be presented simultaneously with their virtual counterparts to the subjects. In experiment E1 (refer to Section 4.1) we used a Utah teapot, and in experiment E2 (refer to Section 4.2) we presented a replica of our real laboratory. We applied different gains to the GFOV used for the perspective projection of both virtual scenes ranging between 0.4 and 1.6 in

ACM Transactions on Graphics, Vol. 30, No. 5, Article 112, Publication date: October 2011.


112:6 • F. Steinicke et al.

steps of 0.2. We considered each gain constellation 6 times, but omitted constellations of equal gains, resulting in 7 × 6 × 6 = 252 overall trials, presented in randomized order.
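As a sanity check on the trial count, the schedule of gain constellations can be sketched as follows (the variable names and list-based representation are ours; the actual presentation software is not described in the text):

```python
import itertools
import random

# Seven gains: 0.4, 0.6, ..., 1.6 in steps of 0.2.
gains = [round(0.4 + 0.2 * i, 1) for i in range(7)]

# All ordered pairs of distinct gains; equal-gain pairs are omitted,
# leaving 7 * 6 = 42 constellations.
pairs = [(a, b) for a, b in itertools.product(gains, repeat=2) if a != b]

# Each constellation appears 6 times, presented in randomized order
# (method of constant stimuli): 7 * 6 * 6 = 252 trials overall.
trials = pairs * 6
random.shuffle(trials)
```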

The subjects had to choose between one of two possible responses, that is, “Which one of both virtual scenes is a more realistic model of the physical scene: scene A or scene B?”. Responses like “I can't tell.” were not allowed. In the case of uncertainty, when subjects cannot detect the signal, they must guess, and will be correct on average in 50% of the trials. The gain at which a subject responds “scene A” in half of the trials is taken as the Point of Subjective Equality (PSE), at which the subject perceives both projections of the scene as equally realistic or unrealistic, respectively.

We hypothesized that subjects would perceive virtual scenes as identical to physical counterparts if the DFOV equals the GFOV.

H1. The average PSE of all subjects is gF = 1.

As the gain decreases or increases from this value, the ability of the subject to detect the distorted projection of the scene increases, resulting in a psychometric curve for the discrimination performance. The amount of change in the stimulus required to produce a noticeable sensation is defined as the Just Noticeable Difference (JND). However, stimuli at values close to thresholds will often be detectable. Therefore, thresholds are considered to be the gains at which the manipulation is detected only some proportion of the time. In psychophysical experiments, usually the point at which the curve reaches the middle between the chance level and 100% is taken as the threshold. Therefore, we define the Detection Threshold (DT) for gains smaller than the PSE to be the value of the gain at which the subject has 75% probability of choosing the “scene A” response correctly, and the detection threshold for gains greater than the PSE to be the value of the gain at which the subject chooses the “scene A” response in only 25% of the trials (since the correct response was then chosen in 75% of the trials).

In this article we focus on the range of gains over which the subject cannot reliably detect the difference, and in particular the gain at which subjects perceive the physical and virtual scene as identical. The 25% to 75% range of gains gives us an interval of possible perspective projections which can be used for rendering without users noticing a distortion of the scene. The PSE gives an indication of how to map a DFOV to a GFOV such that a perspective projection appears to match the real-world scene.

4.1 Experiment E1: Single Object

In this experiment we tested subjects' ability to discriminate between different perspective distortions when a single virtual object is displayed. In the tradition of various computer graphics researchers comparing rendering algorithms, we used a virtual Utah teapot as the representative reference object in this experiment. The interaction with such a single object is a typical situation in 3D modeling or CAD applications. Ten of the participants (4 females, age 23-33, ∅: 27) of the exposure study participated in the experiment (refer to Section 3.1).

4.1.1 Material and Methods of E1.

Setup. Figure 4 illustrates the setup used in the experiment. The viewing setup in this study was similar to the setup described in Section 3.2. In order to allow subjects to compare the view to the physical teapot with the view to the virtual teapot, we displayed virtual and real view side-by-side. The physical prop representing this virtual model was a white chinaware teapot manufactured by Melitta (see Figure 4). The virtual teapot that we used consisted of 6,325 triangles and was slightly adapted by a professional 3D

Fig. 4. Illustration of the experimental setup: a subject in front of the screen with the virtual teapot on the left and the screen frame with the physical teapot on the right. The view to both teapots is identical from the subject's perspective.

modeler such that virtual and physical teapots were congruent in the submillimeter range. The angle between the view to the virtual and physical teapot was approximately 35°. To ensure equal viewing conditions, we placed the physical teapot inside the frame of another Fujitsu Siemens Scenicview P20-2 screen, which had been disassembled for the purpose of this experiment. The physical teapot was placed on a stand in such a way that the plane defined by the frame intersected the center of the teapot. As illustrated in Figure 4, we attached a black cloth to the frame and over the stand to provide the same background for virtual and real teapot. Both teapots were displayed at the same height and each rotated around the yaw axis by 45° and slightly tilted by 20° such that the extents of the teapot were visible.

We defined the initial perspective and the virtual teapot in such a way that GFOV and DFOV were identical, and that the visual angle of the physical teapot as seen by the subjects and the OFOV of the virtual teapot were identical. For half of the subjects the computer screen was to the left, and for half of the subjects to the right of the physical teapot. The virtual teapot was rendered with Crytek's CryEngine 3. As illustrated in Figure 4, we manually adjusted the global illumination parameters of the rendering in such a way that it mimicked the real-world lighting conditions affecting the real teapot in the laboratory.

Procedure. We used a within-subject design in the experiment. At the beginning of the experiment, subjects were told to view and memorize the size and shape of the physical teapot for 1 minute. During this first minute their heads were not constrained by the chin-rest. Afterwards, the subjects were positioned in front of the screens with their heads fixed by the chin-rest as described in Section 3.2, and the trials started. In each trial subjects saw two images of the teapot rendered with different perspective projections as described before. To switch between the two renderings, subjects used a PowerMate manufactured by Griffin Technology (see Figure 4). A clockwise rotation by six degrees made the second rendering active; a counter-clockwise rotation switched back to the first rendering. In order to prevent subjects from directly comparing both projections of the teapot and relying on size cues only, we displayed a blank gray image for 80ms between the renderings of the teapot as a short interstimulus interval [Rensink et al. 1997]. In each trial the subject's task was to decide, based on the stimuli, which of both teapots was a more realistic model of the real teapot. In order to


Fig. 5. Pooled results of the discrimination between (a) virtual and physical object and (b) virtual and physical scene. The x-axis and y-axis show the applied field of view gains gF, the color shows the probability that subjects estimate the virtual object or scene rendered with a larger GFOV as a more realistic model of the physical object or scene, respectively. Note that equal gains have not been tested, but were identified with a probability of 0.5.

explain the task, we started with two example trials in which clearly distorted virtual teapots were shown. In order to compare both renderings with the physical view, subjects could switch as often as desired. Subjects were told that the time was not measured, and that they should focus on the accuracy of their judgment. When a subject was confident that the currently displayed virtual teapot was a more realistic model of the physical teapot, they had to push the button on the PowerMate to indicate the end of the trial. Before the next trial started, we displayed another gray image for 240ms as an interstimulus interval. We used this transition to prevent subjects from directly comparing the visual stimuli of two subsequent trials [Rensink et al. 1997].

4.1.2 Results of E1. Figure 5(a) shows the pooled results of the subjects for all gain conditions, with the field of view gains gF applied to the first rendered teapot on the x-axis, and those applied to the second rendered teapot on the y-axis. The color shows the probability that subjects estimate the virtual object rendered with a larger GFOV as a more realistic model of the physical object. The results show that subjects have difficulty detecting perspective distortions when a single object is rendered. Figure 7(a) shows the pooled results for the discrimination task between virtual and physical teapots together with the standard error, here considering only the gain conditions in which one applied field of view gain was gF = 1.0. The x-axis shows the applied field of view gain gF that varied from gF = 1.0. The y-axis shows the probability that subjects perceive the virtual teapot as more realistic when it is rendered (1) magnified (GFOV smaller than DFOV: gF < 1) or (2) minified (GFOV greater than DFOV: gF > 1). The solid line shows the fitted sigmoid function of the form f(x) = 1/(1 + e^(a·x+b)) with real numbers a and b (start values a = -9.5 and b = 10). We found no dependency on whether we arranged the physical teapot left or right of the virtual teapot, and therefore pooled the two conditions. From the sigmoid function we determined a slight bias for the point of subjective equality (PSE = 0.9744). The PSE shows that subjects are quite accurate in discriminating between different perspective distortions when the GFOV almost matches the DFOV. For individual subjects, we found PSEs of 0.79, 0.80, 0.83, 1.00, 1.17, 1.00, 1.06, 1.10, 1.00, 1.17 (∅: 0.9932).
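For a logistic of this form, the PSE and the detection thresholds follow in closed form from the fitted parameters a and b: f(x) = 0.5 exactly where a·x + b = 0. The helper names below are ours, and the paper does not describe its fitting code; this only sketches how the reported quantities derive from the fit:

```python
import math

def sigmoid(x, a, b):
    # Psychometric function of the form used in the paper:
    # f(x) = 1 / (1 + e^(a*x + b))
    return 1.0 / (1.0 + math.exp(a * x + b))

def pse(a, b):
    # Point of Subjective Equality: f(x) = 0.5  <=>  a*x + b = 0
    return -b / a

def threshold(p, a, b):
    # Gain at which f(x) = p:  e^(a*x + b) = 1/p - 1
    return (math.log(1.0 / p - 1.0) - b) / a
```

For example, with the start values a = -9.5 and b = 10 mentioned in the text (illustrative only, not the fitted values), the PSE would lie at 10/9.5 ≈ 1.05, with the 25% and 75% points symmetric around it on the log-odds scale.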

The DTs were at gains of 0.8593 and 1.1096 for responses in which subjects judged the teapot rendered with the larger field of

Fig. 6. Illustration of a subject's side-by-side view of (a) the virtual laboratory and (b) the view to the real laboratory through the screen frame.

view as a more realistic model. These results show that gain differences within this range cannot be reliably estimated when the GFOV varies from the tested DFOV (25.99°) between 22.33° and 28.84°.

4.2 Experiment E2: Virtual Environment

In this experiment we tested the subjects' ability to discriminate between different perspective distortions when an entire virtual environment is displayed. In order to present subjects again with a view to a corresponding real-world setup, we modeled the laboratory room as a virtual environment. The exploration of such room-size virtual environments is a typical situation in architectural design and CAD applications. Twelve of the participants (4 females, age 23-56, ∅: 28) of the exposure study participated in the experiment (refer to Section 3.1).

4.2.1 Material and Methods of E2. The viewing setup in this study was similar to the setup described in Section 3.2. Again, we used a disassembled monitor side-by-side with the monitor displaying the virtual laboratory, and arranged both on a table in the laboratory. Subjects were seated in front of the display analogous to the setup described in Section 3.2. Their heads were fixed on the same chin-rest such that the distance from the eyes to the computer screen was 65cm for each subject. The perspectives to the real room and the virtual replica were identical. The virtual camera was positioned in the center of the virtual laboratory and oriented towards one of the walls (see Figure 6).


Fig. 7. Pooled results of the discrimination between (a) virtual and physical teapots and (b) virtual and physical scene, respectively. The x-axis shows the applied field of view gain gF, the y-axis shows the probability that subjects estimate the virtual teapot/laboratory rendered with a larger GFOV as a more realistic model of the physical teapot/laboratory.

Figure 6 (left) illustrates the view of a subject to the virtual setup and Figure 6 (right) a view to the real setup. The virtual replica consisted of more than 50,000 texture-mapped polygons. The texture maps were obtained from a mosaic of digital photographs of the walls, ceiling, and floor of the laboratory. All floor and wall fixtures were represented true to the original as detailed, textured 3D objects, for example, door knobs, furniture, and computer equipment. Virtual and physical laboratory were congruent in the millimeter range.

4.2.2 Results of E2. Figure 5(b) shows the pooled results of the subjects for all gain conditions, with the field of view gains gF applied to the first rendered scene on the x-axis, and those applied to the second rendered scene on the y-axis. The color shows the probability that subjects estimate the virtual scene rendered with a larger GFOV as a more realistic model of the physical scene. The results show that subjects have problems detecting perspective distortions when a virtual scene is rendered. Figure 7(b) shows the pooled results for the discrimination task between virtual and physical scene together with the standard error, here considering only the gain conditions in which one applied field of view gain was gF = 1.0. The x-axis shows the applied field of view gain gF that varied from gF = 1.0. The y-axis shows the probability that subjects perceive the virtual laboratory as more realistic when it is rendered (1) magnified (GFOV smaller than DFOV: gF < 1) or (2) minified (GFOV greater than DFOV: gF > 1). The solid line shows the same fitted sigmoid function as used in experiment E1. We found no dependency on whether we arranged the view to the real laboratory left or right of the view to the virtual replica, and therefore pooled the two conditions. From the sigmoid function we determined a slight bias for the point of subjective equality (PSE = 1.0481). This PSE shows that subjects perceive virtual and real laboratory as identical when the GFOV almost matches the DFOV. For individual subjects, we found PSEs of 0.99, 1.30, 0.70, 1.21, 1.15, 1.00, 1.00, 0.93, 1.38, 0.89, 1.00, 1.01 (∅: 1.047).

The DTs were at gains of 0.8902 and 1.2261 for responses in which subjects judged the virtual laboratory rendered with the larger field of view as a more realistic model. These results show that gain differences within this range cannot be reliably estimated when the GFOV varies from the DFOV (25.99°) between 23.14° and 31.87°.
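The angle ranges reported here and in Section 4.1.2 are simply the threshold gains multiplied by the display field of view. A quick check (the function name is ours):

```python
DFOV = 25.99  # vertical display field of view in degrees

def gain_range_to_degrees(lo, hi, dfov=DFOV):
    """Translate a pair of detection-threshold gains into GFOV angles."""
    return round(lo * dfov, 2), round(hi * dfov, 2)

# Experiment E1 (single object): gains 0.8593 .. 1.1096
e1 = gain_range_to_degrees(0.8593, 1.1096)   # -> (22.33, 28.84)

# Experiment E2 (virtual environment): gains 0.8902 .. 1.2261
e2 = gain_range_to_degrees(0.8902, 1.2261)   # -> (23.14, 31.87)
```

The E2 interval is noticeably narrower on the lower side and wider on the upper side than a naive ±20% band, which matters when interpreting the discussion of tolerable head movement below.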

5. DISCUSSION

The exposure study revealed that a deviation between GFOV and DFOV induces essential problems when judging object properties. The results from experiments E1 and E2 show that when the GFOV approximates the DFOV, users perceive the virtual scene as an identical representation of its physical counterpart, in contrast to representations which use a different GFOV. We observed only a minimal shift of the PSEs from gF = 1 for experiments E1 and E2. A simple t-test analysis showed that this shift is significant neither in experiment E1 (p = 0.88) nor in experiment E2 (p = 0.40), which supports hypothesis H1 that the perception of virtual scenes matches the perception of corresponding real-world scenes when the GFOV matches the DFOV.

In the results of both experiments (refer to Section 4.1.2 and Section 4.2.2) we observed that the detection thresholds are almost symmetrical around the PSE, which indicates that minification and magnification of scenes yield symmetrically perceived distortions.

The results of the experiments indicate that subjects could detect perspective distortion less reliably for an individual object than for an entire visual scene. Indeed, the psychometric functions (see Figures 7(a) and (b)) are similar, since we considered only cases in which one stimulus was rendered with a gain that equaled one. However, in Figure 5, which depicts all gain constellations, it can be seen that the area which corresponds to uncertain answers of the subjects is larger in Figure 5(a) (E1) than in Figure 5(b) (E2). In experiment E1, only a single object was displayed in the center of the screen with OFOV < DFOV, whereas in experiment E2 a visual scene with OFOV = DFOV was displayed and discrepancies were detected even for small amounts of perspective distortion. An explanation for this effect might be that when a detailed scene is rendered, more linear perspective cues are presented to the user. For such environments, perspective distortion affects more geometry in the image and users can detect the distortion more easily.


The results showed that for a certain degree of distortion subjects cannot reliably detect the distortion. As mentioned in Section 2, for some application scenarios users might desire a rendering with a GFOV that varies from the DFOV [Franke et al. 2008] by much more than ±20%. For such perspective distortions users will still be able to interact with the virtual scene or to roughly estimate certain scene metrics. However, our experiments have shown that when an observer of such a rendering should estimate the same metric properties as she would in the real world, it is essential to provide a GFOV that matches the DFOV. Still, we found that slight changes of the field of view in the range of 10-20% for two classes of test environments did not cause a distorted mental image of the observed models, which suggests that typical head movements in front of a computer screen after a previous field of view adjustment do not necessarily degrade overall image realism, or render initial perspective adjustments pointless.

For certain applications it is not required or desired that users perceive the virtual scene in the same way as they would perceive a corresponding real-world scene. For those environments it is not necessary to adjust the GFOV such that it matches the DFOV, and other aspects like aesthetics or task performance are more important. However, if the primary task for a user is to perceive the virtual scene as if viewing a corresponding real-world scene, then the GFOV should match the DFOV.

6. CONCLUSION AND FUTURE WORK

In this article we have presented first steps to analyze the user's ability to notice perspective distortions of virtual objects and scenes displayed on a computer screen. The exposure study showed that typical 3D modeling applications tend to provide larger GFOVs, which introduce distortions of the 3D scenes. We have described psychophysical experiments which explored the ability of subjects to discriminate between different perspective projections of a virtual 3D object and a scene relative to corresponding physical setups. The experiments confirmed that perspective projections for virtual scenes which use the same GFOV as the DFOV are perceived as the most realistic representations by subjects. Moreover, we found that GFOV and DFOV may vary by up to 10-20% for the two tested classes of virtual scenes without inducing perception of incorrect object metrics. The detection thresholds were symmetrical around the PSE for both experiments.

The findings suggest that perspective projections have to be carefully parameterized when it is important to present the observer with realistic projections of virtual 3D scenes from which correct distance metrics must be estimated, for example, as required in CAD or architectural applications.

Limitations and Future Work. While these initial findings provide a useful approach for specifying perspective projections, further studies are required to fully understand the effects of different GFOVs. First, in terms of psychophysical studies, the scope of the experiment can be expanded to include varying camera positions and orientations, or other types of planar projections, for example, orthographic or isometric projections. As explained in Section 2, distortions of the GFOV can be useful. However, perceptual benefits are highly situation dependent and not yet sufficiently investigated. For instance, in certain situations the GFOV desired by users may be much larger than the DFOV in order to get a wider view of a virtual 3D space. It is a challenging question whether the optimal FOV for different viewing situations as well as arbitrary environments can be predetermined. In this context, it may be useful to change the GFOV dynamically with respect to the visual scene.

We have shown that a GFOV that matches the DFOV is beneficial for the exploration of virtual scenes when the perception of object properties should match those perceived when viewing a real-world counterpart. However, other application scenarios may benefit from a representation with a larger GFOV, where distortions may be neglected. When changing the GFOV, the visual optic flow rate also decreases or increases proportionately [Draper et al. 2001]. Since optic flow patterns provide important visual motion cues for human self-motion perception, it is also interesting how different GFOVs affect motion perception in computer graphics applications. In the future we will pursue these questions more deeply and explore the effects of different GFOVs on spatial perception of 3D scenes.

REFERENCES

ANKRUM, D. R. 1999. Visual ergonomics in the office: Guidelines. Occup. Health Safety 68, 7, 64–74.

BURDEA, G. AND COIFFET, P. 2003. Virtual Reality Technology. Wiley-IEEE Press.

DRAPER, M. H., VIIRRE, E. S., FURNESS, T. A., AND GAWRON, V. J. 2001. Effects of image scale and system time delay on simulator sickness within head-coupled virtual environments. J. Human Factors Ergonom. Soc. 43, 1, 129–146.

FERWERDA, J. 2008. SIGGRAPH core: Psychophysics 101: How to run perception experiments in computer graphics. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques. ACM, SIGGRAPH 2008 classes.

FRANKE, I., PANNASCH, S., HELMERT, J. R., RIEGER, R., GROH, R., AND VELICHKOVSKY, B. M. 2008. Towards attention-centered interfaces: An aesthetic evaluation of perspective with eye tracking. ACM Trans. Multimedia Comput. Comm. Appl. 4, 3, 1–13.

GILLAM, B. 1980. Geometrical illusions. Amer. J. Science 242, 102–111.

HAGEN, M. A. 1980. The Perception of Pictures. (Series in Cognition and Perception). Academic Press.

HENDRIX, C. AND BARFIELD, W. 1996. Presence within virtual environments as a function of visual display parameters. Presence: Teleoper. Virtual Environ. 5, 3, 274–289.

HOLLOWAY, R. AND LASTRA, A. 1995. Virtual environments: A survey of the technology. Tech. rep. TR93-033, University of North Carolina at Chapel Hill.

KJELLDAHL, L. AND PRIME, M. 1995. A study on how depth perception is affected by different presentation methods of 3D objects on a 2D display. Computers Graph. 19, 2, 199–202.

KUHL, S. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2009. HMD calibration and its effects on distance judgments. ACM Trans. Appl. Percep. 6, 3, 1–19.

LOOMIS, J. M. AND KNAPP, J. M. 2003. Virtual and Adaptive Environments.Erlbaum, Mahwah, NJ. 21–46.

LOOMIS, J., SILVA, J. D., PHILBECK, J., AND FUKUSIMA, S. 1996. Visual perception of location and distance. Current Direct. Psych. Science 5, 72–77.

MCGREEVY, M., RATZLAFF, C., AND ELLIS, S. 1985. Virtual space and two-dimensional effects in perspective displays. In Proceedings of the 21st Annual Conference on Manual Control.

MESSING, R. AND DURGIN, F. H. 2005. Distance perception and the visual horizon in head-mounted displays. ACM Trans. Appl. Percep. 2, 3, 234–250.

MURRAY, J. 1994. Some perspectives on visual depth perception. ACM SIGGRAPH Computer Graph., Special Issue on Interactive Entertainment Design, Implementation and Adrenaline 28, 155–157.

NEALE, D. C. 1996. Spatial perception in desktop virtual environments. In Proceedings of Human Factors and Ergonomics. 1117–1121.


PIRENNE, M. H. 1970. Optics, Painting and Photography. Cambridge University Press, Cambridge, UK.

POLYS, N., KIM, S., AND BOWMAN, D. 2005. Effects of information layout, screen size, and field of view on user performance in information-rich virtual environments. In Proceedings of the Symposium on Virtual Reality Software and Technology (VRST). ACM, 46–55.

RENSINK, R. A., O'REGAN, J. K., AND CLARK, J. J. 1997. To see or not to see: The need for attention to perceive changes in scenes. Psych. Science 8, 5, 368–373.

RIES, B., INTERRANTE, V., KAEDING, M., AND PHILLIPS, L. 2009. Analyzing the effect of a virtual avatar's geometric and motion fidelity on egocentric spatial perception in immersive virtual environments. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST). 59–66.

ROLLAND, J., GIBSON, W., AND ARIELY, D. 1995. Towards quantifying depth and size perception in virtual environments. Presence 4, 1, 24–48.

ROSENBERG, C. AND BARFIELD, W. 1995. Estimation of spatial distortion as a function of geometric parameters of perspective. IEEE Trans. Syst. Man Cybernet. 25, 1323–1333.

SHREINER, D. 2009. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Versions 3.0 and 3.1, 7th Ed. Addison-Wesley.

STEINICKE, F., BRUDER, G., KUHL, S., WILLEMSEN, P., LAPPE, M., AND HINRICHS, K. H. 2009. Judgment of natural perspective projections in head-mounted display environments. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST). 35–42.

STROEBEL, L. D. 1999. View Camera Technique. Focal Press.

VISHWANATH, D., GIRSHICK, A. R., AND BANKS, M. S. 2005. Why pictures look right when viewed from the wrong place? Nature Neurosci. 8, 1401–1410.

WARREN, R. AND WERTHEIM, A. H. 1990. Perception & Control of Self-Motion. Lawrence Erlbaum Associates, Mahwah, NJ.

WILLEMSEN, P., COLTON, M. B., CREEM-REGEHR, S., AND THOMPSON, W. B. 2009. The effects of head-mounted display mechanical properties and field-of-view on distance judgments in virtual environments. ACM Trans. Appl. Percept. 2, 6.

WITMER, B. G. AND KLINE, P. B. 1998. Judging perceived and traversed distance in virtual environments. Presence: Teleoper. Virtual Environ. 7, 2, 144–167.

ZORIN, D. AND BARR, A. H. 1995. Correction of geometric perceptual distortions in pictures. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH). 257–264.

Received February 2011; accepted June 2011
