

RENDERING A SERIES OF 3D DYNAMIC VISUALIZATIONS IN (GEOGRAPHIC) EXPERIMENTAL TASKS

Pavel Ugwitz, Zdeněk Stachoň, Pavel Pospíšil

Mgr. Pavel Ugwitz; Department of Information and Library Studies; Arna Nováka 1, 602 00 Brno, CZ; Department of Geography, Faculty of Science, Masaryk University; Kotlářská 2, 611 37 Brno, CZ; [email protected]

Mgr. Zdeněk Stachoň, Ph.D.; Department of Geography, Faculty of Science, Masaryk University; Kotlářská 2, 611 37 Brno, CZ; Department of Information and Library Studies; Arna Nováka 1, 602 00 Brno, CZ; [email protected]

Mgr. Pavel Pospíšil; Department of Geography, Faculty of Science, Masaryk University; Kotlářská 2, 611 37 Brno, CZ; [email protected]

Abstract

Real-time 3D visualization engines have evolved both in their capacity to render visual fidelity and in their openness to scripting (photorealism, programmable shaders and object properties, dynamic visualizations). This contribution looks at the potential and limitations of such implementations with an experimental/educational scope in mind: rendering multiple 3D (cartographic) visualizations of maps/scenes in a controlled, experiment-purposed application, with an emphasis on the transition from one visualization to another.

While 3D dynamic visualizations offer an increase in choices and customization in display and function (as opposed to traditional/2D media), this comes at the price of rising implementation costs. Both the dynamicity and the three-dimensionality of the presented data require algorithms to process them (be it terrains, visibility models, collision models, graphics shaders, or other components). Because such algorithms require initialization time and run-time resources, their optimization is crucial; there are also constraints and costs to where and how they can be used. Good practices of dynamic experimental scene implementation are discussed; for scene-to-scene transitions, multiple approaches are weighed against each other (multi-scene, in-scene with object transition, in-scene with user viewport transition). Techniques of static data retention and in-scene object manipulation are discussed. The conclusions are grounded in previous and current implementations of such tasks (an educational topographic map application, and ongoing research on cross-cultural differences in foreground-background visual perception).

1 MOTIVATION

This contribution discusses implementing real-time 3D engine transitions for the needs of displaying visually complex, often dynamic visualizations. No single research implementation is attached to this paper as the solution, because the solutions offered here are multiple and should be research-design-agnostic. This goes hand in hand with ensuring that the presented solutions cover a broad palette of research implementations in real-time 3D. In other words, per good research practices, the technical implementation should be adapted (and readily adaptable) to the research design in mind, and not the other way around.

That is not to say either implementation examples or user interface & user experience (UI & UX) considerations that tie to them are omitted from the text; on the contrary, they are quite frequent, and they drive the points forward - but they should be considered for what they are: hook points in the current paradigms of computer graphics and UX, nothing more.

In previous 3D experiments of ours, users were presented with large-scale, continuous, or varying data (indoor/outdoor environments, 2D/3D maps, 3D models, and text stimuli). Some portions of these experiments were dynamic: the contents of the visualization presented to users were altered throughout the experiment (be it by the user, an administrator, or an event-based script). Choosing a suitable implementation to execute such alterations was necessary; in different scenarios, we used different solutions, as some proved more suitable than others. This is discussed later in the Application chapter.

First, though, let us frame the subject by discussing what constitutes complexity in 3D/dynamic visualizations (the canvas and the visualization). This is followed by an overview of existing research (user movement in virtual environments and virtual reality (VR), users' understanding of virtual space). The Implementation chapter covers three approaches to implementing user/scene transitions, along with some real-time 3D engine processes to keep in mind; and lastly, existing research applications are described.

1.1 The canvas and the visualization

Since the beginning of the visualization discipline, humans have been constrained by the properties of their chosen canvas and its appropriateness for what they actually wanted to visualize (Durand, 2002; Stoops, 1966). As a simplification, one can reduce the problem to two essential variables that predetermine the viability of a visualization: the properties of the canvas (size, granularity, technology, and others) and the method of the intended visualization (schematic or realistic depiction, flat or in-perspective space, amount of detail to convey). In a contemporary context, the "canvases" in question are LCD displays or virtual reality head-mounted displays (VR HMDs), and the visualization methods we concern ourselves with are either 2D raster graphics or 3D geometries.

The supposed "canvases" of today are a given; and while events like new developments in VR HMDs push the technology forward, one step closer to the visual fidelity of human sight (Epstein & Rogers, 1995), we are no hardware developers ("Comparison of virtual reality headsets", 2020). So let us concern ourselves with presenting complex visualizations to the user - that is, 3D continuous/dynamic spatial data (Fig. 1).

Figure 1. (Left) Photogrammetry 3D model of a castle ("Hrad Landštejn", 2020) - a very vertically differentiated structure, where a single viewpoint fails to provide a full understanding of the castle's spatial layout. (Right) 3D map of the surrounding terrain, which goes off to vast, detail-losing distances ("Landštejn Castle - Mapy.cz", 2020).

With visualizations of such a nature, we assume the resolution of the available canvas may never be enough to convey all the needed information to the user in one instance (not without panning, zooming, switching scenes, or other means of progressively modifying what is displayed on the canvas). Therefore, if the view of the visualization is to be purposefully altered (either intentionally by the user through a user interface (UI), or without the user's knowledge/contribution, as in a controlled experiment), there are ways to achieve that.

1.2 Existing research

There is a body of research that ties into the user aspects of scene transitions, e.g., UX, UI, usability. As new user interfaces and means of moving through virtual environments are introduced (Caserman et al., 2018), these are compared one against the other - be it in basic research (Vlahovic et al., 2018) or in specific/expert tasks (Kubíček et al., 2017). Besides continuous movement, some controllers/implementations also allow for non-continuous movement - teleportation; the software side of such implementations can also be put up for evaluation (Bozgeyikli et al., 2016). User-friendly implementations try to ensure the user retains their sense of location and spatial awareness even after teleporting; other research designs try to deliberately challenge the user's orientation in virtual space (Koltai et al., 2020).

User behavior and interface usage are measured, analyzed, and evaluated (Herman et al., 2018). Interfaces are developed to convey compelling usability metaphors (Sweeney, 2016). And so on: the scope of applicability is vast, and the references mentioned here merely scratch the surface of the subject. What they share, however, is their relation to scene transitions - a technical topic not often described in detail in research papers.

2 IMPLEMENTATION

Our implementations were realized using the Unity 3D engine, version 2018.4 LTS. Specifically, we developed a series of Unity editor script extensions called Toggle Toolkit to make object/user transitions as simple as possible (Ugwitz et al., 2020). The approach should, however, be somewhat engine-agnostic, as similar principles apply across other engines.

Moving a user inside a scene, moving other in-scene objects in relation to the user ("Transform.position"), or moving a user across different scenes ("Application.LoadLevel") - all of this is simple, as far as 3D engine application programming interfaces (APIs) are concerned. What is not so simple is the processing overhead that comes with it. 3D engines generally achieve real-time performance by rendering objects, shadows, terrain, particles, and other graphic elements that are far away in lower detail, or not at all (a technique generally known as Level of Detail (LoD)). For the most part, such LoD computations are taken care of by the internal algorithms of the 3D engine; engine rendering pipeline scriptability ("Scriptable Render Pipeline"), if any, allows for some customization of this. However, the algorithms keep running as the user moves, or as animated objects move in and out of the user's vicinity: LoDs are determined, and based on their changes, graphics detail is redrawn. The more LoDs have to be recalculated (e.g., after a distant leap caused by teleporting the user), the more costly this is for the CPU and the hard drive (should textures be involved).
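To make the distinction concrete, a minimal Unity C# sketch of the three transition primitives follows. The class and field names are illustrative, and SceneManager.LoadScene is used in place of the legacy "Application.LoadLevel" cited above (it is the replacement API in Unity 5.3 and later):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// A minimal sketch of the three transition primitives discussed above.
// Names (TransitionExamples, targetScene, etc.) are illustrative, not from the paper.
public class TransitionExamples : MonoBehaviour
{
    public Transform user;        // the user's rig/camera root
    public Transform sceneProp;   // some in-scene object to move
    public string targetScene;    // name of a scene added to Build Settings

    // In-scene user viewport transition: rewrite worldspace coordinates.
    public void TeleportUser(Vector3 destination)
    {
        user.position = destination; // "Transform.position"
    }

    // In-scene object transition: move scenery relative to the (static) user.
    public void MoveProp(Vector3 offset)
    {
        sceneProp.position += offset;
    }

    // Multi-scene transition: unload the current scene, load another.
    public void LoadNextScene()
    {
        SceneManager.LoadScene(targetScene);
    }
}
```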

When object LoDs suddenly "pop in" (the CPU is too slow to process what is needed in real time), or when an empty surface takes a moment to load its texture (loading from the hard drive is too slow for real time), the user witnesses unwanted glitches in the 3D visualization that break immersion - something generally undesired (Ghosh et al., 2018). Thus, there are workarounds to prevent this. Furthermore, other resource-heavy components, e.g., mesh colliders ("Mesh Collider") and other solutions based on geometric algorithms, may benefit from optimization as well, if they are directly accessible from within the 3D engine.

Figure 2. (Top) Multi-scene transition, where the user is transferred from one scene to a wholly different one. (Middle) In-scene object transition, where the user remains in the same place while portions of the 3D space are manipulated around them. (Bottom) In-scene user viewport transition, where the user is teleported within the bounds of a single scene. This figure was illustrated using Icograms Designer.


Fig. 2 illustrates the three approaches to 3D scene transitions. We will now cover detailed technical aspects of their successful implementation.

2.1 Multi-scene transition

This is a simple solution where portions of an intended virtual world are separated into different scenes. Before the advent of open-world solutions, this was (and in some cases still is) the most common way of making the transition from one area to another: upon arriving at one scene's end, the next one would load up.

Historically, it made sense to divide the virtual world into such chunks for several reasons. For one, 3D engine editors did not offer a feature to edit a scene collaboratively - so by creating separate scenes, the editing workload could be distributed across several people (Unity still does not have standardized collaborative editing functionality; Unreal Engine does ("Multi-User Editing", 2020)). Furthermore, separating a vast virtual space into smaller chunks can make it easier for the engine to manage: fewer objects in a scene overall means less CPU time spent on LoD and other distance/zoning algorithms, and a unifying visual theme in a scene (assuming the scene portrays a location with a coherent theme) can help with texture allocation, making most, if not all, textures fit into memory, effectively eliminating hard disk reads for loading textures.

Another advantage of a separate scene is that it is a "blank slate". It can be set up to contain the scripts, effects, and lighting set-ups it needs, without having to consider the set-ups of other portions of the virtual world (something that would be implementation-costly and error-prone in an open-world scene: when enabling specific visual/scriptable features in one portion of the virtual world, the others would have to be checked to ensure those features are disabled).

The inherent disadvantage of the multi-scene approach can be its slow loading times (especially if large objects like detailed 3D meshes and textures are included). When a new scene loads, all memory objects of the current scene are destroyed, and the next scene initializes its own. This also means that if crucial data attached to the user in the previous scene had to be transferred into the next one, the data would have to be saved and reloaded alongside the scene transition process (e.g., by implementing an XML writing/loading script).
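A minimal sketch of such a save/load step follows; the UserState fields and file name are illustrative assumptions, not taken from the paper:

```csharp
using System.IO;
using System.Xml.Serialization;
using UnityEngine;

// Hypothetical state to be carried across a scene load.
[System.Serializable]
public class UserState
{
    public float taskTime;
    public int lastSceneIndex;
}

public static class StatePersistence
{
    // File lives in Unity's persistent data folder; the name is an assumption.
    static string FilePath => Path.Combine(Application.persistentDataPath, "userState.xml");

    // Serialize the state to XML before triggering the scene transition.
    public static void Save(UserState state)
    {
        var serializer = new XmlSerializer(typeof(UserState));
        using (var stream = File.Create(FilePath))
            serializer.Serialize(stream, state);
    }

    // Read the state back in the newly loaded scene.
    public static UserState Load()
    {
        if (!File.Exists(FilePath)) return new UserState();
        var serializer = new XmlSerializer(typeof(UserState));
        using (var stream = File.OpenRead(FilePath))
            return (UserState)serializer.Deserialize(stream);
    }
}
```

An alternative in Unity is to mark a carrier object with DontDestroyOnLoad so it survives the scene load; file-based serialization has the added benefit of doubling as an experiment log.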

2.2 In-scene object transition

The user remains at the same worldspace coordinates while a transition happens around them. In most cases, these are dynamic alterations to the surrounding visuals (be it continuous animations or instant events). It makes sense to structure virtual worldspace interfaces to react to the user this way (e.g., "XR Interaction Toolkit", 2020); it would be a cumbersome effort to "teleport" the user by moving the whole virtual scene around them.
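A minimal sketch of this pattern, assuming the scenery variants are pre-placed in the scene and initially inactive (the names and trigger are illustrative):

```csharp
using UnityEngine;

// In-scene object transition: the user stays put while pre-placed
// variants of the scenery are activated/deactivated around them.
public class SceneryToggler : MonoBehaviour
{
    public GameObject[] variants; // co-located scenery variants, all initially inactive
    int current = -1;

    public void ShowVariant(int index)
    {
        if (current >= 0) variants[current].SetActive(false); // hide the old variant
        variants[index].SetActive(true);                      // reveal the new one
        current = index;
    }
}
```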

2.3 In-scene user viewport transition

This approach is the correct way to teleport the user within a scene: their worldspace coordinates are changed to move them to a defined area of the scene. Conventionally, a teleporter (unless it is meant to be unknown to the user) should visually communicate that it is a teleporter (be it through standardized color schemes, effects, or other visual leads), and it should teleport the user upon entry or interface usage. The teleportation function of the standard VR HTC Vive controller utilizes this principle (Van de Kerckhove, 2019).

The HTC Vive controller teleportation implementation is an example of preventing/solving some of the scene transition issues mentioned above. There is a fade-in/fade-out animation upon teleportation - something that could hide potential LoD/texturing glitches; the controller also visibly indicates the place the user will be teleported to, ensuring they retain their spatial orientation. Large-distance teleporting using the HTC Vive controller is also severely limited.
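A hedged sketch of such a fade-wrapped teleport follows; the fullscreen fade is abstracted into a CanvasGroup overlay, and the names and timing are our assumptions rather than the Vive implementation itself:

```csharp
using System.Collections;
using UnityEngine;

// Fade out, move the user, fade back in - hiding potential LoD/texture
// glitches behind the fade, in the spirit of the Vive behavior above.
public class FadingTeleporter : MonoBehaviour
{
    public Transform user;          // user rig root; rotation is left untouched
    public CanvasGroup fadeOverlay; // fullscreen black UI image
    public float fadeDuration = 0.2f;

    public IEnumerator TeleportWithFade(Vector3 destination)
    {
        yield return Fade(0f, 1f);   // fade to black
        user.position = destination; // rewrite worldspace coordinates only
        yield return Fade(1f, 0f);   // fade back in once the new view settles
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            fadeOverlay.alpha = Mathf.Lerp(from, to, t / fadeDuration);
            yield return null;
        }
        fadeOverlay.alpha = to;
    }
}
```

The sequence would be started with StartCoroutine(TeleportWithFade(destination)); note that only the position is written, leaving the rotation to the VR hardware, in line with the practice discussed below.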

While manipulating the user's worldspace coordinates is to be expected, manipulating their viewport rotation would be considered bad practice. In VR systems (Hearney, 2019), rotation is evaluated by the VR hardware and then sent to the 3D engine; and even when a scene visualization is portrayed on an LCD monitor, changing the user's rotation could hamper their spatial orientation - unless that is the intent of the research.


3 APPLICATION

The first and possibly most straightforward implementation of the in-scene user viewport transition involved VR exploratory research, where users were tasked to explore a vast outdoor area on their own; upon finishing, they were to be teleported to a watchtower, regardless of the location in which they decided to end their exploration. The implementation involved a script that mapped a keypress to the watchtower's virtual world coordinate; when the experiment administrator pressed the key, the user was teleported into the watchtower.
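A minimal reconstruction of that script might look as follows (the key binding and watchtower coordinate are illustrative assumptions):

```csharp
using UnityEngine;

// Administrator-triggered teleport: a keypress maps to a fixed
// worldspace coordinate (the watchtower), as described above.
public class AdminTeleport : MonoBehaviour
{
    public Transform user;                               // user rig root
    public Vector3 watchtowerPosition = new Vector3(120f, 35f, -48f); // hypothetical

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.T))                 // pressed by the administrator
            user.position = watchtowerPosition;
    }
}
```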

Another simple implementation was a multi-scene transition consisting of two VR scenes, with no user movement apart from head rotation. The first scene had a control script attached to it, by which the user was presented with a set of words to remember. Once all the words had been shown, the user was transferred to the second scene and asked to recall the previously shown words. Since the two scenes had vastly different lighting set-ups (something rather difficult to accommodate within a single Unity scene, which offers no lighting profile switching), splitting the visualization into two scenes was the right decision. More complex examples follow (Fig. 3).

Figure 3 (Left) In-scene object transition experiment (figure-background eye-tracking), where 30 scenes were shown in succession, experiment design per Čeněk et al. (2019), depiction by Fitz (2020). (Right) In-scene object transition experiment (contour lines) with virtual UI altering the visualization on the virtual table, by Šašinka et al. (2019).

There is a pending eye-tracking study on figure-background attention preferences in cross-cultural contexts by Čeněk et al. (2019). This study was to be replicated in VR eye-tracking while preserving the original research design, i.e., with 30 scenes presented to the user in succession. Since the users were mere observers in the experiment, their only range of motion was their head. Thirty non-demanding landscape visualizations were created, all within one scene, all placed at the same coordinates. These visualizations were initially deactivated (visually hidden). Once the experiment started, a switching timer script proceeded to show them to the user, displaying each for four seconds. The user remained at the same vantage point the whole time, making this an object transition experiment. Because this was an eye-tracking experiment, i.e., something that requires accurate timing of the presented stimuli with no delays or graphics artifacts, all the 3D models and textures present in the 30 subsequent visualizations were purposefully preloaded before their presentation to the user ("Resources.LoadAll"). We ensured the preloaded visuals were small enough to fit the experimental PC's memory (e.g., 16 GB RAM, 8 GB VRAM).
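A sketch of the switching timer logic, under stated assumptions: the folder name, field names, and the use of a coroutine are ours; only the Resources.LoadAll call and the four-second interval come from the description above:

```csharp
using System.Collections;
using UnityEngine;

// Preload all stimulus assets, then reveal co-located stimulus objects
// one by one on a fixed timer.
public class StimulusSequence : MonoBehaviour
{
    public GameObject[] stimuli;      // 30 co-located visualizations, all inactive
    public float displaySeconds = 4f;

    IEnumerator Start()
    {
        // Force-load models/textures from a Resources folder into memory
        // before presentation ("Stimuli" is a hypothetical folder name).
        Resources.LoadAll("Stimuli");

        foreach (var stimulus in stimuli)
        {
            stimulus.SetActive(true);
            yield return new WaitForSeconds(displaySeconds);
            stimulus.SetActive(false);
        }
    }
}
```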

The most technologically sound implementation of ours is described by Šašinka et al. (2019) - a solution intended for collaborative education (in this case, on the subject of contour lines). Not only does the implementation involve an intertwined virtual UI that allows for a variety of 2D/3D visualization adjustments of the subject presented on the table, but there is also a network layer to it: while all collaborating users technically run their own instance of the visualization on their own PC, the network layer ensures that if one user alters the look of their instance, the change propagates to the other users' instances as well. From the perspective of scene transitions, this is again an in-scene object transition experiment; the fact that the majority of the user's surroundings is a virtual room that remains unchanged (the only dynamic elements being the visualization on the table and the potential networked co-users) ensures the user stays grounded in the virtual space they occupy. This implementation did offer a dynamic, animated transition in which the 3D visualization of the terrain would be switched to 2D, or vice versa; during this one-second transition, the highly detailed 3D terrain would gradually flatten into 2D. Because the terrain offered interactivity in terms of the ability to place items onto it, including a mesh collider component was crucial; during the transition animation, however, the mesh collider would be forced to recalculate every rendered frame, dragging down the CPU and the framerate of the application. Therefore, for optimization purposes, the mesh collider terrain component was scripted to be disabled during the animated 3D/2D terrain transition and re-enabled once the animation finished.
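A minimal sketch of that optimization (the animation trigger name and the exposed duration are illustrative assumptions):

```csharp
using System.Collections;
using UnityEngine;

// Disable the terrain's mesh collider for the duration of the animated
// 3D/2D transition so it is not recalculated every frame, then restore it.
public class TerrainTransition : MonoBehaviour
{
    public MeshCollider terrainCollider;
    public Animator terrainAnimator;
    public float transitionSeconds = 1f;   // one-second flattening, per the text

    public IEnumerator FlattenTerrain()
    {
        terrainCollider.enabled = false;            // stop per-frame collider rebuilds
        terrainAnimator.SetTrigger("FlattenTo2D");  // hypothetical animation trigger
        yield return new WaitForSeconds(transitionSeconds);
        terrainCollider.enabled = true;             // restore interactivity
    }
}
```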

CONCLUSION

This contribution raised the issue of working with 3D visualizations that are too complex in shape, or that change dynamically in time - which leads to the need to adjust the content of one's viewport so that most of the information present in the visualization is conveyed (Fig. 1). Whether the user has a full or limited range of motion and UI usage to achieve this is not relevant here, as defining that belongs to a research design built around such an issue. We presented the most common scene transition issues and offered three approaches to object/user transition in a 3D visualization (Fig. 2). These were then discussed in detail with regard to their shortcomings, advantages, and possibilities; lastly, some of our previous research designs utilizing these approaches were described, including the specifics of their implementation. We hope that such insight into the technical side of 3D/VR experiment creation can aid future experimental work.

ACKNOWLEDGMENT

This publication was supported by the Czech Science Foundation (GC19-09265J: The influence of socio-cultural factors and writing system on perception and cognition of complex visual stimuli).

Preparation of related experiments was made available by the research infrastructure of Virtual Geographic Environments Lab, Department of Geography, Faculty of Science, Masaryk University, and HUME Lab Experimental Humanities Laboratory, Faculty of Arts, Masaryk University.

REFERENCES

Bozgeyikli, E., Raij, A., Katkoori, S., & Dubey, R. (2016). Point & Teleport Locomotion Technique for Virtual Reality. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. https://doi.org/10.1145/2967934.2968105

Caserman, P., Garcia-Agundez, A., Konrad, R., Göbel, S., Steinmetz, R. (2018). Real-time body tracking in virtual reality using a Vive tracker. Virtual Reality, 23(2), 155–168. https://doi.org/10.1007/s10055-018-0374-z

Čeněk, J., Šašinka, Č., Tsai, J. (2019). Cultural Variations in Global and Local Attention and Eye-Movement Patterns during the Perception of Complex Visual Scenes: Comparison of Czech and Taiwanese University Students. Manuscript submitted for publication.

Durand, F. (2002). Limitations of the medium and pictorial techniques. Perceptual and Artistic Principles for Effective Computer Depiction, Course Notes for ACM SIGGRAPH 2002, 27-45.

Epic Games (2020, May 30). Multi-User Editing. Unreal Engine Documentation. https://docs.unrealengine.com/en-US/Engine/Editor/MultiUser/index.html

Epstein, W., Rogers, S. J. (1995). Perception of space and motion. San Diego: Academic Press. https://doi.org/10.1016/B978-0-12-240530-3.X5000-7

Fitz, J. (2020). Interkulturní výzkum v imerzivní virtuální realitě: možnosti využití eyetrackingu [Intercultural research in immersive virtual reality: possibilities of eye-tracking use]. [Unpublished bachelor's thesis]. Masaryk University.

Ghosh, S., Winston, L., Panchal, N., Kimura-Thollander, P., Hotnog, J., Cheong, D., Abowd, G. D. (2018). NotifiVR: Exploring Interruptions and Notifications in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics, 24(4), 1447–1456. https://doi.org/10.1109/tvcg.2018.2793698

Hearney, D. (2019, Apr 29). How VR Positional Tracking Systems Work. UploadVR. https://uploadvr.com/how-vr-tracking-works/

Herman, L., Řezník, T., Stachoň, Z., & Russnák, J. (2018). The Design and Testing of 3DmoveR: an Experimental Tool for Usability Studies of Interactive 3D Maps. Cartographic Perspectives, (90), 31-63. https://doi.org/10.14714/CP90.1411

Icograms. Icograms Designer. https://icograms.com/


Koltai, B. G., Elkjær Husted, J., Vangsted, R., Noes Mikkelsen, T., & Kraus, M. (2020). Procedurally Generated Self Overlapping Mazes in Virtual Reality. In Interactivity, Game Creation, Design, Learning, and Innovation: 8th EAI International Conference, ArtsIT 2019, and 4th EAI International Conference, DLI 2019. Springer.

Krossapp Imagery (2020, May 30). Hrad Landštejn. http://3dcesko.cz/landstejn

Kubíček, P., Šašinka, Č., Stachoň, Z., Herman, L., Juřík, V., Urbánek, T., & Chmelík, J. (2017). Identification of altitude profiles in 3D geovisualizations: the role of interaction and spatial abilities. International Journal of Digital Earth, 12(2), 156–172. https://doi.org/10.1080/17538947.2017.1382581

Landštejn Castle - Mapy.cz (2020, May 30). Retrieved from https://en.mapy.cz/zakladni?x=15.2279273&y=49.0235908&z=17&m3d=1&height=659&yaw=-0&pitch=-21&source=base&id=1834233

Stoops, J. D. (1966). Media and the Communication of Aesthetic Ideas: Potentials & Limitations. Art Education, 19(4), 23-25. https://doi.org/10.2307/3190795

Sweeney, T. (2016, February 4). Build for VR in VR. Unreal Engine. https://www.unrealengine.com/en-US/blog/build-for-vr-in-vr

Šašinka, Č., Stachoň, Z., Sedlák, M., Chmelík, J., Herman, L., Kubíček, P., Šašinková, A., Doležal, M., Tejkl, H., Urbánek, T., Svatoňová, H., Ugwitz, P., Juřík, V. (2019). Collaborative immersive virtual environments for education in geography. ISPRS International Journal of Geo-Information, 8(1), 3. https://doi.org/10.3390/ijgi8010003

Ugwitz, P., Šašinková, A., Šašinka, Č., Stachoň, Z., Juřík, V. (2020). Toggle Toolkit: A Tool for Conducting Experiments in Unity Virtual Environments. Manuscript submitted for publication.

Unity Technologies. Application.LoadLevel. Unity. https://docs.unity3d.com/ScriptReference/Application.LoadLevel.html

Unity Technologies. Mesh Collider. Unity. https://docs.unity3d.com/Manual/class-MeshCollider.html

Unity Technologies. Resources.LoadAll. Unity. https://docs.unity3d.com/ScriptReference/Resources.LoadAll.html

Unity Technologies. Transform.position. Unity. https://docs.unity3d.com/ScriptReference/Transform-position.html

Unity Technologies. Scriptable Render Pipeline. Unity. https://docs.unity3d.com/Manual/ScriptableRenderPipeline.html

Unity Technologies. XR Interaction Toolkit. Unity. https://docs.unity3d.com/Packages/[email protected]/

Van de Kerckhove (2019, Jan 4). HTC Vive Tutorial for Unity. Raywenderlich.com. https://www.raywenderlich.com/9189-htc-vive-tutorial-for-unity

Vlahovic, S., Suznjevic, M., Kapov, L. S. (2018). Subjective Assessment of Different Locomotion Techniques in Virtual Reality Environments. 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX). https://doi.org/10.1109/qomex.2018.8463433

Wikimedia Foundation (2020, May 30). Comparison of virtual reality headsets. Wikipedia. https://en.wikipedia.org/wiki/Comparison_of_virtual_reality_headsets
