
Virtual Environments: System Architectures

Anthony Steed, Simon Julier
Department of Computer Science, University College London

http://www.cs.ucl.ac.uk/teaching/VE

Overview

• Problem Statement & Requirements

• Data Representations (Contents)

• Execution Models (Dynamics)

• Latency

Reminder - VE is an Immersive, Mediated Communication Medium

[Diagram: users, interface devices and the environment; the synthetic and real environments form the mediated medium.]

Ellis’s Content, Geometry, Dynamics Model

• Contents is data
• Dynamics is code or rules to change content
• Today we will look at common data and code representations

[Diagram: Environment = Contents (actors and objects) + Geometry (dimensions, metrics and extent) + Dynamics (interaction rules).]

“Virtual Environments and Environmental Instruments”, S. Ellis, 1996

Sources and Sinks of Data

• Input “Data”
– Data from trackers (e.g. 3D position plus 3D rotation for head and hand = 12D data)
– Data from input devices (e.g. 2D joystick, buttons)
– Possibly audio, haptic, physiological input

• Output “Data”
– Display systems, audio, video

Displays Have Different Requirements

• Video (N copies – for stereo and multiple screens)
– Maintain copy of visual state
– Render as fast as possible (~60Hz)
– Synchronise with other renders

• Audio
– Maintain copy of audio state
– Render without glitches (requires fast interrupt)

• Haptics
– Maintain copy of haptic data
– Render as fast as possible (~1000Hz)

“Under the Hood” of the Code

• We will find code modules for many different tasks:
– Managing data and assets
– Reading devices
– Audio rendering
– Video rendering
– User input
– Networking
– Complex calculations such as physics (see seminar tomorrow!)

Overview

• Problem Statement & Requirements

• Data Representations (Contents)

• Execution Models (Dynamics)

• Latency

Graphs

• A graph consists of vertices and edges

• Vertices define the “state” information

• Edges define “relationships”

• Scene-graphs are directed and acyclic

[Diagram: an arbitrary graph, a directed graph, and a directed acyclic graph.]

Scene-graphs

• In a scene-graph, vertices are often called nodes
– Store state information
– Can include arbitrary property information

• All graphs have a root node which defines the base of the tree

• All other nodes are divided into two types:
– Group nodes
– Leaf nodes

[Diagram: a tree with the root node at the top, group nodes below it, and leaf nodes at the bottom.]

Group Nodes

• Group nodes have multiple nodes as children
– Child nodes can be other group nodes or leaf nodes

• A group node applies common state information to multiple objects
– State information propagates down the graph

• Examples include:
– Transformations
– Switch nodes
– Effects (bump mapping, scribing, specular highlights)
– (In recent times) Shader programs
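The "state propagates down the graph" idea can be sketched in a few lines. This is a toy illustration, not a real scene-graph API: the class and function names (`Group`, `Leaf`, `world_positions`) are invented, and only translation offsets are propagated, where a real system would accumulate full transformation matrices.

```python
# Toy scene-graph sketch (hypothetical names, not a real library API):
# each group node holds a translation offset that is applied to all of
# its children, so a leaf's world position accumulates every offset on
# the path down from the root.

class Group:
    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.offset = offset          # state shared by all children
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

class Leaf:
    def __init__(self, name):
        self.name = name

def world_positions(node, parent=(0.0, 0.0, 0.0), out=None):
    """Traverse from the root, accumulating group offsets down the graph."""
    if out is None:
        out = {}
    if isinstance(node, Group):
        pos = tuple(p + o for p, o in zip(parent, node.offset))
        for child in node.children:
            world_positions(child, pos, out)
    else:
        out[node.name] = parent       # leaf inherits the accumulated state
    return out

root = Group("root")
car = root.add(Group("car", offset=(10.0, 0.0, 0.0)))
car.add(Leaf("body"))
wheel = car.add(Group("wheel", offset=(1.0, -0.5, 0.0)))
wheel.add(Leaf("tyre"))

print(world_positions(root))
# body inherits the car's offset; tyre inherits car + wheel offsets
```

Moving the whole car is now a single change to one group node, which is exactly why transformations are the classic example of a group node.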

Examples (OpenSceneGraph)

[Images: anisotropic lighting, scribing, cartoon shading, bump mapping.]

Leaf Nodes

• Leaf nodes cannot have children
• State information relates to the appearance of specific objects
• Examples include:
– Geometry
– Image-based rendering (billboards, impostors)

Examples (OpenSceneGraph)

[Images: impostors and billboards.]

Overview

• Problem Statement & Requirements

• Data Representations (Contents)

• Execution Models (Dynamics)

• Latency

Dynamics

• These are the rules of interaction between the contents

• These can be:
– Differential equations of Newtonian dynamics to describe kinematic and dynamic relationships
– Grammatical rules for pattern-matched triggered actions

• There are many different ways of doing this, from imposing numerical approximations to Newtonian physics through to plain old C++ / Java / XVR coding

Dynamics Representation Model

• Separate from the contents, how do we want to represent our dynamics?

• This leads to quite critical computer science questions about the separation between code and data

• From a scene author’s point of view there are two basic models:
– Standalone process model
– Within-scene-graph model

Implementing Dynamics as Standalone Processes

• Dynamics implemented as separate processes / threads

• Can change the state of the graph in arbitrary ways
– Change values of nodes
– Add / remove nodes
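A minimal sketch of the standalone-process model, with some assumptions: a plain dictionary stands in for the scene graph, and a lock guards it because the dynamics thread and the renderer would otherwise race. All the names here are illustrative.

```python
# Standalone-process dynamics sketch: a separate thread mutates the
# shared scene graph directly, so access must be synchronised.

import threading

scene_graph = {"ball": {"y": 10.0}}   # toy stand-in for a scene graph
lock = threading.Lock()
stop = threading.Event()

def dynamics():
    """Standalone dynamics process: changes node values in arbitrary ways."""
    while not stop.is_set():
        with lock:
            node = scene_graph["ball"]
            node["y"] = max(0.0, node["y"] - 1.0)   # crude falling motion
            if node["y"] == 0.0:
                stop.set()            # ball has reached the ground

t = threading.Thread(target=dynamics)
t.start()
t.join()

with lock:
    print(scene_graph["ball"]["y"])   # -> 0.0
```

Note how the dynamics code knows nothing about rendering; it simply rewrites node values, which is both the appeal and (as the next slide argues) the problem of this model.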

Problems of Standalone

• Treats the scene-graph as a black box, which makes initial scene creation very easy (load nodes into the scene-graph, change a few fields on nodes)

• Composing scenes together, or instantiating existing objects, is very hard

• You end up with quite complicated code

• The code probably has variables in local scope which “should” be in the scene-graph

• Couples rendering and animation to the same frame rate

Implementing Dynamics Within the Scene-Graph

• Fairly “autonomous” dynamics can be achieved by embedding dynamics within the scene-graph

• Animations are group nodes which apply state changes to their children

• Examples include:– Animation paths– Particle systems

[Diagram: an animation node (a group node) with the animated nodes as its children.]
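An animation-path node can be sketched as a group node that pushes a time-dependent value down to its children on every update. This is a hypothetical illustration (the class name `AnimationPath` and its one-axis keyframes are invented for brevity, not taken from any particular scene-graph library):

```python
# Within-scene-graph dynamics sketch: an animation node is a group that
# applies time-varying state to its children each frame.

class AnimationPath:
    def __init__(self, keyframes):
        self.keyframes = sorted(keyframes)    # list of (time, x-position)
        self.children = []

    def position_at(self, t):
        """Linearly interpolate between the two surrounding keyframes."""
        (t0, x0), (t1, x1) = self.keyframes[0], self.keyframes[-1]
        for (ta, xa), (tb, xb) in zip(self.keyframes, self.keyframes[1:]):
            if ta <= t <= tb:
                (t0, x0), (t1, x1) = (ta, xa), (tb, xb)
                break
        if t1 == t0:
            return x0
        alpha = (t - t0) / (t1 - t0)
        return x0 + alpha * (x1 - x0)

    def update(self, t):
        """Push the animated state down to all child nodes."""
        x = self.position_at(t)
        for child in self.children:
            child["x"] = x

door = {"x": 0.0}                               # a child node's state
anim = AnimationPath([(0.0, 0.0), (2.0, 4.0)])  # slide from x=0 to x=4
anim.children.append(door)

anim.update(1.0)
print(door["x"])   # -> 2.0, halfway along the path
```

Because the animation lives in the graph rather than in application code, loading the scene loads its behaviour too, which is what makes this model compose well.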

Example Animation and Particle System

Problems with Within-Scene-Graph Model

• It is difficult to get all the code in there
– e.g. it is difficult to write code that interfaces to other applications
– There have been some valiant attempts, such as VRML97/X3D

• It does provide for easy composition of scenes

• Different parts of the scene do need an explicit way of talking to each other
– In VRML/X3D there are routes
– In many other systems there is an event system

• Many systems provide scripting language support
– Python and Lua are very common
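The routes mechanism can be sketched as follows. This is not real VRML/X3D syntax or API, just an illustration of the idea: an event emitted on one node's output field is forwarded to another node's input field.

```python
# Hypothetical sketch of VRML/X3D-style routes: nodes talk to each
# other only through declared (out_field -> in_field) connections.

class EventNode:
    def __init__(self):
        self.fields = {}
        self.routes = []      # list of (out_field, target_node, in_field)

    def route(self, out_field, target, in_field):
        """Declare a connection, like ROUTE node.out TO other.in."""
        self.routes.append((out_field, target, in_field))

    def emit(self, out_field, value):
        """Set an output field and forward the event along every route."""
        self.fields[out_field] = value
        for src, target, dst in self.routes:
            if src == out_field:
                target.receive(dst, value)

    def receive(self, in_field, value):
        self.fields[in_field] = value

sensor = EventNode()
light = EventNode()
sensor.route("touched", light, "on")   # wire the sensor to the light

sensor.emit("touched", True)
print(light.fields["on"])   # -> True
```

The point is that neither node holds a hard-coded reference into the other's logic; the wiring is data, so scenes can be composed without editing code.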

Execution Model Ties Everything Together

• So far we’ve talked about a disparate set of systems:
– A master environment
– Separate representations for different output modes
– User interfaces for controlling the environment

• The execution model “glues” all these parts together
– Closely related to distributed systems as well

Execution Models Tying Things Together

• Example:
– The position of an object is changed
– The update needs to be reflected in:
• The master database
• The different scene-graphs
• Over the network (if connected)
– How can all of this be coordinated?

• Two main models:
– Kernel model
– Actor model

Simple Kernel Model

• Treats a VR application like a traditional graphical application:

while (true) {
    read_trackers();
    set_body_position(); // Changes scene graph
    do_animation();      // Changes scene graph
    render_left_eye();
    render_right_eye();
    render_sound();
    poll_trackers();
}

do_animation

• After the user’s position is set by read_trackers and set_body_position (see later in the course) …

• … do_animation manipulates the scene

• As we have noted, it is responsible for:
– Animations, physics, interaction

• This is where the standalone or all the within-scene-graph dynamics are updated

Problems with Simple Kernel Model

• Everything happens in serial order

• Rendering only happens at a fixed rate, so if part of the animation slows down, the rendering slows down (very noticeable in many video games!)

• What if there are different output requirements, such as haptics (1000Hz)?

• What if the input rates are much higher (e.g. 200Hz)?

Modified Kernel Model (1)

• Has a fast loop which calls different functions at different frame rates:

while (true) {
    fast_function();
    if (elapsed > 30ms) slow_function();
}

fast_function() { read_trackers(); do_haptics(); }

slow_function() { render_left_eye(); render_right_eye(); render_sound(); }
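The same rate-divided loop can be made runnable with simulated time. The step sizes below are assumptions chosen to make the arithmetic clear (a 5 ms fast tick against the slide's 30 ms slow period), and counters stand in for the real device and rendering work:

```python
# Runnable sketch of the modified kernel model: the fast function runs
# every tick; the slow (rendering) function only runs once at least
# 30 ms of simulated time have elapsed.

FAST_STEP_MS = 5          # assume each fast-loop tick takes 5 ms
SLOW_PERIOD_MS = 30       # render no more often than every 30 ms

fast_calls = 0
slow_calls = 0

def fast_function():      # stands in for read_trackers(); do_haptics()
    global fast_calls
    fast_calls += 1

def slow_function():      # stands in for rendering both eyes and sound
    global slow_calls
    slow_calls += 1

elapsed = 0
for _ in range(60):       # 60 fast ticks = 300 ms of simulated time
    fast_function()
    elapsed += FAST_STEP_MS
    if elapsed >= SLOW_PERIOD_MS:
        slow_function()
        elapsed = 0       # restart the slow-period timer

print(fast_calls, slow_calls)   # -> 60 10: fast loop ran 6x per render
```

With these numbers the haptics-style work runs at six times the rendering rate, which is the whole point of splitting the loop.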

Modified Kernel Model (2)

• Uses a simple main loop for the video rendering, but “background” threads for reading devices

• The main thread is very much like the Simple Kernel Model, but the functions read_trackers and poll_trackers are now in a separate thread, and the main thread just copies information

• This is a good match to how operating systems actually schedule non-blocking input and output
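A sketch of that threaded arrangement, under some simplifying assumptions: the tracker is simulated by a loop that overwrites a single "latest sample" slot, and the main loop only ever copies that slot under a lock, so rendering never blocks on input.

```python
# Threaded kernel model sketch: a background thread polls the tracker
# and keeps only the newest sample; the main thread copies it cheaply.

import threading

latest_sample = {"pos": (0.0, 0.0, 0.0)}
lock = threading.Lock()
done = threading.Event()

def tracker_thread():
    """Simulated device reads: each new sample overwrites the last."""
    for i in range(1, 101):
        with lock:
            latest_sample["pos"] = (float(i), 0.0, 0.0)
    done.set()

t = threading.Thread(target=tracker_thread)
t.start()
done.wait()               # in a real system the threads run concurrently
t.join()

# Main loop body: grab a copy of the newest tracker data, then render.
with lock:
    frame_pos = latest_sample["pos"]
print(frame_pos)          # -> (100.0, 0.0, 0.0), the most recent sample
```

The key property is that the main thread's cost per frame is a copy, not a device read, so a slow or high-rate tracker cannot stall rendering.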

Pros / Cons of the Kernel Model

• Advantages:
– Simple to understand
– The application programmer keeps their own data structures within the loop

• Disadvantages:
– Implementation needs care because of the different update rates
– Usually requires some awareness of parallel programming issues
– Lots of complexity ends up in the do_animation() method

Actor Model

• The virtual environment is realised by a set of collaborating asynchronous processes (actors)

• Actors send messages to one another

• Processes share a common database

• The database is a central repository of the scene graph

[Diagram: a central database connected to actors for audio, two video renderers, tracking, speech, collision, and the application.]

Setting Object State in the Actor Model

• Setting the object state is often achieved using the subject-observer design pattern

• The object in the database is the subject

• The different renderers / networking systems are the observers

• When the subject’s state is updated, the observers are automatically notified
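The subject-observer pattern described above can be sketched directly; the class names and the single position field are illustrative choices, not any particular system's API:

```python
# Subject-observer sketch: the database object is the subject; each
# renderer (or the networking layer) is an observer that is notified
# automatically whenever the subject's state changes.

class Subject:
    def __init__(self):
        self._observers = []
        self._position = (0.0, 0.0, 0.0)

    def attach(self, observer):
        self._observers.append(observer)

    def set_position(self, pos):
        self._position = pos
        for obs in self._observers:      # automatic notification
            obs.notify(pos)

class Renderer:
    def __init__(self, name):
        self.name = name
        self.seen = None                 # this renderer's copy of the state

    def notify(self, pos):
        self.seen = pos

obj = Subject()
video = Renderer("video")
audio = Renderer("audio")
obj.attach(video)
obj.attach(audio)

obj.set_position((1.0, 2.0, 3.0))
print(video.seen, audio.seen)   # both copies updated by one change
```

One update to the database fans out to every renderer's private copy, which is how the actor model keeps the per-modality scene representations consistent.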

Pros / Cons of the Actor Model

• Advantages:
– The application program does not care about distribution or what rendering systems are used
– Update rates are locally very good
– Scales well to multiple cores

• Disadvantages:
– Difficult to code efficiently
– Difficult to query between code modules in different actors; needs very clear separation of responsibilities
– End-to-end latency difficult to control (see next section)

Overview

• Problem Statement & Requirements

• Data Representations

• Execution Models

• Latency

End-to-End Latency

• Total time from head movement to scene movement

Mine, Mark (1993). Characterization of End-to-End Delays in Head-Mounted Display Systems, UNC Chapel Hill Computer Science Technical Report TR93-001.
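In a serial pipeline the stage delays simply add up, which is why end-to-end latency is measured over the whole chain. The stage names and durations below are made-up example numbers for illustration, not measurements from Mine's report:

```python
# Illustrative end-to-end latency budget: with a serial pipeline the
# total delay from head movement to scene movement is the sum of the
# per-stage delays. All figures are invented example values.

stages_ms = {
    "tracker sampling + filtering": 10.0,
    "transmission to host":          5.0,
    "application / simulation":     15.0,
    "rendering":                    16.7,   # one frame at ~60 Hz
    "display scan-out":             16.7,
}

end_to_end = sum(stages_ms.values())
print(round(end_to_end, 1))   # -> 63.4 ms from head motion to photons
```

Even with each stage individually fast, the sum lands well above a single frame time, which motivates overlapping stages rather than only speeding them up.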

[Figure: the end-to-end latency pipeline.]

Problems

• Single process

• For the main CPU, tracker processing is actually mostly just “waiting”; we could schedule something else if and only if (IFF) we know how long it will take
– Previously we did poll and read separately

• If there are multiple video streams, we could do them in parallel IFF rendering changes no state in the scene graph (depending on your scene graph this isn’t a good bet!)

Example Timing Problem

[Diagram: two timelines of alternating Animate and Render phases, with T0 = tracker poll and T1 = video output marked, showing an ideal case and a worst case for the delay between T0 and T1.]

Summary

• Representing the environment is difficult

• The representation has to be rich enough to capture the contents, geometry and dynamics

• Each display mode requires its own form of the environment to optimise the display

• We want to make content as rich as possible to support dynamic models
– Otherwise behaviour is expressed only in code

• At run-time there are logically concurrent processes (rendering, collision, audio etc.)

• Execution models need to reflect this concurrency
