A Time Delay Compensation Method Improving Registration for Augmented Reality



Research Methods in Computing (COMP6008)
November 21, 2013, University of Southampton (ECS)

A Time Delay Compensation Method Improving Registration for Augmented Reality [1]

Jean-Yves Didier, David Roussel and Malik Mallem
Laboratoire Systèmes Complexes
Université d'Évry Val d'Essonne
{didier, roussel, mallem}@iup.univ-evry.fr

Presented by Mohamed Hisham Hassan Ahmed, [email protected]
26498235

Overview

‣ Introduction

‣ Background

‣ Proposed Solution

‣ Simulation

‣ Future Work


Augmented Reality (AR)

‣ Overlay information

‣ Head-Mounted Displays (HMD)

- See-through

Introduction Augmented Reality

(Vuzix augmented reality STAR™ 1200XLD product [2])

Fields
‣ Educational

- Museums

‣ Industry

- Construction

- Design

‣ Advertisements

- Location-based ads

(Bentley Utility AR system [3])

Introduction Augmented Reality


Registration?

(KML/HTML Augmented Reality Mobile Architecture (KHARMA) [4])


Introduction Registration problem


Latency

“Temporal shift between the worlds.” [1]


Introduction Registration problem


Overview
‣ Introduction
‣ Background
‣ Proposed Solution
‣ Simulation
‣ Future Work

Prediction

‣ Predict the pose over the computation delay
• Kalman filtering (see the sketch below)
• Monte Carlo particle filtering
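The deck does not say how such a predictor works, so here is a minimal sketch, assuming a constant-velocity model, a 1-D head angle, and a 30 fps frame time (the class name and noise values are illustrative choices of mine, not the paper's implementation): the filter fuses each tracker measurement and extrapolates one frame ahead, so rendering can start from the pose expected at display time.

```python
import numpy as np

class ConstantVelocityPredictor:
    """Toy 1-D Kalman filter predicting a head angle one frame ahead.
    Frame time and noise levels are illustrative assumptions."""

    def __init__(self, dt=1.0 / 30.0):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [angle, angular rate]
        self.H = np.array([[1.0, 0.0]])              # only the angle is measured
        self.Q = np.diag([1e-4, 1e-2])               # process noise (assumed)
        self.R = np.array([[1e-3]])                  # measurement noise (assumed)
        self.x = np.zeros((2, 1))                    # state estimate
        self.P = np.eye(2)                           # state covariance

    def step(self, measured_angle):
        """Fuse one tracker measurement (radians) and return the angle
        predicted one frame ahead, i.e. where the head should be at display time."""
        # time update to the measurement instant
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Q
        # measurement update
        y = np.array([[measured_angle]]) - self.H @ x_pred
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ y
        self.P = (np.eye(2) - K @ self.H) @ P_pred
        # extrapolate one further frame to cover the rendering latency
        return float((self.F @ self.x)[0, 0])
```

A full 6-DOF version would track position and orientation jointly, but the structure is the same.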


Background Previous Solutions


Post-rendering Techniques
‣ Correction applied after rendering (see the sketch below)
‣ Degrees of freedom (DOF)
- 6 DOF instead of 2 DOF


Six Degrees of freedom [5]
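A minimal sketch of the 2-DOF idea referenced above, assuming a pinhole camera and a small residual rotation (the function, its default focal length, and the use of NumPy are my assumptions, not the paper's formalism): instead of re-rendering, the finished frame is shifted by roughly f·tan(Δθ) pixels.

```python
import numpy as np

def deflect_frame(frame, d_yaw_rad, d_pitch_rad, focal_px=500.0):
    """Approximate a small residual camera rotation by a 2D image shift.

    frame       : HxWx3 rendered image (NumPy array)
    d_yaw_rad   : residual yaw between predicted and updated viewpoint
    d_pitch_rad : residual pitch
    focal_px    : focal length in pixels (illustrative value)
    """
    # For small angles, a yaw/pitch of dθ moves the image about f·tan(dθ) pixels.
    dx = int(round(focal_px * np.tan(d_yaw_rad)))
    dy = int(round(focal_px * np.tan(d_pitch_rad)))
    shifted = np.zeros_like(frame)
    h, w = frame.shape[:2]
    # Copy the overlapping region; the uncovered border stays black.
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    shifted[dst_y, dst_x] = frame[src_y, src_x]
    return shifted
```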

Background Previous Solutions


Overview
‣ Introduction
‣ Background
‣ Proposed Solution
‣ Simulation
‣ Future Work

Proposed Solution
‣ Prediction + post-rendering
- Software graphics pipeline
• AR prediction question: what should happen when a new dataset becomes available while a frame is still being rendered? (see the pipeline sketch below)
‣ Authors' claims
- Fast post-processing
- Significantly reduces rotational errors
- Reduces translational registration errors

Proposed Solution Keypoints


Proposed Solution Software graphics pipeline


[Rough sketch of the paper's pipeline diagram [1]: datasets feed a Predictor; the Renderer produces rendered frames from the prediction (dataset at t = 1); the Compensator applies a corrected prediction derived from later datasets (t > 1); the result goes to the Display.]
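A toy skeleton of how the pipeline above could be organised (the stage names follow the sketch; the code, data structures and timings are my own assumptions, not the authors' implementation or an OpenGL API): the renderer works from the first prediction, and if a newer dataset has arrived by the time the frame is finished, the compensator patches the rendered frame instead of rendering it again.

```python
import time
from collections import deque

# Toy stand-ins for the pipeline stages; names and behaviour are illustrative.
tracking_queue = deque()                 # datasets arriving from the tracker

def predict(dataset):                    # Predictor: here just passes the pose through
    return dataset["pose"]

def render(pose):                        # Renderer: pretend a full 3D pass takes ~30 ms
    time.sleep(0.03)
    return {"pose_used": pose, "pixels": "..."}

def compensate(frame, old_pose, new_pose):   # Compensator: cheap 2D pseudo-correction
    frame["shift"] = new_pose - old_pose     # e.g. turned into a pixel offset
    return frame

def display(frame):
    print("show frame rendered at pose", frame["pose_used"],
          "with shift", frame.get("shift", 0.0))

# Main loop: render from the first prediction; if a newer dataset arrived while
# rendering, fix the finished frame instead of rendering it again.
tracking_queue.extend([{"pose": 0.0}, {"pose": 0.1}])   # simulated tracker input
while tracking_queue:
    dataset = tracking_queue.popleft()
    predicted = predict(dataset)
    frame = render(predicted)
    if tracking_queue:                                   # new data arrived mid-render
        corrected = predict(tracking_queue.popleft())
        frame = compensate(frame, predicted, corrected)
    display(frame)
```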


Overview
‣ Introduction
‣ Background
‣ Proposed Solution
‣ Simulation
‣ Future Work

Setup

‣ Hardware & Software

- OpenGL library

‣ Simulation Assumption

- The 2nd (later) prediction is assumed to be the most accurate

Simulation Environment



Evaluation

‣ Post processing time

- Theoretical estimate: ≈ 4000 μs
- Measured in simulation: 3600 μs
Faster than the theoretical estimate (see the check below)
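A quick check of what 3600 μs means against the frame budget, assuming the 30 fps rate quoted later in the deck as the target:

```python
frame_budget_s = 1.0 / 30.0    # one frame at 30 fps ≈ 33.3 ms
correction_s = 3600e-6         # post-processing time measured in the simulation
print(f"correction uses {correction_s / frame_budget_s:.1%} of a 30 fps frame")
# -> about 10.8%, so the pseudo-correction fits comfortably inside one frame
```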

Results / Simulation

[Post-processing time graph with extra annotations [1]: Fig. 4 of the paper, "Time spent to apply pseudo-correction for rotational movements", plotting time in microseconds against rotation angle (y and z axes) and against translation (x and z axes), with the theoretical ≈ 4000 μs level marked. The embedded page also shows Fig. 5, "Mean error in pixel computed for rotation movements".]


Evaluation
‣ Rotational errors
- Corrected for angles up to ± 20 degrees
• Enough for human head motion at 30 fps (rough figure below)
Compensates rotational pixel errors
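A rough figure for the human-motion claim, assuming the ± 20 degrees is the rotation between two consecutive frames:

```python
max_angle_deg = 20.0        # largest per-frame rotation the correction handles well
fps = 30.0
print(max_angle_deg * fps)  # 600.0 deg/s, on the order of a fast head turn
```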

Results / Simulation

[Same embedded page from [1]: Fig. 5, "Mean error in pixel computed for rotation movements", plotting mean error in pixels against rotation angle about the y and z axes, with and without compensation. Slide caption: post-processing time graph [1].]


Evaluation
‣ Translational errors
- Dominated by perspective deformations
- Ineffective for very small translations
Corrects larger translations (i.e. > 1 length unit; see the calculation below)
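The z-axis weakness can be illustrated with the paper's own test numbers (object inside a 1-unit cube, first viewpoint 2 length units away, so the virtual focal length δ = 2); the pinhole scaling below is my simplification of the effect, not the paper's formalism:

```python
delta = 2.0                        # first viewpoint sits 2 length units from the object,
                                   # i.e. the paper's virtual focal length delta = 2

for dz in (0.1, 0.5, 1.0):         # translation toward the object along the z axis
    scale = delta / (delta - dz)   # pinhole model: projected size grows as distance shrinks
    print(f"move {dz:+.1f} units closer -> projected size x{scale:.2f}")
# Moving +1 unit (down to delta/2) doubles the projected size; a flat 2D shift of the
# rendered frame cannot reproduce that scale change, hence the larger errors in Fig. 6.
```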

Results / Simulation

[Mean pixel error for z-axis translation graph [1]: Fig. 6 of the paper, "Mean error in pixel computed for translation movements", plotting mean error in pixels against translation along the x and z axes, with and without compensation.]


Conclusion
‣ Algorithm
- Texture post-rendering + prediction
‣ Graphics pipeline
‣ Simulation justified the claims:
- fast
- reduces errors from rotational motion
- reduces translational errors


Future Work

‣ Apply the method on a real hardware device
‣ Analyse user (experimenter) feedback on the pseudo-correction


References

(1) Didier, J.-Y.; Roussel, D.; Mallem, M., "A Time Delay Compensation Method Improving Registration for Augmented Reality," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), pp. 3384-3389, 18-22 April 2005.
(2) Vuzix augmented reality STAR™ 1200XLD: http://www.vuzix.com/augmented-reality/products_star1200xld.html
(3) Bentley utility system image: http://www.hotstudio.com/thoughts/augmented-reality-part-two-challenges-opportunities
(4) AR registration problem image: http://www.hotstudio.com/thoughts/augmented-reality-part-two-challenges-opportunities
(5) Six degrees of freedom: http://www.oasisalignment.com/what-is-alignment/


Thank you! Any questions?
