Exploring Visuo-Haptic Mixed Reality

Christian SANDOR, Tsuyoshi KUROKI, Shinji UCHIYAMA, Hiroyuki YAMAMOTO

Human Machine Perception Laboratory, Canon Inc., 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146-8501, Japan
Email: {sandor.christian, kuroki.tsuyoshi, uchiyama.shinji, yamamoto.hiroyuki125}@canon.co.jp

Abstract  In recent years, systems that allow users to see and touch virtual objects in the same space have been investigated. We refer to these systems as visuo-haptic mixed reality (VHMR) systems. Most research projects employ a half-mirror, while few use a video see-through, head-mounted display (HMD). We have developed an HMD-based VHMR painting application, which introduces new interaction techniques that could not be implemented with a half-mirror display. We present a user study to discuss its benefits and limitations. While we could not solve all technical problems, our work can serve as an important foundation for future research.

Keywords  Haptics, Mixed Reality, Augmented Reality, User Interfaces, Interaction Techniques, Color Selection

    1 INTRODUCTION

Human perception is multi-modal: the senses of touch and vision do not operate in isolation, but are closely coupled. This observation has inspired systems that allow users to see and touch virtual objects at the same location in space (VHMR systems). Most VHMR systems have been implemented using a half-mirror to display computer graphics in the haptic workspace [8, 14, 18]. This approach achieves a better integration of vision and touch than a conventional, screen-based display; thus, user interactions are more natural. Few research projects (for example, [3]) use a video see-through HMD instead of a half-mirror. An obvious advantage of the HMD is that the user's view of the real world and the computer graphics are not dimmed. While this definitely increases the realism of the virtual objects, it is hard to present a consistent scene to the user: the real world, computer graphics, and haptic forces have to be aligned very precisely.

In this paper, we want to show that the HMD-based approach has significant advantages, as novel interaction techniques can be implemented. Figure 1 shows a user who paints with a virtual brush on a virtual teacup. He can see and feel the brush, as it is superimposed over a PHANTOM [17]. The user can feel that he is holding a cylindrical object in his right hand. Combined with the visual sensation, he experiences a believable illusion of a real brush. To achieve this effect, two ingredients are necessary: fully opaque occlusion of real-world objects by computer graphics, and handmasking. Handmasking [12] refers to the correct occlusions between the user's hands and virtual objects. These effects could hardly be implemented with a half-mirror display.

Our contributions in this paper are: First, a novel, simple registration method for VHMR systems. Second, we have created a VHMR painting application that enables users to paint more intuitively on 3D objects than in other approaches. Third, the interaction techniques for this painting application contain novel elements: bi-manual interaction in a VHMR system, and the transformation of a haptic device into an actuated, tangible object.

    2 RELATED WORK

For precise alignment of MR graphics and haptics, a good solution seems to be the method proposed by Bianchi et al. [3]. Our approach is not as precise and robust; however, it is much easier to implement.

Our VHMR painting application was strongly inspired by the dAb system [2], in which a PHANTOM is used in a desktop setup to imitate the techniques used in real painting. Sophisticated paint-transfer functions and brush simulations are used in that system. While we cannot compete with these, we offer the possibility to draw on 3D objects. More importantly, we take the painted objects out of the screen and place them next to the haptic device. By removing the separation of display space and interaction space, we believe we achieve a much more intuitive user interface. Our brush closely resembles a fude brush (commonly used in Japanese calligraphy). Two previous projects have already been conducted on performing calligraphy with a PHANTOM [24, 20]; again, these are desktop-only systems. Our brush paints directly on the texture of a 3D object. For 2D input devices, this was done early on by Hanrahan and Haeberli [5]; recently, commercial products such as ZBrush [13] offer this functionality. Painting on 3D objects with a haptic device has already been presented by Johnson and colleagues [9]. The commercial Freeform system [16] offers even more manipulation methods for 3D objects.

Figure 1: View through an HMD in our VHMR painting application.

Regarding the interaction techniques for the painting application, we have picked up an interesting idea from Inami and colleagues [7]: they used a projection-based system to hide their haptic device, whereas we camouflage our haptic device with overlays in an HMD. Our interaction technique of picking colors from the real world is in parts similar to Ryokai and colleagues' I/O Brush [15]; we discuss its relation to our work in detail in Section 6. In contrast to most other research in VHMR, we enable users to perform direct interaction with both hands. Walairacht et al. [23] allow bi-manual interaction with virtual objects in MR; however, their registration results are not as precise as in our system.

3 CHALLENGES FOR VHMR APPLICATIONS

Figure 2 gives an overview of the processes within a human user and the VHMR system she is using. The human's sensori-motor loop receives signals through the visual and haptic channels and turns these into actions, e.g., into hand movements. A similar loop can be found in the technological components that implement a VHMR system. Sensors and trackers provide information about the real world. These signals are interpreted by a controller and turned into visual and haptic output. A unique feature of haptic systems is the mechanical coupling of these two loops: the haptic device is controlled by both sensori-motor loops, the human's and the computer's.

The upper half of Figure 2 describes a stand-alone visual MR system, whereas the lower half describes a stand-alone haptic system. In a VHMR system, the interaction space and the space for visual augmentations are merged. Thus, the user can observe her own hands performing interactions (arrow 1 in Figure 2). This seems to be a benefit for many interactions. However, this benefit also comes with a new problem: the core challenge in VHMR is to combine the haptic and the visual system consistently and maintain this consistency at all times (arrow 2 in Figure 2). Based on this observation, we can identify several challenges:

Registration  In spatial applications, registration refers to the precise alignment of various coordinate systems. For conventional MR, the alignment of the real world and computer graphics is still being investigated. For VHMR, a new challenge occurs: the spatial locations of haptic and visual output must match perfectly. A discontinuity between these two channels destroys the illusion that VHMR tries to create. Thus, precise tracking of the haptic device (arrow 3 in Figure 2) is crucial.

Performance  The haptic and visual channels impose different constraints on the system's performance. For a stable visual impression, 25 Hz is sufficient. However, the haptic channel needs a much higher update rate: typically, the actuation of a haptic device (called the servo loop in the haptics literature) should happen at at least 1000 Hz to avoid perceivable force discontinuities.
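As a rough illustration of these two rates, the following C++ sketch shows one common way to decouple a 1000 Hz servo thread from a ~30 Hz visual loop via shared state. This is illustrative only (all names are assumptions, not from our implementation); haptic libraries such as OpenHaptics run the servo loop internally.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct SharedState {
    std::mutex m;
    double penPos[3] = {0.0, 0.0, 0.0};  // latest tracked pen position (mm)
};

std::atomic<bool> running{true};

// Haptic servo loop: target period 1 ms (1000 Hz).
void servoLoop(SharedState& s) {
    auto next = std::chrono::steady_clock::now();
    while (running) {
        {
            std::lock_guard<std::mutex> lock(s.m);
            // read s.penPos, evaluate contact forces, command motors ...
        }
        next += std::chrono::microseconds(1000);
        std::this_thread::sleep_until(next);  // keep a stable 1 ms period
    }
}

// Visual loop: ~30 Hz is enough for a stable visual impression.
void renderLoop(SharedState& s, int frames) {
    for (int i = 0; i < frames && running; ++i) {
        {
            std::lock_guard<std::mutex> lock(s.m);
            // snapshot s.penPos for drawing the virtual brush ...
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(33));
    }
}

int main() {
    SharedState s;
    std::thread servo(servoLoop, std::ref(s));
    renderLoop(s, 300);   // run the visual loop for ~10 seconds
    running = false;      // then shut the servo thread down
    servo.join();
}
```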

Stable force rendering  Achieving stable force rendering in a haptic system is already difficult [4]. In a VHMR system, this challenge is even harder. The spatial relation between the haptic device and the virtual objects is determined by sensors. These sensors have limitations regarding robustness, update rate, and accuracy, and these limitations are propagated to the force rendering. For example, a jitter of 0.5 millimeters will hardly be perceived on the visual channel; on the haptic channel, it leads to force discontinuities, as will be discussed in Section 5.

Human-computer interaction  In [10, 11], systems have been presented that pursue a vision similar to ours: they combine MR with 3D prints to enable users to feel the augmented objects. Users of those systems liked this combination very much. However, the systems are clearly limited in flexibility, since 3D prints take a long time to produce and cannot be modified easily. These problems could be overcome by VHMR. VHMR is primarily concerned with merging the haptic and visual worlds. The immersiveness of the user experience can be further enhanced by merging the virtual objects better with the real world; a variety of techniques have already been proposed for this purpose: shadows [19] and occlusions [12].

    4 VHMR PAINTING APPLICATION

Our vision for the painting application was that users should be able to paint with a virtual brush on a virtual, earthenware teacup (our teacup is a traditional Japanese teacup, called a chawan). Our goal was to make this interaction as easy as possible. We decided to let the users control the virtual teacup with a graspable object. Additionally, we have invented a new interaction technique to make color selection from real objects very easy. Typically, our users were drawing the appearance of real-world objects onto the teacup.

Figure 2: Schematic overview of a VHMR system. Arrows represent data flow. The haptic device is mechanically coupled with sensors, motors, and the user's hands. The implications of the numbered, bold arrows are discussed in Section 3.

Next, we explain the hardware (Section 4.1) and software (Section 4.2) that we have used in our prototype. Then, we describe our registration method (Section 4.3) and the implementation of color selection (Section 4.4).

    4.1 Hardware

The haptic device in our experiment is a PHANTOM Desktop [17]. We used Canon's COASTAR (Co-Optical Axis See-Through for Augmented Reality)-type HMD [21] for visual augmentations. It is lightweight (327 grams) and provides a wide field of view (51 degrees in the horizontal direction). It is stereoscopic with a resolution of 640×480 for each eye. A special feature of this HMD is that the axes of its two video cameras and displays are aligned. For accurate position measurements, we have used a Vicon tracker [22]. This is a high-precision optical tracker, typically used in motion-capture applications. It delivers an update rate of up to 150 Hz and high absolute accuracy (0.5 mm precision). All software was deployed on one PC with 1 GB RAM, dual 3.6 GHz Intel Xeon CPUs, a GeForce 6600GT, and two Bt878 framegrabbers. The operating system was Gentoo Linux with a 2.4.31 kernel.

    4.2 Software

For rendering the computer graphics, we used plain OpenGL with an additional model loader. Furthermore, we have employed two frameworks: OpenHaptics (version 2.0; included with the PHANTOM) and MR Platform [21] (internal version). MR Platform provides a set of functions for implementing MR applications; for example, calibration, tracking, and handmasking. Our implementation of handmasking performs color-based detection of the user's hands in the video images obtained from the HMD's cameras. This information is used to mask the corresponding part of the computer graphics scene via OpenGL's stencil buffer. As a result, the user's hands are always visible (see Figure 1).
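The stencil-based masking step can be sketched as follows in legacy OpenGL (1.4+). This is illustrative only: the per-pixel hand mask itself comes from the color-based skin detection (here assumed to be 0/1 bytes, as MR Platform would provide), and the function name is an assumption.

```cpp
// Illustrative sketch of stencil-based hand masking in legacy OpenGL.
#include <GL/gl.h>

void composeFrame(const unsigned char* videoRGB,   // HMD camera image
                  const unsigned char* handMask,   // 1 where a hand is seen
                  int w, int h) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // 1. Draw the camera image as the background.
    glWindowPos2i(0, 0);
    glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, videoRGB);

    // 2. Copy the hand mask into the stencil buffer.
    glWindowPos2i(0, 0);
    glDrawPixels(w, h, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, handMask);

    // 3. Render the virtual scene only where no hand was detected, so the
    //    user's hands always occlude the computer graphics.
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    // ... draw brush, teacup, etc. ...
    glDisable(GL_STENCIL_TEST);
}
```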

Figure 3: Setup for our VHMR painting application. (a) Photo. (b) Schematic drawing including named coordinate systems.

    4.3 Registration Procedure

From a calibration perspective, we have three relevant objects (see Figure 3): the user's HMD, the base of the PHANTOM, and the PHANTOM pen. The relation between the attached markers and these physical objects can be calibrated using MR Platform's calibration tools. For rendering the computer graphics, we just use the Vicon's tracking data: its update rate is high enough, and its jitter is almost not visually perceivable by a user. The graphical framerate was a constant 30 Hz. For the haptic rendering, we had to choose a different approach, since OpenHaptics' HLAPI bases its force rendering on the values of the PHANTOM's encoders, whose absolute position accuracy is poor (we measured errors of up to 20 mm). Essentially, the PHANTOM's measurements are non-linearly distorted. To keep the haptic and MR worlds consistent, we determine the offset between the PHANTOM's measurement and the real pen position (as determined by the Vicon) in every haptic rendering pass. The inverse of this offset is applied to the geometry that is passed to HLAPI. Thus, the haptic experience matches the visual experience, although they happen internally in two different locations. This approach results in haptic rendering that jitters with the same amplitude as the Vicon's data.
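A minimal sketch of this per-pass offset correction is given below. The code is illustrative, not the authors' implementation; it assumes both pen positions are expressed in the same world frame and that the geometry is available as a plain vertex array before being handed to the haptic library.

```cpp
// Illustrative sketch of the offset correction described above.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

// Called once per haptic rendering pass (~1 kHz).
void hapticPass(const Vec3& phantomPen,  // pen position from the encoders
                const Vec3& viconPen,    // real pen position from the Vicon
                Vec3* vertices, int n) { // scene geometry, world coordinates
    // Offset between where the PHANTOM thinks the pen is and where it
    // actually is. Shifting the geometry by the inverse of this offset
    // makes contact occur at the visually correct location: when the real
    // pen reaches a surface point, the shifted surface point coincides
    // with the encoder-reported pen position.
    Vec3 offset = viconPen - phantomPen;
    for (int i = 0; i < n; ++i)
        vertices[i] = vertices[i] - offset;
    // ... pass the shifted vertices to the haptic library (e.g., HLAPI) ...
}
```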

    4.4 Cross-Reality Color Picking

We allow users to select colors from real-world objects (see Figure 5). We use a slightly different setup to explain the mathematics behind this interaction technique: a tracked pen is used to pick a color from a real teapot (see Figure 4). The known parameters are the 6DOF poses C and P (see Figure 3b). During camera calibration we have determined the focal length of the camera f (unit: pixels) and the 2D coordinates of the camera's principal point (px, py) (unit: pixels). The unknown parameters are the 6DOF pose of the pen's tip in camera coordinates M and its translation component (Tx, Ty, Tz). To obtain the 2D coordinates of the pen's projection point (u, v) (unit: pixels), we proceed as follows:

M = C^{-1} P    (1)

u = -f (Tx / Tz) + px,    v = f (Ty / Tz) + py    (2)

Figure 4: Mathematical description of cross-reality color picking.

Since u and v refer to the pixel coordinates of the ideal image, we must transform them to the pixel coordinates of the actual, distorted camera image. MR Platform's lens-distortion model is a quintic radial distortion model; its distortion parameters were gathered during camera calibration. MR Platform's utility classes allow us to determine the corresponding pixel in the real image. We read the (R, G, B) value of that pixel and are done.
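The computation of Equations (1) and (2) can be sketched as follows (illustrative C++; rigid transforms are assumed, and the distortion correction and pixel lookup are left to MR Platform's utilities, so only the undistorted projection is shown).

```cpp
// Rigid transforms stored as rotation R (row-major 3x3) plus translation t.
// C is the camera pose, P the pen pose, both in world coordinates.
struct Pose { double R[3][3]; double t[3]; };

// Translation component of M = C^{-1} P (Eq. 1). For a rigid transform,
// C^{-1} = (R_c^T, -R_c^T t_c), so T = R_c^T (t_p - t_c).
void penTipInCamera(const Pose& C, const Pose& P, double T[3]) {
    double d[3] = { P.t[0] - C.t[0], P.t[1] - C.t[1], P.t[2] - C.t[2] };
    for (int i = 0; i < 3; ++i)   // multiply by the transpose of C.R
        T[i] = C.R[0][i] * d[0] + C.R[1][i] * d[1] + C.R[2][i] * d[2];
}

// Ideal (undistorted) pixel coordinates of the pen tip (Eq. 2).
// f: focal length in pixels; (px, py): principal point.
void project(const double T[3], double f, double px, double py,
             double& u, double& v) {
    u = -f * T[0] / T[2] + px;
    v =  f * T[1] / T[2] + py;
    // The lens-distortion correction and the (R,G,B) lookup in the real
    // camera image would follow here, via MR Platform's utility classes.
}
```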

    5 USER TEST

We tested our painting application by conducting a user test. We asked 14 subjects to paint teacups with our system. The procedure was:

1. General training (about 1 minute): The users employed the viewer application to touch a virtual object. This made them experience the force sensation in our system and familiarized them with the graspable object for controlling the virtual object.

Figure 5: Interactions for cross-reality color picking. (a) Initial state. (b) Move the brush to a real-world object and press the button on the PHANTOM pen. (c) Apply the selected color to the teacup.

2. Training for the painting application (about 2 minutes): We put several objects such as vegetables and fruit on the table. Then, the subjects could familiarize themselves with picking colors from those objects and painting on the cup.

3. Painting (about 5 minutes): Next, the subjects could paint whatever they felt like.

4. Questionnaire (about 3 minutes): Finally, the subjects completed a feedback questionnaire.

The results of the test are shown in Figures 6 and 7. We can draw the following positive observations, based on the users' feedback:

Very intuitive system  Even in the extremely short time slots of our study, subjects had no problems understanding and using our system. This makes us very confident about its ease of use. It would hardly be possible to achieve similar results in this short time using a standard 3D modeling application.

Good overall system concept  Color picking, the overall visual appearance, and the force sensation received positive feedback from almost all users. One user commented: “I like the function of moving my viewpoint, the cup and the pen at the same time. Commercial painting tools allow the user to move these things only separately, but not in parallel.”

Artistic expression possible  As Figure 6 shows, it was possible to create interesting pieces of art with our system.

    However, several points received criticism:

Jittery haptic experience  There is a clear limit to the haptic experience in our system: it is quite unstable; thus, it is hard to draw straight lines. One subject tried to write a Japanese character. As can be clearly seen in his drawing (see Figure 7), our system was too jittery to allow such precise lines. Almost all users complained about this in the questionnaire.

Problems in depth perception  The stereoscopic effect of our HMD does not work when the cup is too close to the eyes: convergence can only happen at distances of more than 30 cm. Some of our subjects held the cup closer than 30 cm in front of their eyes, so they did not get any stereoscopic effect. Thus, they complained that the distance between the pen and the cup was not easy to judge.

Over-simplification of the brush  The visual impression of the brush had two big problems that users did not like. First, the computer graphics of the brush's bristles are not natural. Second, the area of color application does not match the bristles' position: when we apply color to the texture, we just render a circle on the surface. The circle's center is at the contact point and its radius scales with the length of the bent bristles.
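A minimal sketch of this circle-splat paint model is shown below. It is illustrative only: the paper gives no code, and the scaling constant and all names are assumptions.

```cpp
#include <cstdint>
#include <vector>

// Paint a filled circle into an RGB texture at the contact point's texel,
// with the radius scaled by how far the bristles are bent.
void splatCircle(std::vector<uint8_t>& texRGB, int texW, int texH,
                 double contactU, double contactV,  // contact point in [0,1]^2
                 double bentBristleLen,             // from the brush model
                 uint8_t r, uint8_t g, uint8_t b) {
    const double radiusScale = 40.0;                // tuning constant (assumed)
    int cx  = static_cast<int>(contactU * texW);
    int cy  = static_cast<int>(contactV * texH);
    int rad = static_cast<int>(bentBristleLen * radiusScale);
    for (int y = cy - rad; y <= cy + rad; ++y) {
        if (y < 0 || y >= texH) continue;
        for (int x = cx - rad; x <= cx + rad; ++x) {
            if (x < 0 || x >= texW) continue;
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) > rad * rad) continue;
            size_t i = 3 * (static_cast<size_t>(y) * texW + x);
            texRGB[i] = r; texRGB[i + 1] = g; texRGB[i + 2] = b;
        }
    }
}
```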

Limitations of the PHANTOM  Another problem in our system was that the working area was very small (160 mm × 120 mm × 120 mm). This made the brush interaction unnatural.

    6 DISCUSSION

While we could not overcome all technical difficulties, our VHMR painting application has shown new directions for human-computer interaction. While other systems perform better on particular aspects of the painting interaction (e.g., better computer graphics [2], or better haptic rendering [20]), the overall concept of our system contains novel points. We exploit the advantages of using an HMD by allowing users to naturally interact with real-world objects, as exemplified by our new cross-reality color-picking technique. Also, by fully occluding the tip of the PHANTOM with a computer graphics representation of a brush, we create a virtual, tangible device. Furthermore, we support bi-manual interaction in a VHMR system.

To wrap up, we would like to discuss the insights that we have gained from our painting application.

Color picking  Our new interaction technique for cross-reality color picking was one of the features that our users liked a lot. It seems to go very well with the metaphor of MR. One part of the interaction is very similar to the I/O Brush: acquiring colors from real-world objects by touching them with a brush. However, actually using these colors is very different in our system. The I/O Brush still needs a computer screen to paint on; in our system, we can paint directly on objects located in the real world, eliminating the unnatural interaction of using a computer screen as a canvas.

Occlusions  We have used two mechanisms to provide correct occlusions. First, we have used color-based hand-masking. Second, we have masked our tracked objects by rendering their geometries into OpenGL's depth buffer. Both methods were sufficient for our prototype, but both have inherent problems. For the tracked objects, we had to determine their geometries by hand, which was both cumbersome and imprecise; for example, we did not measure the geometries of the attached Vicon markers. It seems to us that a 3D scanner would have been very useful. Even more useful would be a method that could deal with occlusions at runtime, for example a real-time computation of depth maps [6].
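The depth-buffer masking step for tracked objects can be sketched as follows (illustrative OpenGL; the proxy-drawing and scene-drawing calls are placeholders). The real object's proxy geometry is drawn with depth writes but no color writes, so virtual fragments behind it fail the depth test and the video of the real object stays visible there.

```cpp
#include <GL/gl.h>

void drawWithRealObjectOcclusion() {
    // 1. The video background has been drawn; the depth buffer is cleared.

    // 2. Render the hand-measured proxy geometry of each tracked real
    //    object, writing depth but no color.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    // ... drawProxyGeometry(trackedObjectPose); ...   (placeholder)
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // 3. Render the virtual scene normally; fragments behind the proxy
    //    geometry are rejected by the depth test.
    // ... drawVirtualScene(); ...                     (placeholder)
}
```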

The hand-masking had even more problems. First of all, this approach does not yield high performance. Second, the two hands cannot be distinguished; this led to our decision to make users wear a glove on their left hand. Third, this method is error-prone, especially when we used lots of colorful real-world objects: some of them were masked because their color was similar to the user's hand. The real-time depth maps that we proposed above could also be applied to the occlusion problem of the user's hands.

Merging the haptic and visual worlds  Essentially, we have adapted everything in our system to the real world. This included adapting the haptic world according to the measurements of the Vicon. The usage of such a highly accurate, but not very robust, tracker led to problems for the haptic impression: although the visual impression was correct at all times, the haptic world was perceived as unstable by users. As a result, our system was not well balanced. We plan to implement Bianchi et al.'s method [3] to overcome this limitation.

Performance considerations  Although our system performed well in the painting application, we could clearly see its limits: objects with a polygon count over 200,000 resulted in bad performance. We could improve this by load-balancing the different parts of our application, either by balancing better across our CPUs (better threading) or by distributing our system across several networked PCs. Also, we could move parts of the calculations to the GPU.

To optimize even more, we are considering buying specialized hardware for physics simulation and collision detection (e.g., PhysX [1]). Since the heaviest task for huge models is collision detection, we expect great benefits from this approach.

Future Work  As one might remark, we could have implemented our painting application without using a haptic device, by using a real, tracked cup and brush; only the applied color would be MR. Considering just the painting application, this is definitely true. However, our work is a first step towards a bigger vision: we would like to enable users to interact naturally with arbitrary virtual objects.

When using the PHANTOM device for VHMR, pen-shaped tools can be realized. In our example application, we have implemented a brush. Future work could be to build a variety of other tools with the PHANTOM, e.g., drills or hammers. Other haptic devices would enable us to build other kinds of tools.

Ultimately, we would like to use a general-purpose haptic device that can simulate almost any real-world tool. For example, the SPIDAR haptic device [23] seems promising in this regard. With it, we could also get rid of our graspable object and instead let users touch the to-be-manipulated object directly. We are convinced that once we have implemented such a system, it will have a major impact on the research fields of MR, haptics, and human-computer interaction.

    ACKNOWLEDGEMENTS

We would like to thank all of our colleagues who participated in this project. Special thanks to Dai Matsumura (Vicon setup), Hiroyuki Kakuta (artistic advice, movie editing), and Yukio Sakagawa (last-minute review).

    REFERENCES

[1] AGEIA Inc. PhysX physics processor. http://www.ageia.com/products/physx.html.

[2] William V. Baxter, Vincent Scheib, and Ming C. Lin. dAb: Interactive haptic painting with 3D virtual brushes. In Eugene Fiume, editor, SIGGRAPH 2001, Computer Graphics Proceedings, pages 461–468. ACM Press / ACM SIGGRAPH, 2001.

[3] G. Bianchi, B. Knörlein, G. Székely, and M. Harders. High precision augmented reality haptics. In Eurohaptics 2006, pages 169–177, July 2006.

[4] Seungmoon Choi and Hong Z. Tan. Toward realistic haptic rendering of surface textures. IEEE Comput. Graph. Appl., 24(2):40–47, 2004.

[5] Pat Hanrahan and Paul Haeberli. Direct WYSIWYG painting and texturing on 3D shapes. In SIGGRAPH '90: Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pages 215–223, New York, NY, USA, 1990. ACM Press.

[6] Hauke Heibel. Real-time computation of depth maps with an application to occlusion handling in augmented reality. Master's thesis, Technische Universität München, 2004.

[7] Masahiko Inami, Naoki Kawakami, Dairoku Sekiguchi, Yasuyuki Yanagida, Taro Maeda, and Susumu Tachi. Visuo-haptic display using head-mounted projector. In VR '00: Proceedings of the IEEE Virtual Reality 2000 Conference, pages 233–240, New Brunswick, New Jersey, USA, 2000. IEEE Computer Society.

    [8] Industrial Virtual Reality Inc. Immersive touch. http://www.immersivetouch.com/.

[9] David Johnson, Thomas V. Thompson II, Matthew Kaplan, Donald D. Nelson, and Elaine Cohen. Painting textures with a haptic interface. In Proc. of IEEE Virtual Reality Conference, pages 282–285, 1999.

[10] Daisuke Kotake, Kiyohide Satoh, Shinji Uchiyama, and Hiroyuki Yamamoto. A hybrid and linear registration method utilizing inclination constraint. In ISMAR '05: Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 140–149, 2005.

[11] Woohun Lee and Jun Park. Augmented foam: A tangible augmented reality for product design. In ISMAR '05: Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 106–109, 2005.

[12] Toshikazu Ohshima and Hiroyuki Yamamoto. A mixed reality styling simulator for automobile development. In International Workshop on Potential Industrial Applications of Mixed and Augmented Reality (PIA), Tokyo, Japan, October 2003.

[13] Pixologic Inc. ZBrush homepage. http://pixologic.com.

[14] Reachin Technologies AB. Reachin display. http://www.reachin.se/products/Reachindisplay/.

[15] Kimiko Ryokai, Stefan Marti, and Hiroshi Ishii. Designing the world as your palette. In CHI '05: CHI '05 extended abstracts on Human factors in computing systems, pages 1037–1049, New York, NY, USA, 2005. ACM Press.

    [16] Sensable Technologies, Inc. Freeform systems. http://sensable.com/products/3ddesign/freeform/FreeForm_Systems.asp.

[17] Sensable Technologies, Inc. PHANTOM Desktop haptic device. http://www.sensable.com/products/phantom_ghost/phantom-desktop.asp.

    [18] SenseGraphics AB. Sensegraphics. http://www.sensegraphics.com/products_immersive.html.

[19] Natsuki Sugano, Hirokazu Kato, and Keihachiro Tachibana. The effects of shadow representation of virtual objects in augmented reality. In ISMAR '03: Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 76–83, 2003.

[20] Yuichi Suzuki, Yasushi Inoguchi, and Susumu Horiguchi. Brush model for calligraphy using a haptic device. Transactions of the Virtual Reality Society of Japan, 10(4):573–580, 2005.

[21] Shinji Uchiyama, Kazuki Takemoto, Kiyohide Satoh, Hiroyuki Yamamoto, and Hideyuki Tamura. MR Platform: A basic body on which mixed reality applications are built. In Proceedings of the International Symposium on Mixed and Augmented Reality, pages 246–255, Darmstadt, Germany, October 2002.

    [22] Vicon Peak. Vicon motion capturing systems. http://www.vicon.com/.

[23] Somsak Walairacht, Keita Yamada, Shoichi Hasegawa, Yasuharu Koike, and Makoto Sato. 4 + 4 fingers manipulating virtual objects in mixed-reality environment. Presence: Teleoperators & Virtual Environments, 11(2):134–143, 2002.

[24] Jeng-Sheng Yeh, Ting-Yu Lien, and Ming Ouhyoung. On the effects of haptic display in brush and ink simulation for Chinese painting and calligraphy. In PG '02: Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, page 439, Washington, DC, USA, 2002. IEEE Computer Society.

Figure 6: Drawings created with our painting application.

Figure 7: Problem in our system: straight lines are hard to draw. (a) One subject has drawn the character chikaku (Japanese for: perception). (b) Later, we asked the same subject to draw the character with a fude brush on a piece of paper.