A Virtual Environment for Collaborative Assembly
Xiaowu Chen, Nan Xu, Ying Li
The Key Laboratory of Virtual Reality Technology, Ministry of Education
School of Computer Science and Engineering, Beihang University
Beijing 100083, P.R. China, [email protected]
Abstract
To allow geographically dispersed engineers to perform an assembly task together, a Virtual Environment for Collaborative Assembly (VECA) has been developed to build a typical collaborative virtual assembly system. This paper presents the key parts of VECA: the system architecture, HLA-based (High Level Architecture) communication and collaboration, motion guidance based on collision detection and assembly constraint recognition, data translation from CAD to the virtual environment, and reference resolution in multimodal interaction.
1. Introduction
Virtual reality (VR) is a technology which is often
regarded as a natural extension to 3D computer
graphics with advanced input and output devices. Now
VR has matured enough to warrant serious engineering
applications [1]. As one of the important application
domains of VR, virtual assembly (VA) provides design flexibility by replacing physical objects with virtual representations of machinery parts and by providing advanced user interfaces for users to design and generate product prototypes [2]. VA can evaluate and
analyze product assembly and disassembly during the
product design stage with the goal of reducing
assembly costs, improving quality and shortening time
to market [3].
With companies and research organizations operating in a distributed fashion, much design, assembly, manufacturing, and analysis work requires the collaboration of geographically dispersed engineers. A collaborative virtual environment (CVE) is a computer system that allows remote users to work together in a shared virtual reality space [4]. As an important category of both Computer-Supported Cooperative Work (CSCW) and VR, CVE systems have been applied to military training, telepresence, collaborative design and engineering, distance training, entertainment, and many other personal and industrial applications [5]. Because of the advantages of CVE and the actual requirements of industry, collaborative virtual assembly (CVA) is becoming one of the research emphases of VA.
(This work is supported by the A.S.T. Fund (VEADAM), the National Natural Science Foundation of China (60503066), the National Research Fund (51404040305HK01015), the China Education and Research Grid (ChinaGrid) Program (CG2003-GA004), and the National 863 Program (2004AA104280) & (SIMBRIDGE).)
There have been many research efforts to develop
virtual assembly systems [6][7][8]. A representative
system is Virtual Assembly Design Environment
(VADE) [9] which allows engineers to evaluate,
analyze, and plan the assembly of mechanical systems.
VADE simulates inter-part interaction for planar and
axisymmetric parts using constrained motions along
axes/planes which are obtained from the parametric
CAD system. In this system, direct interaction is
supported through a CyberGlove.
In the field of CVA, several systems have been created. For example, the National Center for Supercomputing Applications (NCSA) and Germany's National Research Center for Information Technology (GMD) developed a collaborative virtual prototyping system over an ATM network [10]. The system integrated real-time video and audio transmission, letting engineers see the other participants in a shared virtual environment from each remote site's viewpoint position and orientation.
Jianzhong Mo et al. developed a virtual-reality-based software tool, Motive3D [11], which supports collaborative assembly/disassembly over the Internet, and presented a systematic methodology for disassembly relation modeling and for automatic path/sequence generation and evaluation, independent of any commercial CAD system. Shyamsundar et al. developed an Internet-based collaborative product assembly design (cPAD) tool [12][13]. The architecture of cPAD adopts a three-tier client/server model. In this system, a new Assembly
Representation (AREP) scheme was introduced to
improve the assembly modeling efficiency. The AREP
model at the server side can be accessed by many
designers at different locations through client browsers
implemented using Java3D.
However, most of the CVA systems above are based on client/server (C/S) or browser/server (B/S) architectures, and little effort has been devoted to CVA based on a distributed architecture. The design of certain industrial products sometimes calls for a distributed virtual environment to support collaborative assembly, such as the Virtual Environment for Collaborative Assembly (VECA) presented in this paper. VECA can build a collaborative virtual assembly system which allows geographically dispersed engineers to perform an assembly task together. VECA mainly includes HLA-based (High Level Architecture) communication and collaboration, motion guidance based on collision detection and assembly constraint recognition, data translation from CAD to the virtual environment, and reference resolution in multimodal interaction.
The rest of this paper is organized as follows. Section 2 presents the system architecture of VECA. The modules for communication and collaboration, data translation, motion guidance, and multimodal interaction are elaborated in Sections 3, 4, 5, and 6, respectively. Section 7 describes the implementation and application of VECA, and Section 8 concludes the paper with future work.
2. System Architecture
The system architecture of VECA is illustrated in
Fig.1. Once an engineer has finished designing assemblies or subassemblies using a parametric CAD system such as Pro/Engineer, he or she uses a plug-in for Pro/Engineer to translate the CAD models into triangle mesh models (Multigen OpenFlight) that include assembly constraint and geometry feature information. Other engineers download these models and can then assemble the product collaboratively in a multimodal shared VE to find design defects or obtain a feasible assembly sequence.
There are five key parts in the system.
1. Communication and collaboration: connects geographically dispersed nodes to form a distributed collaborative VE based on HLA.
2. Motion guidance: helps the user translate or rotate parts in the VE freely and precisely using collision detection and assembly constraint recognition.
3. Data translation: translates models and extracts information from the CAD system (Pro/Engineer). This module is a plug-in for Pro/Engineer developed with Pro/Toolkit and the Multigen OpenFlight API.
4. Constraint manager: dynamically maintains assembly constraint information. The design of this module and a constraint-based distributed virtual assembly model were introduced in [14][15].
5. Multimodal interaction: processes combined natural input modes, such as speech and hand gesture, in a coordinated manner.
Figure 1. System architecture of VECA (the CAD system and data translation plug-in; input/output devices such as mouse, keyboard, CyberGlove, microphone, display, and 3D shutter glasses; multimodal interaction; and a collaborative virtual assembly layer comprising motion guidance, the constraint manager, and communication & collaboration over scenegraph management in the virtual environment)
3. HLA-based Communication and
Collaboration
The HLA standard [16] is a general architecture for
simulation reuse and interoperability developed by the
US Department of Defense. The conceptualization of
HLA led to the development of the Run-Time
Infrastructure (RTI). This software implements an
interface specification that represents one of the
tangible products of the HLA. Some concepts and
terms in HLA are introduced as follows. “Federation”
is defined as a group of “federates” forming a
community. Federates may be simulation models, data
collectors, simulators, autonomous agents or passive
viewers. A simulation session, in which a number of
federates participate, is called a “federation execution”.
Simulated entities, such as tanks, aircrafts or
subassemblies, are referred to as “objects”. “Attribute”,
referred to as object state, can be passed from one
object to another. Objects interact with each other and
with the environment via “interactions”, which may be
viewed as unique events, such as manipulation of the
object, or a collision between objects. Initially, an
attribute is controlled by the federate that instantiated
the object. However, attribute ownership may change
during the execution. This mechanism allows users to
co-manipulate the same object, for example. All
possible interactions among the federates of a
federation are defined in a so-called “Federation Object
Model” (FOM). The capabilities of a federate are
defined in a so-called “Simulation Object Model”
(SOM). The SOM is introduced to encourage reuse of
simulation models [17].
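To make these HLA concepts concrete, the following sketch shows how a VA node might create and join a federation through the C++ API of an RTI 1.3 implementation such as DMSO_RTI (used by VECA, see Section 7). It is a minimal illustration only: the federation name, FED file, and federate names are assumptions, and VECA itself accesses the RTI indirectly through DVE_FM, described next, rather than through raw RTI calls.

    #include <RTI.hh>                    // RTI 1.3 C++ API
    #include <NullFederateAmbassador.hh> // no-op defaults for RTI callbacks

    // Callback sink for RTI events (discovered objects, reflected
    // attributes, received interactions, ...); only the callbacks of
    // interest need overriding.
    class VaFederateAmbassador : public NullFederateAmbassador {
        // e.g. override receiveInteraction() to handle
        // CCooperationInteraction / CConstraintFeedbackInteraction.
    };

    int main() {
        RTI::RTIambassador rtiAmb;
        VaFederateAmbassador fedAmb;

        // The first federate creates the federation execution;
        // "VECA.fed" (an assumed file name) describes the FOM.
        try {
            rtiAmb.createFederationExecution("VECA", "VECA.fed");
        } catch (RTI::FederationExecutionAlreadyExists&) {
            // another VA node created it first; just join below
        }

        // Join the federation as one VM federate.
        rtiAmb.joinFederationExecution("VMFederate1", "VECA", &fedAmb);

        // ... publish/subscribe object classes and interactions, then
        //     drive the simulation loop, calling rtiAmb.tick() to
        //     deliver callbacks ...

        rtiAmb.resignFederationExecution(
            RTI::DELETE_OBJECTS_AND_RELEASE_ATTRIBUTES);
        return 0;
    }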
DVE_FM [18] is an application framework based on the interface specification of HLA. It provides universal functions for distributed interactive simulation and insulates the application from direct invocation of the RTI. DVE_FM reduces the complexity of developing HLA-based simulation applications, so developers can focus on the application domain rather than on the complex interoperation between the simulation application and the RTI.
The HLA-based simulation framework of VECA is shown in Fig.2. Each VA node joins the federation as a federate, and each federate is developed on top of DVE_FM.
Figure 2. HLA-based simulation framework: VM federates 1 through n, each built on DVE_FM, connect to the Run-Time Infrastructure (RTI) over TCP/IP
During a federation execution, each federate updates the attributes of the objects (parts or subassemblies) that belong to it. VECA defines two types of interactions: CCooperationInteraction and CConstraintFeedbackInteraction. A CCooperationInteraction represents a federate's assembly/disassembly request. A federate that holds the ownership of a part or subassembly may receive more than one CCooperationInteraction for the same part (subassembly) at the same time; it selects one by a set of schedule rules, which resolves the competition and collaboration among multiple users. If the request satisfies the assembly constraints, the federate updates the position or attitude of the parts (subassemblies) and then informs the other federates through the RTI. Otherwise, the federate sends a CConstraintFeedbackInteraction to declare that the current operation is not allowed.
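The paper does not detail the schedule rules, but the arbitration step can be sketched as follows. An earliest-request-wins rule and all type and field names are assumptions made purely for illustration.

    #include <cstddef>
    #include <string>
    #include <vector>

    struct CooperationRequest {        // one received CCooperationInteraction
        std::string federateName;      // requesting federate
        std::string partId;            // target part or subassembly
        double      timestamp;         // when the request was issued
    };

    // Pick one request per part: here, the earliest-issued request wins.
    const CooperationRequest* schedule(
            const std::vector<CooperationRequest>& pending,
            const std::string& partId) {
        const CooperationRequest* winner = 0;
        for (std::size_t i = 0; i < pending.size(); ++i) {
            if (pending[i].partId != partId) continue;
            if (!winner || pending[i].timestamp < winner->timestamp)
                winner = &pending[i];
        }
        return winner;  // null if no request targets this part
    }
    // The winner's operation is then checked against the assembly
    // constraints; on failure the federate answers with a
    // CConstraintFeedbackInteraction instead of an attribute update.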
4. Motion Guidance
In VA, due to the lack of physical constraint information and the limited precision of location tracking, it is difficult for the user to control the motion of the assembling parts precisely with current virtual reality I/O devices [19]. It is therefore necessary to explore techniques that help the user manipulate the target assembling part freely and precisely.
VECA adopts motion guidance based on collision detection and assembly constraint recognition [20] to accomplish the precise location of parts (subassemblies). Collision detection is used to solve the interference problem between assembling parts, while assembly constraint recognition assists in capturing the user's intention.
The axis orientation constraint and the face match constraint are the two assembly constraints mainly studied, because other assembly constraints can be converted into combinations of these two, and a pair of assembly units needs to satisfy each of them only once to form a new subassembly. In order to simulate the actual assembly procedure, motion guidance first recognizes the axis orientation constraint and then recognizes the face match constraint during assembly constraint recognition.
The above describes the logical sequence of a single assembly from the angle of constraint recognition. Fig.3 describes the functions of motion guidance and precise location from the angle of a part's (subassembly's) motion state transitions during a single assembly (disassembly).
A single assembly process of a pair of assembly units can be divided into three phases: free motion state A, the axial motion state, and free motion state B, as shown in Fig.3. Free motion state A is the initial state of the assembly units; in the axial motion state, the assembly motion unit can only move along or rotate around the orientation axis under the axis orientation constraint; and in free motion state B, the assembly motion unit and the assembly base unit have composed a new subassembly and returned to free motion.
Note that the initial state of assembly is free motion state A, while that of disassembly is free motion state B, because disassembly is the reverse course of assembly. During assembly, the two assembly units are in the free motion state first, and the collision detection algorithm guarantees that no parts (subassemblies) collide with each other; similarly, it also avoids penetration between users and between users and objects. The axis orientation constraint recognition algorithm detects whether the axis orientation constraint is satisfied between the two assembly units; if it is, they perform an axis alignment operation via axial precise location and then enter the axial motion state, in which the axial collision detection algorithm guarantees interference-free axial motion and detects possible design defects. Axial collision detection extends general collision detection, which cannot distinguish between contact and collision, by examining the normals of the pieces where the collision happens. The axial motion solving algorithm maps the user's inputs to a translation of a certain length along the orientation axis.
The axis orientation constraint relief algorithm detects the user's intention of relieving the axis orientation constraint. The face match constraint recognition algorithm detects whether the face match constraint is satisfied between the two assembly units; if it is, the units are successfully assembled together into a new subassembly and enter free motion state B, in which the face match constraint relief algorithm detects the user's intention of relieving the face match constraint (namely, the intention to disassemble). Fig.4-Fig.6 show an example of motion guidance.
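As a small worked example of axial motion solving (the vector type and function names below are assumed): the user's free 3D displacement is projected onto the unit orientation axis, so only its axial component moves the part.

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Map a free 3D input displacement to a constrained translation
    // along the (unit-length) orientation axis: keep only the axial
    // component of the input.
    Vec3 solveAxialMotion(const Vec3& input, const Vec3& axisUnit) {
        double length = dot(input, axisUnit); // signed travel along axis
        Vec3 out = { axisUnit.x * length,
                     axisUnit.y * length,
                     axisUnit.z * length };
        return out;
    }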
Figure 3. State transition of a part (subassembly) in a single assembly. Satisfying the axis orientation constraint moves free motion state A to the axial motion state (with axial precise location); satisfying the face match constraint moves the axial motion state to free motion state B (with face match precise location); relieving a constraint reverses the corresponding transition. Motion guidance applies collision detection and axis orientation constraint recognition in state A; axial collision detection, axial motion solving, axis orientation constraint relief, and face match constraint recognition in the axial motion state; and collision detection and face match constraint relief in state B.
Figure 4. P1 collides with P2
Figure 5. The axis orientation constraint is recognized
Figure 6. The face match constraint is recognized
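The transitions of Fig.3 can be summarized as a small state machine. The state and event names below follow the figure; the dispatch structure is an illustrative assumption, with the precise-location steps noted in comments.

    // Minimal sketch of the Fig.3 state machine; names follow the
    // figure, the structure is assumed for illustration.
    enum MotionState { FREE_MOTION_A, AXIAL_MOTION, FREE_MOTION_B };

    enum GuidanceEvent {
        AXIS_CONSTRAINT_SATISFIED,  // axis orientation constraint recognized
        AXIS_CONSTRAINT_RELIEVED,   // user pulls the unit off the axis
        FACE_CONSTRAINT_SATISFIED,  // face match constraint recognized
        FACE_CONSTRAINT_RELIEVED    // disassembly intention detected
    };

    MotionState transition(MotionState s, GuidanceEvent e) {
        switch (s) {
        case FREE_MOTION_A:
            if (e == AXIS_CONSTRAINT_SATISFIED) return AXIAL_MOTION;
            break;                  // axial precise location happens here
        case AXIAL_MOTION:
            if (e == FACE_CONSTRAINT_SATISFIED) return FREE_MOTION_B;
            if (e == AXIS_CONSTRAINT_RELIEVED)  return FREE_MOTION_A;
            break;                  // face match precise location on exit
        case FREE_MOTION_B:
            if (e == FACE_CONSTRAINT_RELIEVED)  return AXIAL_MOTION;
            break;                  // disassembly re-enters axial motion
        }
        return s;                   // no transition for this event
    }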
5. Data Translation
This module includes two functions: model translation and information extraction. The former translates the parametric model created in Pro/Engineer into a triangle mesh model (Multigen OpenFlight) that can be used in VR. The latter extracts the orientation axes, match faces, and assembly constraints for motion guidance. The module is a plug-in for Pro/Engineer developed with Pro/Toolkit and the Multigen OpenFlight API.
The assembly hierarchy in Pro/Engineer is shown in
Fig.7. The assembly consists of a series of
subassemblies and parts, and a part consists of several
features. Surfaces and geometry items (such as axis,
dimension) constitute a feature. OpenFlight likewise uses a multilevel hierarchy to define the organization of the objects in the database (see Fig.8).
At the top level is the database node (Node is a generic
term for any record in the hierarchy). At the lowest
level are objects made up of polygons (fltFace), which
are, in turn, made up of vertices. Between these two
levels are a number of different types of organizational
nodes that are attached to each other to organize the
database. An assembly, subassembly, or part in
Pro/Engineer is represented by a DOF (degree of
freedom) node (fltDof) in OpenFlight. A feature
corresponds to a group node (fltGroup), and a surface
or an axis corresponds to an object node (fltObject).
The assembly constraints are stored in the comment field of the OpenFlight model.
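The mapping from the Pro/Engineer hierarchy to OpenFlight nodes, with the assembly constraints serialized into the comment field, might look like the sketch below. Plain C++ structures stand in for the Pro/Toolkit and OpenFlight API records, and every name and the constraint text format are illustrative assumptions, not the plug-in's actual code.

    #include <string>
    #include <vector>

    // Plain stand-ins for OpenFlight records (fltDof, fltGroup,
    // fltObject); the real plug-in uses the OpenFlight API instead.
    struct FltNode {
        std::string type;            // "fltDof", "fltGroup", "fltObject"
        std::string name;            // part, feature, or geometry name
        std::string comment;         // assembly constraints live here
        std::vector<FltNode> children;
    };

    // Serialize one assembly constraint into the part's comment field,
    // e.g. "ALIGN axis_1 axis_2" or "MATE surf_3 surf_7" (format assumed).
    void storeConstraint(FltNode& dof, const std::string& kind,
                         const std::string& refA, const std::string& refB) {
        dof.comment += kind + " " + refA + " " + refB + "\n";
    }

    // Build the OpenFlight subtree for one Pro/Engineer part:
    // fltDof (part) -> fltGroup (feature) -> fltObject (axis/surface).
    FltNode translatePart(const std::string& partName) {
        FltNode axis;  axis.type  = "fltObject"; axis.name  = "axis_1";
        FltNode group; group.type = "fltGroup";  group.name = "feature_1";
        group.children.push_back(axis);
        FltNode dof;   dof.type   = "fltDof";    dof.name   = partName;
        dof.children.push_back(group);
        storeConstraint(dof, "ALIGN", "axis_1", "mate_axis"); // assumed
        return dof;
    }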
Figure 7. Assembly hierarchy in Pro/Engineer: an assembly contains subassemblies and parts (each with a ProCsys), a part contains ProFeatures, and a feature contains geometry items such as ProAxis and ProSurface entries
Figure 8. Assembly hierarchy in OpenFlight: the database node (db) roots fltDof nodes for the assembly, subassemblies, and parts; each part's fltDof holds fltGroup nodes (features) and an fltObject (Csys); each feature holds fltObject nodes (axes and surfaces); and surfaces are made up of fltFace polygons and their vertices
6. Multimodal Interaction
VECA allows users to interact with the VE through multiple modalities such as speech and gesture. The multimodal interface model of VECA is shown in Fig.9. A key element in understanding multimodal user input is reference resolution [21]. VECA employs a reference resolution approach for virtual environments (RRVE) [22]. RRVE improves on the approach of Kehler [23] and adopts the SenseShapes method [24][25] to disambiguate object selection in 3D interaction.
RRVE divides the objects in the VE into four state spaces: gestured objects, which correspond to the status “point”; the most recently selected object, which corresponds to the status “in focus”; unselected but visible objects, which correspond to the status “activated”; and unselected, invisible objects, which correspond to the status “extinct”. The sequence of the cognitive hierarchy is: point > in focus > activated > extinct. In RRVE, reference resolution is a match operation between candidate objects and the reference constraints obtained from speech. There are two types of match operations: “don't conflict” and “meet entirely”; the former is a necessary condition for the latter. For example, if an object has the attribute set {name=“M4-1”, type=“bolt”, feature=“hexagon”} and the user says “select the red bolt”, the result of “don't conflict” is true and the result of “meet entirely” is false.
In RRVE, the SenseShape is a cone attached to a finger of the virtual hand (see Fig.10). It provides valuable statistical rankings of gestured objects through collision detection. For example, the time ranking of an object is derived from the fraction of time the object spends in the cone over a specified time period: the more time the object is in the cone, the higher the ranking. The point queue is a priority queue of point objects whose priority is a weighted average of these statistical rankings. The activated queue is a priority queue of activated objects whose priority is determined by the distance between the object and the current viewpoint. The extinct vector is a vector of extinct objects with no such ordering.
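As a small illustration of the time ranking (the sampling scheme and names are assumed), the ranking is simply the fraction of sampled frames in which the object lay inside the cone:

    #include <cstddef>
    #include <vector>

    // Time ranking of a gestured object: the fraction of sampled frames
    // in which the object was inside the selection cone over a time
    // window. A higher fraction means a more likely pointing target.
    double timeRanking(const std::vector<bool>& inConePerFrame) {
        if (inConePerFrame.empty()) return 0.0;
        int hits = 0;
        for (std::size_t i = 0; i < inConePerFrame.size(); ++i)
            if (inConePerFrame[i]) ++hits;
        return static_cast<double>(hits) / inConePerFrame.size();
    }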
The algorithm of RRVE is as follows (a code sketch follows the list).
1. Get the reference constraints from speech;
2. Get the point object priority queue by the SenseShapes method;
3. If a point object doesn't conflict with the reference constraints, choose that object;
4. Otherwise, if the focus object doesn't conflict with the reference constraints, choose that object;
5. Otherwise, if an activated object doesn't conflict with the reference constraints, choose that object;
6. Otherwise, if an extinct object meets all the reference constraints, choose that object.
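A compact sketch of this cascade under assumed data structures follows; the two match operations implement the “don't conflict” and “meet entirely” definitions above, and all names are illustrative.

    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    typedef std::map<std::string, std::string> AttrSet; // e.g. {"type","bolt"}

    struct Object { std::string id; AttrSet attrs; };

    // "Don't conflict": no constraint contradicts an attribute the
    // object actually has (missing attributes are not a conflict).
    bool dontConflict(const Object& o, const AttrSet& constraints) {
        for (AttrSet::const_iterator c = constraints.begin();
             c != constraints.end(); ++c) {
            AttrSet::const_iterator a = o.attrs.find(c->first);
            if (a != o.attrs.end() && a->second != c->second) return false;
        }
        return true;
    }

    // "Meet entirely": every constraint is matched by an object attribute.
    bool meetEntirely(const Object& o, const AttrSet& constraints) {
        for (AttrSet::const_iterator c = constraints.begin();
             c != constraints.end(); ++c) {
            AttrSet::const_iterator a = o.attrs.find(c->first);
            if (a == o.attrs.end() || a->second != c->second) return false;
        }
        return true;
    }

    // The RRVE cascade: point queue > focus object > activated queue >
    // extinct vector, with the loose test for visible objects and the
    // strict test for extinct ones. Returns null if nothing matches.
    const Object* resolve(const std::vector<Object>& pointQueue,
                          const Object* focus,
                          const std::vector<Object>& activatedQueue,
                          const std::vector<Object>& extinctVector,
                          const AttrSet& constraints) {
        for (std::size_t i = 0; i < pointQueue.size(); ++i)
            if (dontConflict(pointQueue[i], constraints))
                return &pointQueue[i];
        if (focus && dontConflict(*focus, constraints)) return focus;
        for (std::size_t i = 0; i < activatedQueue.size(); ++i)
            if (dontConflict(activatedQueue[i], constraints))
                return &activatedQueue[i];
        for (std::size_t i = 0; i < extinctVector.size(); ++i)
            if (meetEntirely(extinctVector[i], constraints))
                return &extinctVector[i];
        return 0;
    }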
This algorithm simply chooses the most salient entity that satisfies the reference constraints, regardless of the type of referential form. We found that when users interacted with visible objects, the accuracy of reference resolution was higher when the match condition was looser. When users attempt to interact with an extinct object, most of their cognitive resources are focused on the speech channel, so the information from speech should be weighted heavily; the strict “meet entirely” match operation is therefore used.
Figure 9. Multimodal interface model: voice commands pass through speech recognition and speech parsing to yield the command type and reference constraints (as XML elements and attributes); sensor data pass through gesture recognition and gesture parsing to yield point gestures; multimodal integration then consults the point queue, the focus object, the activated queue, and the extinct vector to produce an executed command or a hint
Figure 10. Senseshape in VECA
7. Experiments
VECA is implemented in C++ using Microsoft Visual C++ 6.0. The VR platform is OpenSceneGraph 0.9.6-2, and the speech recognition engine is Microsoft Speech SDK 5.1. The distributed interactive simulation platform consists of DMSO_RTI 1.3v6 and DVE_FM v1.0. The CAD software is Pro/Engineer 2001, and the application development uses Pro/Toolkit and the Multigen OpenFlight API. VECA supports a variety of VR peripherals, such as the CyberGlove and 3D shutter glasses.
The experimental environment is illustrated in Fig.11. One PC is the Pro/E server and four PCs join the simulation. Users download triangle mesh models, which include the corresponding information, from the Pro/E server. They then use the simulation application (VM federates 1 to 3) to perform the assembly task collaboratively through the CyberGlove, a microphone, a mouse, or a keyboard. DVE_StealthAndPlayer can record and replay the whole course of the VA simulation. The system runs over 100M Ethernet through a switch that supports multicast. The experiment shows that VECA provides a natural and effective simulation environment, and that HLA-based communication and collaboration effectively resolve the problem of cooperation and competition among multiple users. Fig.12-Fig.15 illustrate a single assembly course of a part.
Figure 11. Experimental environment: the Pro/E server and VM federates 1-3, plus VM federate 4 (DVE_StealthAndPlayer), connected over a 100M Ethernet network through a switch supporting multicast
The experiment shows that VECA can effectively solve the problem of competition and collaboration among multiple users and can help users locate a part (subassembly) freely and precisely. VECA has been successfully applied to the design of a certain type of product.
Figure 12. Federate 1 and federate 2 strive for the
ownership of a part
Figure 13. Federate 2 gets the part, and the part
collides with the subassembly
Figure 14. Federate 2’s intention of assembly is
recognized
Figure 15. The assembly is finished
8. Conclusions
This paper presents the Virtual Environment for Collaborative Assembly (VECA), which can build a collaborative virtual assembly system that allows geographically dispersed engineers to perform an assembly task together, meeting the requirement for virtual assembly based on a distributed virtual environment. VECA mainly includes HLA-based (High Level Architecture) communication and collaboration, motion guidance based on collision detection and assembly constraint recognition, data translation from CAD to the virtual environment, and reference resolution in multimodal interaction. Because VECA can currently only translate data from CAD to the virtual environment, returning feedback to CAD is left for future work.
9. References
[1] Jayaram S., Connacher H., Lyons K., “Virtual Assembly
Using Virtual Reality Techniques”, Computer-Aided Design,
29(8), 1997, 575-584.
[2] Xiaobu Yuan, Simon X. Yang, “Virtual assembly with biologically inspired intelligence”, IEEE Transactions on Systems, Man and Cybernetics, Part C, 33(2), 2003, 159-167.
[3] Jianzhong Mo, Chi Cheng, Prabhu B.S. and Gadh
R., "On the Creation of a Unified Modeling Language
Based Collaborative Virtual Assembly/Disassembly
System" ASME 2003 Design Engineering Technical
Conferences and Computers and Information in
Engineering Conference, Chicago, Illinois, 2–6
September 2003.
[4] Michael R. Macedonia and Michael J. Zyda, “A taxonomy for networked virtual environments”, IEEE Multimedia, 4(1), January-March 1997, 48-56.
[5] Ling Chen, Gencai Chen, Hong Chen, Xiaolei Xu,
Chun Chen, “Study on Effects of Network
Characteristics on Cooperation Performance in a
Desktop CVE System” Computer Supported
Cooperative Work in Design, 2004, Volume 1, 26-28
May 2004, 121-126.
[6] Srinivasan H., Figueroa R., and Gadh R., “Selective
disassembly for virtual prototyping as applied to
demanufacturing” Journal of Robotics and Computer
Integrated Manufacturing, 15(3), 1999, 231-245.
[7] Luis Marcelino, Norman Murray, Terrence Fernando, “A Constraint Manager to Support Virtual Maintainability”, Computers & Graphics, 27, 2003, 19-26.
[8] Huagen Wan, Shuming Gao, Qunsheng Peng,
“VDVAS: An Integrated Virtual Design and Virtual
Assembly Environment” Journal of Image and
Graphics, 7A(1), 2002, 27-35.
[9] Jayaram S., Yong Wang, Jayaram U., Lyons K., Hart P., “A Virtual Assembly Design Environment”, Proc. IEEE Virtual Reality 99 Conf., IEEE CS Press, Los Alamitos, Calif., 1999, 172-179.
[10] Valerie D Lehner, Thomas A DeFanti,
“Distributed virtual reality: supporting remote
collaboration in vehicle design” Computer Graphics
and Applications, IEEE, 17(2), 1997, 13-17.
[11] Jianzhong Mo, Qiong Zhang, Rajit Gadh, “Virtual
Disassembly” International Journal of CAD/CAM,
2(1), 2002, 29-37.
[12] Shyamsundar N., Gadh R., “Internet-based
collaborative product design with assembly features
and virtual design spaces” Computer-Aided Design,
2001, 33(9), 637-651.
[13] Shyamsundar N., Gadh R., “Collaborative virtual
prototyping of product assemblies over the internet”
Computer-Aided Design, 34(10), 2002, 755-768.
[14] Aina Sui, Wei Wu, Xiaowu Chen, Qinping Zhao,
“Assembly Constraint Semantic Model In Distributed
Virtual Environment” Journal of Computer Research
and Development, 42, 2005.
[15] Xiaowu Chen, Aina Sui, Wei Wang, Yingchun
Huang, Nan Xu, Zhangsheng Pan, Hongchang Lin,
“VEADAM: A Collaborative Virtual Design and
Virtual Manufacture System” Journal of System
Simulation, 17(2), 2005, 371-374.
[16] IEEE Standard for Modeling and Simulation
(M&S) High Level Architecture (HLA) - Framework
and Rules (IEEE Std 1516-2000). Institute of Electrical
and Electronics Engineers, Inc., 2000.
[17] Xiaojun Shen, Hage R., Georganas N., “Agent-aided collaborative virtual environments over HLA/RTI”, Proceedings of the 3rd IEEE International Workshop on Distributed Interactive Simulation and Real-Time Applications, 22-23 Oct. 1999, 128-135.
[18] Zuoyi Duan, Wei Wu, “Research on Distributed-
Interactive-Simulation-Oriented Application
Framework” Third China Conference on Virtual
Reality and Visualization (CCVRV’2003), Changsha,
China, 2003, 234-237.
[19] Fa, M., Fernando, T., Dew, P.M., “Direct 3D
manipulation techniques for interactive constraint-
based solid modeling” Proceedings of the
Eurographics’93. Oxford: Blackwell Publishers, 1993,
237-248.
[20] Yingchun Huang, Xiaowu Chen, Wei Wu,
“Motion Guidance in Virtual Assembly Based on
Collision Detection and Constraints Recognition” Fifth
China Conference on Virtual Reality and Visualization
(CCVRV’2005), Beijing, China, 2005.
[21] Joyce Y. Chai, Pengyu Hong, Michelle X. Zhou, “A Probabilistic Approach to Reference Resolution in Multimodal User Interfaces”, Proceedings of the 2004 International Conference on Intelligent User Interfaces (IUI-2004), ACM, Madeira, Portugal, January 2004, 70-77.
[22] Nan Xu, Xiaowu Chen, Ying Li, “A Multimodal
Reference Resolution Approach for Virtual
Environment” Fifth China Conference on Virtual
Reality and Visualization (CCVRV’2005), Beijing,
China, 2005.
[23] Andrew Kehler, “Cognitive Status and Form of
Reference in Multimodal Human-Computer
Interaction” Proceedings of the Seventeenth National
Conference on Artificial Intelligence and Twelfth
Conference on Innovative Applications of Artificial
Intelligence, July 2000 - August 2000, 685-690.
[24] Olwal, A., Benko, H., Feiner, S., “SenseShapes:
using statistical geometry for object selection in a
multimodal augmented reality” Proceedings of the
Second IEEE and ACM International Symposium on
Mixed and Augmented Reality (ISMAR ’03), Tokyo,
Japan, 7-10 Oct. 2003, 300-301.
[25] Ed Kaiser, Alex Olwal, David McGee, Hrvoje
Benko, Andrea Corradini, Xiaoguang Li, Phil Cohen,
Steven Feiner, “Mutual Disambiguation of 3D
Multimodal Interaction in Augmented and Virtual
Reality” Proceedings of the 5th international
conference on Multimodal interfaces (ICMI’03), 2003,
12-19.