
    International Journal of Production Research

Vol. 49, No. 13, 1 July 2011, 3919–3938

RFID-assisted assembly guidance system in an augmented reality environment

J. Zhang, S.K. Ong* and A.Y.C. Nee

Department of Mechanical Engineering, Faculty of Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117576, Singapore

    (Received 24 November 2009; final version received 12 April 2010)

RFID technology provides an invisible visibility to the end user for tracking and monitoring any objects that have been tagged. Research on the application of RFID in assembly lines for overall production monitoring and control has been reported recently. This paper presents novel research on implementing RFID technology in the application of assembly guidance in an augmented reality environment. Aiming at providing just-in-time information rendering and intuitive information navigation, methodologies of applying RFID, infrared-enhanced computer vision, and inertial sensing are discussed in this paper. A prototype system is established, and two case studies are presented to validate the feasibility of the proposed system.

Keywords: assembly guidance; augmented reality; RFID; 3D-to-2D point matching

    1. Introduction

    RFID (radio-frequency identification) technology has received wide attention owing to the

    significant decrease in the manufacturing cost of the required hardware driven by

    technological advances. On the application side, benefiting from its ability of providing

    one unique ID for each tag, RFID technology has been replacing the bar code technology

    in the tracking and localisation of assets, inventories and personnel in dwellings,

    industries, groceries, logistic facilities, nursing homes and hospitals. A typical RFID

    system consists of readers, antennas and tags. According to the application requirements,

there are two ways of applying RFID technology. The system may deploy RFID readers that indicate only the existence of RFID tags in a binary mode: present or absent. A more functional RFID reader can provide detailed information beyond this existence status, such as the direction and distance of the detected tag with respect to the

    reader. This allows the system to determine the location of each object with an

    attached tag. Hence, RFID technology provides an invisible visibility to the end users,

    allowing them to monitor the status of each object that has been tagged and respond

    to dynamic changes of these objects quickly according to properly designed management

    strategies.

Augmented reality (AR) is a technology that provides an intuitive interaction experience

    to the users by seamlessly combining the real world with the various computer-generated

    *Corresponding author. Email: [email protected]


    ARToolKit markers must be planar and relatively large to be recognised robustly using

    computer vision techniques, it is difficult to attach them onto small components or

    non-planar surfaces. In one case study of furniture assembly (Zauner et al. 2003), the

    assembly positions of some small components, such as screws which are too small for

    markers to be attached, were estimated based on the nearby markers that are attached on

the flat surfaces, so that the assembly information can be rendered at these estimated positions. A 3D assembly animation showing the correct way of installing the components

    was rendered after calculating the interpolation between the two transformation matrices

    according to the markers. Similarly, assembly components were stamped with markers in

    the research by Liverani et al. (2004). A binary assembly tree (BAT) structure was applied

    to facilitate information rendering. ARTag markers were employed by Hakkarainen et al.

    (2008) and Salonen et al. (2007) to achieve an assembly platform. Hakkarainen et al.

    (2008) reported a study on the possibility of using mobile telephones for an AR-assisted

    assembly guidance system. A client-server architecture was set up using a Nokia mobile

    telephone and a PC. Considering the limited processing capability of the mobile telephone,

the PC handles the rendering of complex CAD models, and static rendered images are sent to the mobile telephone for fast rendering.

    Researchers have investigated other approaches to identify assembly components.

    Pathomaree and Charoenseang (2005) applied computer vision-based tracking technology

    in an assembly skill transfer system under an AR system infrastructure. The assembly

    components are labelled by different colours and tracked using the CamShift method.

    Although the method is effective, identification of the poses and orientations of the

    assembly components was not considered in this research. Hence, assembly instructions

    and hints are only rendered in 2D. In the system developed by Yuan et al. (2008), the

    operator needs to identify the components according to a set of images of the components,

and actively enquire about the information related to the components.

In assembly data management, an assembly tree is usually applied to determine the

    hierarchical assembly sequences. For example, Liverani et al. (2004) developed the BAT

    structure for their assembly sequence check and validation system. Each tree node includes

    the name and description for the matching component. A rotation and translation matrix

    is included to indicate its spatial relationship with the sub-assembly that has been achieved

    earlier. Yuan et al. (2008) developed a visual assembly tree structure (VATS) for their

    assembly guidance system. Information such as the component name, description, and

    images can be interactively acquired and rendered to assist the assembly operations.

    An information authoring interface was also presented.

    Furthermore, research has been conducted on the interaction between the operator and

    the system. In the research by Yuan et al. (2008), an information panel is rendered on the

    screen or on an ARToolKit marker, and the operator would need to use a stylus with

    a colour feature as a pointer to activate the virtual buttons on the panel to request for

    the relevant assembly data. Zauner et al. (2003) implemented 2D screen information

    rendering, which includes both a schematic overview and the textual description of each

    component. Valentini (2009) focused on the interaction mechanism using a 5DT dataglove

    during virtual assembly in an AR environment. The research focused on the grasping and

    manipulation of virtual assembly components based on the identification of three typical

    manipulation gestures normally encountered during assembly operations, namely,

    grasping a cylinder, grasping a sphere, and pinching. Upon a successful grasping

operation during virtual assembly, two circumstances are considered, namely, moving an assembly component freely in the workspace or moving it under an assembly constraint.


    2.2 RFID applications in the assembly line

    Researchers have discussed the application of RFID in manufacturing, such as in the areas

    of personnel tracking, stock tracking, inventory monitoring, etc. (Li et al. 2004). Detailed

    discussions on RFID-assisted assembly monitoring and scheduling have been published

    more recently (Huang et al. 2007, 2008, Wang et al. 2010).

    Huang et al. (2007) presented a blueprint of a wireless manufacturing (WM)-enabled

    fixed-assembly-island-with-moving-operator assembly line. The authors emphasised that

    dynamic work-in-progress (WIP) information is essential in fixed-position assembly lines,

as substantial material and personnel movement exists and there is little time or staff to record the WIP information manually. Using a conceptual product and assembly

    workshop, the authors discussed the feasibility of applying WM, and specifically, the

    RFID or Auto ID technology. Details, such as the arrangement of the RFID readers and

    tags and the information explorers for different users, were discussed. Further research

and development studies were presented. In more recent research (Huang et al. 2008), the

    authors proved the concept that such a system can improve the efficiency of assembly

planning and scheduling.

Wang et al. (2010) presented the deployment of RFID readers and tags to track

    assembly components on a flexible assembly line. The RFID readers are arranged as a grid

    at certain positions on the assembly line in order to track any passing components

    attached with RFID tags. Two tracking methods were discussed, namely, range-based and

    range-free localisation. For range-based localisation, information such as distance, RSSI

(received signal strength indication), or angular data is accessed from the reader. As the authors indicated, such readers would make a high-density grid quite costly. The second method considers localisation as a maximum probability

    problem, which can be solved by inferring the most probable location after listing those

readers that have detected a tag of interest. A particle filter is applied to improve the localisation performance. As a result, the research reported a tracking accuracy of w/2,

    where w is the grid size (such as 5 cm and 10 cm in the research).

    2.3 Summary and research scope

AR-based assembly guidance systems have relied on ARToolKit markers to identify

    the assembly components and perform 2D and 3D information rendering. However,

    ARToolKit markers have limited applications on small components or components which

    lack large planar surfaces. On the other hand, though 2D information rendering can

facilitate clear information perception for the operators, 3D dynamic information, such as the rendering of CAD models in the assembly workspace, would be of significant benefit as

    it can provide the operators with the correct orientation and assembly direction of the

    components. Furthermore, in hierarchical assembly sequences, the assembly sequence may

    vary depending on the next component or sub-assembly to be assembled. In this case, a

    permutation of all possible assembly video clips must be generated and stored as data files

so they can be rendered according to the operator's activity. Recording these assembly

    videos will be tedious and time-consuming. It will be more efficient to render a 3D model

    of the component to be assembled onto the real sub-assembly. Hence, component

    recognition and tracking methods not relying on markers would need to be developed for

an efficient AR-based assembly guidance system and to promote a broader range of applications. In this paper, a model-based object tracking approach was implemented


    4. Methodologies

    4.1 Overview

Aiming at achieving successful assembly under non-obstructive guidance with interactive information rendering, three approaches have been investigated and implemented in the proposed system. Figure 3 presents the overall system flow chart. Firstly, an assembly information management scheme is applied throughout the system operating cycle to realise interactive and assembly-centric information rendering. Secondly, the assembly process is detected using the AAD (assembly activity detector). Data from the accelerometer sensor and the RFID reader in the AAD is fed to the system (A in Figure 3) to identify the assembly component that is being handled and the information requested by the operator. The operator can also acknowledge the completion of an assembly operation using the AAD. Thirdly, to achieve interactive 3D information rendering, the camera pose should be estimated in real time. The convex hull theory is applied to improve the 3D-to-2D point matching performance for camera pose estimation (B in Figure 3). According to the assembly process reported from the AAD, the information management scheme will update the 3D feature points based on the given CAD models in the knowledge base, and a 3D convex hull will be constructed. On the other hand, fiducials are applied to assist the detection of 2D feature points in the assembly workspace.

[Figure 3. The system flow chart. Branch A detects the tag, gets the tag-associated panel, reads (pitch, roll) from the hand movement sensor, renders and manipulates the panel, and records the assembly activity, looping until the current task is finished. Branch B updates the corner feature list, reconstructs the 3D convex hull, determines the horizon, calculates the 2D convex hull, sets up the 3D-2D correspondence, estimates the camera pose, and renders the 3D information. The loop continues until all tasks are finished.]


    Camera pose estimation is achieved upon the successful 3D-to-2D point matching.

    A few iterations between the camera pose calculation and the point matching are needed

    to obtain an optimal estimation of the camera pose. The system will record the assembly

    sequence performed by the operator, the information requested to assist the assembly, and

the assembly performance measures, such as the overall assembly time, the number of times the operator picks a wrong component, etc. The following sections will discuss the details of each of these three approaches.

    4.2 Assembly information management

    An assembly tree structure applied in the prototype system is defined as follows.

An assembly component, denoted as C_i (i = 0, 1, ..., N-1, where N is the total number of assembly components), will be assembled following either the base component C_0 or a sub-assembly. The node representing this component in the assembly tree consists of an array of information, including the ID of the RFID tag that is attached to this component, an image and the name of the component, a video clip of the assembly action, a CAD model of the component defined in the component coordinate system (CoCS_i), a list of its corner features defined in CoCS_i that have been physically marked by fiducials and denoted as pt_ij (j = 0, 1, ..., n_i-1, where n_i is the total number of such feature points), a transformation matrix, denoted as M_i^{3x4} = [R_i^{3x3} | T_i^{3x1}], to indicate the spatial relationship between CoCS_i and CoCS_0 after the component has been assembled, and a list of components that

    can be assembled after the assembly of this component and denoted as the succeeding

    candidates. All the information is stored in a text data file, which will be imported with the

    relevant files (images, CAD models, videos, etc.) before the assembly guidance process

    starts. Some of the above-listed information can be rendered easily on the screen at certain

fixed positions, such as the component name, the component image, and the assembly video clip. A virtual information panel structure, namely the VirIP (Yuan et al. 2008), is

    adopted in the prototype system. The 3D CAD models of the assembly components will be

    rendered in an appropriate position and orientation with respect to the assembly base

    component.
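To make the node layout concrete, here is a minimal Python sketch of such an assembly tree node; the class name, field names, and types are illustrative assumptions rather than the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class AssemblyNode:
    """One node of the assembly tree, holding the per-component data listed above."""
    tag_id: str                       # ID of the RFID tag attached to this component
    name: str                         # component name
    image_file: Optional[str]         # image of the component
    video_file: Optional[str]         # video clip of the assembly action
    cad_model_file: str               # CAD model defined in CoCS_i
    corner_features: np.ndarray       # (n_i, 3) fiducial-marked corners pt_ij in CoCS_i
    R: np.ndarray = field(default_factory=lambda: np.eye(3))    # R_i: rotation CoCS_i -> CoCS_0
    T: np.ndarray = field(default_factory=lambda: np.zeros(3))  # T_i: translation CoCS_i -> CoCS_0
    successors: List["AssemblyNode"] = field(default_factory=list)  # succeeding candidates
```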

    For any sub-assembly that has been completed, a 3D point set can be calculated

    according to the corner feature lists of each assembly component in the sub-assembly, and

    this 3D point set will facilitate the 3D-to-2D point matching. For example, in the sub-

    assembly {C0C1 Ci}, the corner features onCican be translated into CoCS0using

    Equation (1), whereRiand Tiare given in the assembly tree. Similarly, the corner features

    on each of the components involved can be translated into CoCS0. Some of these new

    corner features may need to be removed from the list. For example, if two or more of the

    new corner features have identical coordinates, they are considered as overlapped feature

    points, and thus only one is needed in the list while the rest should be deleted. Lastly, the

    corner features from all the assembly components involved in the sub-assembly will form

    a new corner feature list for the sub-assembly as a 3D point set, and its convex hull can

    be calculated. The incremental algorithm (De Berg et al. 2008) is applied in this research

    to calculate the 3D convex hull.

$$
\begin{pmatrix} pt'_{ij}.x \\ pt'_{ij}.y \\ pt'_{ij}.z \\ 1 \end{pmatrix}
=
\begin{pmatrix} R_i & T_i \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} pt_{ij}.x \\ pt_{ij}.y \\ pt_{ij}.z \\ 1 \end{pmatrix}
\qquad (1)
$$
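As a sketch of how Equation (1) and the hull construction might be combined (assuming the node layout sketched above; SciPy's Qhull-based ConvexHull stands in for the incremental algorithm of De Berg et al. (2008)):

```python
import numpy as np
from scipy.spatial import ConvexHull  # Qhull, standing in for the incremental algorithm

def to_base_frame(points: np.ndarray, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply Equation (1): map corner features pt_ij from CoCS_i into CoCS_0."""
    return points @ R.T + T  # the homogeneous [R_i T_i; 0 1] transform, applied row-wise

def subassembly_point_set(components):
    """Merge the corner features of {C_0, C_1, ..., C_i} into one 3D point set,
    drop overlapped (identical) points, and build the set's 3D convex hull."""
    merged = np.vstack([to_base_frame(c.corner_features, c.R, c.T) for c in components])
    merged = np.unique(merged.round(6), axis=0)  # keep one copy of overlapped corners
    return merged, ConvexHull(merged)
```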

    3926 J. Zhanget al.

  • 8/9/2019 assembly assistance in AR environment.pdf

    9/21

During the assembly, upon the operator's acknowledgement that the current task has been accomplished, the information management scheme in the system will provide

    the images of the succeeding candidates in the VirIP. Upon the selection of one component

    from the candidates, the relevant instructional information will be rendered to assist the

    operator to complete the next assembly task. 3D rendering of a virtual component will also

    be achieved based on camera pose estimation. If the operator picks a wrong component,

    a warning message will be rendered. After the operator finishes the current assembly step,

    s/he will send an acknowledgement to the system, and this loop will continue until the end

    of the assembly. The assembly information management scheme also records the assembly

    activities. Activity data that can be recorded includes the sequence of the components

    selected, the handling time for each component, the frequency of wrong components

    picked when the images of the candidates are listed, etc.
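The management cycle described in this section could be sketched as follows; `read_tag` and the `render` interface are hypothetical placeholders for the RFID link and the VirIP rendering, not the authors' code:

```python
import time

def guidance_loop(base, read_tag, render):
    """Sketch of the information management cycle: list succeeding candidates,
    validate the picked component by its RFID tag, and record assembly activity."""
    log, current = [], base            # start from the base component C_0
    while current.successors:
        render.show_candidates([c.image_file for c in current.successors])
        start, wrong_picks = time.time(), 0
        while True:
            tag = read_tag()           # next tag ID reported by the wireless RFID reader
            picked = next((c for c in current.successors if c.tag_id == tag), None)
            if picked is not None:
                break                  # correct component: render its panel and CAD model
            wrong_picks += 1
            render.warn("wrong component picked")
        render.show_panel(picked)
        render.wait_for_acknowledgement()   # operator activates the 'Finished?' button
        log.append({"component": picked.name,
                    "handling_time_s": time.time() - start,
                    "wrong_picks": wrong_picks})
        current = picked
    return log
```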

    4.3 AAD and virtual information panel

The AAD is designed to be wearable and lightweight for the assembly operators.

    It should be restriction-free with no wires to hinder the assembly operations, and it should

    not disturb the bending or grasping behaviour of the fingers or the hands. To meet these

    requirements, a wireless hand-attached device was developed as the AAD. The detector

    includes two parts, an RFID reader and a hand movement detector, to be worn on both

    hands of the operator, and they will detect the above-mentioned two user activities

    respectively. Both units can perform wireless data communication using ZigBee and

    Bluetooth technology. The installation of the two units will depend on the handedness of

    the operator. Normally, the wireless RFID reader will be installed on the dominant hand

which the operator normally uses to handle assembly components, while the hand movement detector can be installed on the other hand as it will be free from time to time

    to navigate the rendered information. A pair of cycling gloves was used in the prototype

    system to mount the AAD on both hands.

    Figure 4(a) shows an example of an implementation scene, where the sensor unit

    is mounted on the left hand (Figure 4(b)), and the wireless RFID reader on the right

    (Figure 4(c)). In the prototype system, SHAKE SK7, a sensor unit incorporating several

sensors, including an inertial sensor, a magnetic sensor, etc., was applied as the hand movement

    detector. The wireless RFID reader was developed using an Arduino Pro Mini 8 MHz,

an XBee module (Digi n.d.), a 125 kHz RFID reader ID-2, and an external antenna. The

antenna is small in size (about 18 mm in diameter), which enables it to be enclosed in the palm without disturbing any assembly operations. Figure 4(d) shows the wireless RFID

    reader with its power supply, a pack of eight rechargeable AA batteries, and the external

    antenna.

    Figure 5 shows the block diagram of the wireless RFID reader with the data

    communication flow. Power regulators are included to regulate the input voltage to 5 V

    and 3.3 V, respectively, to support the three chips. The RFID reader will detect RFID tags

within a range of up to 5 cm, and send the IDs to the microcontroller in the reader-

    specified format. The microcontroller will decode and send the IDs to the XBee module.

    Finally, another XBee module connected to the computer will receive the IDs. In the

assembly guidance system, when a tag ID is received by the system, the system will compare the ID with the ID(s) of the succeeding candidate(s). If the two IDs match,


    the system will render the information of the component detected, and the starting time

    of this assembly step will be recorded.
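On the host side, the XBee node typically appears as a serial port, so the ID-reception step could look like the following pyserial sketch; the port name, baud rate, and newline-terminated ASCII framing are assumptions:

```python
import serial  # pyserial: the host-side XBee node is exposed as a serial port

def tag_ids(port: str = "/dev/ttyUSB0", baud: int = 9600):
    """Yield tag IDs relayed by the wireless RFID reader, one per detection."""
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if line:                 # ignore timeouts and empty reads
                yield line

# Matching against the succeeding candidates, as described above:
# for tag in tag_ids():
#     if tag in {c.tag_id for c in current.successors}:
#         ...  # render the component's information and record the start time
```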

On the other hand, the navigation of the virtual information panel is facilitated by the hand movement detector mounted on the left hand. The hand movement detection is

[Figure 4. The assembly activity detector: (a) implementation scene; (b) the SK7 sensor unit on the left hand; (c) the wireless RFID reader on the right hand, with the external antenna (about 18 mm in diameter) enclosed in the palm; (d) the wireless RFID reader (Arduino Pro Mini, XBee, ID-2) with its battery pack and external antenna.]

[Figure 5. Block diagram of the wireless RFID reader: a power regulator converts V_in to 5 V and 3.3 V; the RFID reader detects the RFID tag and sends IDs over wired links to the microcontroller, which forwards them to the XBee node; a second XBee node connected to the computer receives the IDs wirelessly.]


    For each component, the information provided in the text file (Figure 6(b)) includes the

    index n, the tag ID, the component name, etc. Hence, the system will generate the first

button in its panel as a text button, showing the name. Next, the system will search the knowledge base for files named as Pn.*, and according to the file type, such as a .jpg file or an

    .avi file, the system will generate an image or video button for the component. Using the

    panel for the component A as an example, component A has only one image file in the

    knowledge base, and thus only two buttons are generated for its panel. Component B has

    been provided with one image file and one video file, and thus three buttons are generated

    for its panel. An acknowledgement button will be generated at the end of each panel,

showing the text 'Finished?'. If the operator activates this button, he acknowledges that

    he has completed the current assembly step.
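The panel-generation rule can be restated as a short sketch; the directory layout, file extensions, and button tuples are illustrative assumptions:

```python
from pathlib import Path

IMAGE_TYPES = {".jpg", ".png", ".bmp"}   # assumed image extensions
VIDEO_TYPES = {".avi", ".mpg"}           # assumed video extensions

def build_panel(kb_dir: str, index: int, name: str):
    """Generate the button list for component n: a text button with the name,
    one button per Pn.* file found in the knowledge base, and 'Finished?'."""
    buttons = [("text", name)]
    for f in sorted(Path(kb_dir).glob(f"P{index}.*")):
        if f.suffix.lower() in IMAGE_TYPES:
            buttons.append(("image", str(f)))
        elif f.suffix.lower() in VIDEO_TYPES:
            buttons.append(("video", str(f)))
    buttons.append(("acknowledge", "Finished?"))
    return buttons
```

For component A (one image file) this reproduces the two buttons plus the acknowledgement described above; for component B (one image and one video file), the three buttons plus the acknowledgement.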

    4.4 Model-based camera pose estimation

    As markers can be occluded easily and it is difficult to attach markers on small or

non-planar surfaces, they are not applied in this research. Instead, IR-enhanced computer vision technology is proposed. In the prototype system, the Dragonfly2 camera is

    equipped with several IR LEDs and an IR filter X-Nite 715 (Figure 7(a)), and non-

    obstructive reflective tapes are pasted on the feature points of each assembly component.

    These tapes are not highly distinguishable by a normal camera or the naked eye. However,

    they show as points with strong intensities in the images captured using an IR-sensitive

    camera, which in turn facilitates feature point detection in the image plane as a 2D point

set (Figure 7(b)). The coordinates of these corners with respect to CoCS_0 are used to generate a 3D point set and its convex hull, as has been discussed in Section 4.2.
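A minimal OpenCV sketch of this detection step, assuming an 8-bit grayscale frame from the IR-filtered camera and an empirically chosen intensity threshold:

```python
import cv2
import numpy as np

def detect_ir_points(frame_gray: np.ndarray, thresh: int = 220, min_area: int = 3) -> np.ndarray:
    """Return the 2D point set: centroids of the high-intensity blobs produced
    by the reflective tapes in the IR-filtered image."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # label 0 is the background; drop sub-min_area noise blobs
    keep = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return centroids[keep]  # (M, 2) array of image-plane feature points
```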

[Figure 7. IR-enhanced feature point detection: (a) the Dragonfly2 camera with the IR transmitter board, IR LEDs, and X-Nite 715 filter; (b) detected feature points in the captured image.]


    An assumption applied in this research is that these corner features are rigid in that their

    internal spatial relationship will not change during the entire assembly process. This is a

    realistic assumption as the majority of the assembly components are rigid and their shapes

    and sizes do not change significantly.

    Given a 3D point set, a 2D point set, and a point matching algorithm between them, the

    camera pose can be estimated, and the assembly information can be rendered in 3D.

    RANSAC is a popular algorithm for 3D-to-2D point matching. To obtain the best point

    match, it randomly picks four point matches as an initial valid match and validates the

match iteratively by eliminating the outliers. However, given a 3D point set with N points and a 2D point set with M points (M ≤ N), the initial search space is big. Gu et al. (2008)

    applied a quick convex hull-based 3D-2D point matching using the topology restrictions to

    reduce the search space. They proved that given a set of 3D points and their projections on a

2D plane as a set of 2D points, an m-sided 2D convex hull must correspond to an m-sided polygon, which is a closed connection of m boundary points, on the 3D convex hull. A

    horizon-validation method was applied to trace the m-sided polygon, where the horizon is a

3D polygon on the 3D convex hull that will be projected onto the image plane as the 2D convex hull. The authors proved that using these two approaches can reduce the point set

    searching effort, and experiments were conducted to validate the effectiveness and efficiency

    of this method. However, successful point matching using this method requires that all the

    3D points projected onto the 2D image plane can be detected successfully.

    Hence, in this research, the method developed by Gu et al. (2008) was improved by

implementing the RANSAC method (Fischler and Bolles 1981) for point matching. An efficient camera pose estimation method, namely the EPnP method (Lepetit, Moreno-Noguer, and Fua 2009), is applied in the prototype system to

    estimate the camera pose. The approach of implementing the improved method in the

    assembly guidance system is described as follows.

(1) Assume a sub-assembly {C_0, C_1, ..., C_i} has been achieved. The operator picks the component C_j, which is one of the succeeding candidates according to the information provided. Thus, the system should render a virtual 3D model of C_j onto the completed sub-assembly with the correct position and orientation.

(2) According to the information provided in the knowledge base, a corner feature list for the sub-assembly {C_0, C_1, ..., C_i} and its 3D convex hull can be generated (Section 4.2). Assume there are N 3D feature points on the sub-assembly, constituting an N-size point set A.

(3) At the same time, a 2D point set B can be detected in the image plane as a subset of projected A. A 2D convex hull can be constructed for point set B. Assume there are m 2D points forming the convex hull as an m-sided polygon.

(4) The camera pose from the last image frame is validated based on point sets A and B according to the re-projection error E using Equation (2), assuming that k pairs of 3D-to-2D point matches exist, and (u_e, v_e) and (u_m, v_m) are the estimated and

    measured 2D points projected onto the image plane, respectively. If the

    re-projection error is higher than a threshold value d, the 3D and 2D points

    must be matched again in step 5:

$$
E = \frac{1}{k}\sum_{i=1}^{k}\left\|\begin{pmatrix} u_{ei} \\ v_{ei} \end{pmatrix}-\begin{pmatrix} u_{mi} \\ v_{mi} \end{pmatrix}\right\|^{2}
\qquad (2)
$$
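OpenCV's solvePnP exposes an EPnP solver, so the pose validation of step (4) can be sketched as follows; the array shapes and threshold handling are assumptions rather than the authors' implementation:

```python
import cv2
import numpy as np

def validate_pose(pts3d: np.ndarray, pts2d: np.ndarray, K: np.ndarray):
    """Estimate the camera pose from k matched 3D-2D pairs with EPnP and
    compute the mean squared re-projection error E of Equation (2)."""
    pts3d = pts3d.astype(np.float32)          # (k, 3) matched 3D points, k >= 4 for EPnP
    pts2d = pts2d.astype(np.float32)          # (k, 2) measured image points (u_m, v_m)
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None, float("inf")
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)   # estimated (u_e, v_e)
    E = float(np.mean(np.sum((proj.reshape(-1, 2) - pts2d) ** 2, axis=1)))
    return (rvec, tvec), E   # if E exceeds the threshold d, re-match the points (step 5)
```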


    in the point set. Hence, a 3D convex hull with a total of 12 points is constructed.

    Given one hypothesised camera pose, the horizon (with six points) and the visible point

    sets (with nine points) can be established. Figure 9(b) shows the six-sided 2D convex hull,

    and Figure 9(c) shows the rendering of the next component on the sub-assembly.

    5. Case studies

    5.1 3D puzzle assembly

In this case study, a 3D puzzle set was assembled using the assembly guidance system

    that has been developed in this research. The operator was equipped with the AAD device,

a head-mounted camera, and an HMD. The VirIP was rendered in the screen coordinate

    system. Assuming that the operator handles the component with his/her right hand and

    the assembly workspace is on his/her left side, the 2D point set was generated only from

the left half image so as to avoid unwanted calculation from the feature points detected on the component handled by the right hand.

Figure 10(a) shows the start of the guidance process, where an image showing the

    base assembly component was rendered. After the operator picks the correct component,

    the component-associated panel will be rendered (Figure 10(b)). The operator navigates

    through the information panel by moving his/her left hand (Figure 10(c)), and after s/he

    reports that s/he has finished the current assembly step, the system will render the images

    of the succeeding candidates for him/her (Figure 10(d)). If the operator picks one

    component among the candidate list, the system will render the component-associated

    panel and the 3D CAD model of this component that has been picked to show the correct

assembly (Figure 10(e)). If the operator picks the wrong component, a warning message will be rendered to him/her (Figure 10(f)), and the panel will not be refreshed. The case

study shows that the proposed assembly guidance system, more specifically the AAD device and the point matching method, is feasible. It can guide an operator through the

    assembly sequence according to his/her activity, and the operator does not need any

    additional apparatus to navigate through the information provided. In this way,

    an intuitive assembly guiding course can be achieved.

    5.2 Computer mouse assembly

Another case study of a computer mouse was carried out using the proposed assembly guidance system. The computer mouse consists of five components, namely, the base, the

[Figure 9. 3D-to-2D point matching and rendering on a sub-assembly: (a) the 3D point set with visible, horizon, and overlapped points marked; (b) the 2D convex hull; (c) rendering of the next component on the sub-assembly.]


    PCB circuit board, two rollers and the cover. To prepare for the assembly of the computer

    mouse, these components were labelled with IR reflective tapes at certain points and

    attached with RFID tags as shown in Figure 11. Due to the small size of the rollers, small

commercial RFID tags were not available in the authors' laboratory for use on these two rollers; hence they were not attached with tags in this case study. The recognition of these two rollers was thus emulated by pressing certain keys on the keyboard. A knowledge base for the components of the computer mouse was prepared according to the discussion in Section 4.

[Figure 10. Case study of 3D puzzle assembly: (a)-(f) guidance scenes, including the warning message rendered for a wrong component in (f).]


    Figure 12 shows the scenes captured during this case study in the order of the assembly

    steps. In Figure 12, after the operator picks the correct component according to the

    pictures of succeeding candidates rendered in the VirIP, the CAD model of this component

    can be superimposed onto the current assembly, demonstrating to the operator the proper

    position and orientation of the component for the next assembly operation. The rendering

    was achieved according to the point matching and camera pose estimation. Wrong

rendering was observed occasionally during this case study when the operator's hand occluded some of the feature points as the operator performed the assembly operations. However, the correct camera pose could be recovered, as shown in Figures 12(e) and 12(f). This case study demonstrates that the proposed system is applicable in guiding the assembly of a real product given a set of prepared data in the knowledge base.

    6. Conclusion

This paper presented research on the application of AR, RFID, and sensor technologies in assembly guidance. Assembly information management, assembly

    activity detection, and assembly activity-oriented information rendering, especially the

    rendering of CAD models in the assembly coordinate system, have been investigated.

    A prototype system has been established with the proposed methodologies, and two case

studies using a 3D puzzle and a computer mouse have been carried out to prove its feasibility.

This paper contributes to research on assembly guidance in three aspects. Firstly, a novel assembly activity detection device using RFID technology and sensor technology was proposed and implemented to detect the operator's activity so as to facilitate just-in-

    time information rendering and intuitive information navigation. Secondly, this research

    proposed an improved point matching algorithm and applied this to the prototype system.

    The improved method has a higher tolerance level when some of the projected 3D points

    are not detected in the image plane. Thirdly, the research has implemented an assembly

    information management scheme based on a comprehensive assembly tree data structure

for hierarchical assembly sequences so that the proposed system can be applied to a broader range of applications.

[Figure 11. The assembly components in the computer mouse: the base, PCB circuit board, cover, and two rollers, labelled with IR reflective tapes and attached with RFID tags.]


    The proposed system has a few constraints and limitations. In order to achieve just-

    in-time guidance during an assembly process, the knowledge base has to be prepared prior

    to the start of the assembly process. In order to execute the proposed IR-enhanced point

    matching method, the corner features of each assembly component need to be labelled

    with reflective tape, and the coordinates of these features need to be stored in the text data

file. Corner features for the base component should be selected carefully such that at least four pairs of 3D and 2D points can be found in the scene and matched for camera pose

[Figure 12. Case study of a computer mouse assembly: (a)-(f) assembly scenes, including the virtual PCB board in (c) and the virtual rollers in (e) and (f).]


    calculation. For the succeeding assembly components, the number of corner features could

    be much smaller as the 3D and 2D points have been obtained from all the components

    in the scene. Moreover, a large number of feature points in the scene will reduce the

computation efficiency, as has been observed in the case study. Smaller components need not be labelled, under the assumption applied here that a small component will not occlude any existing corner features in the scene or interfere with the camera pose estimation.

    However, if an occlusion happens, one or two corner features of this small component can

    be labelled to overcome this problem. The recognition of these small components relies on

    small commercially available RFID tags attached to them. It is estimated that the labelling

    time for each component will be less than one minute on average.

    The proposed point matching method has two limitations which can be further

    improved. Firstly, the computing efficiency relies on the number of points in both point

    sets, which means the 3D rendering will be slow and lagging if there are many feature

    points in the assembly components. Secondly, the proposed method is not able to solve

point matching when the assembly component is large and only partially captured in the image plane.

    References

Azuma, R., 1997. A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6 (4), 355–385.

Azuma, R., et al., 2001. Recent advances in augmented reality. IEEE Computer Graphics and Applications, 21 (6), 34–47.

De Berg, M., et al., 2008. Convex hulls. Computational geometry: algorithms and applications. 3rd ed. Berlin: Springer, 243–258.

Fischler, M.A. and Bolles, R.C., 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24 (6), 381–395.

Gu, S., et al., 2008. A quick 3D-to-2D points matching based on the perspective projection. In: Proceedings of the 4th international symposium on advances in visual computing, ISVC 2008, 1–3 December, Las Vegas, NV, 634–645.

Hakkarainen, M., Woodward, C., and Billinghurst, M., 2008. Augmented assembly using a mobile phone. In: Proceedings of the 7th IEEE international symposium on mixed and augmented reality 2008 (ISMAR 2008), 15–18 September, Cambridge, UK, 167–168.

Huang, C.P., Agarwal, S., and Liou, F.W., 2004. Validation of the dynamics of a parts feeding system using augmented reality technology. In: S.K. Ong and A.Y.C. Nee, eds. Virtual and augmented reality applications in manufacturing. New York: Springer, 257–276.

Huang, G.Q., Zhang, Y.F., and Jiang, P.Y., 2007. RFID-based wireless manufacturing for walking-worker assembly islands with fixed-position layouts. Robotics and Computer-Integrated Manufacturing, 23 (4), 469–477.

Huang, G.Q., et al., 2008. RFID-enabled real-time wireless manufacturing for adaptive assembly planning and control. Journal of Intelligent Manufacturing, 19 (6), 701–713.

Lepetit, V., Moreno-Noguer, F., and Fua, P., 2009. EPnP: an accurate O(n) solution to the PnP problem. International Journal of Computer Vision, 81 (2), 155–166.

Li, Z., Gadh, R., and Prabhu, B.S., 2004. Applications of RFID technology and smart parts in manufacturing. In: Proceedings of ASME 2004 design engineering technical conferences and computers and information in engineering conference (DETC'04), 28 September–2 October, Salt Lake City, Utah, 123–129.

Liverani, A., Amati, G., and Caligiana, G., 2004. A CAD-augmented reality integrated environment for assembly sequence check and interactive validation. Concurrent Engineering: Research and Applications, 12 (1), 67–77.

Ong, S.K., Yuan, M.L., and Nee, A.Y.C., 2008. Augmented reality applications in manufacturing: a survey. International Journal of Production Research, 46 (10), 2707–2742.

Pathomaree, N. and Charoenseang, S., 2005. Augmented reality for skill transfer in assembly task. In: Proceedings of the 2005 IEEE international workshop on robots and human interactive communication, 13–15 August, Nashville, TN, 500–504.

Salonen, T., et al., 2007. Demonstration of assembly work using augmented reality. In: Proceedings of the 6th ACM international conference on image and video retrieval, CIVR 2007, 9–11 July, Amsterdam, The Netherlands, 120–123.

Shen, Y., Ong, S.K., and Nee, A.Y.C., 2008. AR-assisted product information visualization in collaborative design. Computer-Aided Design, 40 (9), 963–974.

Valentini, P.P., 2009. Interactive virtual assembling in augmented reality. International Journal of Interactive Design and Manufacturing, 3 (2), 109–119.

Wang, J., Luo, Z., and Wong, E.C., 2010. RFID-enabled tracking in flexible assembly line. International Journal of Advanced Manufacturing Technology, 46 (1–4), 351–360.

Yuan, M.L., Ong, S.K., and Nee, A.Y.C., 2008. Augmented reality for assembly guidance using a virtual interactive tool. International Journal of Production Research, 46 (7), 1745–1767.

Zauner, J., Haller, M., and Brandl, A., 2003. Authoring of a mixed reality assembly instructor for hierarchical structures. In: Proceedings of the second IEEE and ACM international symposium on mixed and augmented reality (ISMAR'03), 7–10 October, Tokyo, Japan, 237–246.

Zhang, J., Ong, S.K., and Nee, A.Y.C., 2008. AR-assisted in situ machining simulation: architecture and implementation. In: Proceedings of the SIGGRAPH 7th international conference on virtual-reality continuum & its applications in industry, SIGGRAPH VRCAI conference, 8–9 December, Singapore. New York: ACM, article no. 26 [CD-ROM].

