
International Journal on Interactive Design and Manufacturing manuscript No. (will be inserted by the editor)

A Formal Characterization and Analysis for Hardware-in-the-Loop and Hybrid Process Simulation During Manufacturing System Deployment

William S. Harrison · Dawn M. Tilbury

Received: date / Accepted: date

Abstract Manufacturing process deployment is one of the most integral parts of a production life cycle. The intelligent application of simulation at the deployment stage of a manufacturing process can result in a great reduction in ramp-up time and cost. One new approach to deployment is to use a mix of simulation and real processes so as to lessen the shortcomings of simulation accuracy and real equipment availability. Hybrid Process Simulation (HPS) is an approach to the merging of both real and simulated components of a manufacturing process during deployment. In this paper an ontology and analysis, called equivalence analysis, is put forth for the quantitative description of a manufacturing process during deployment. Equivalence analysis consists of three quantities: structural equivalence, leaf equivalence, and communication equivalence. Structural equivalence is the measure of how many of the entities and bodies in the HPS are represented in simulated or actual form. Communication equivalence is the measure of how much of the communication infrastructure is represented and whether it is physically present. Leaf equivalence is the measure of how many of the terminal entities and bodies in the HPS are represented in simulated or actual form. The equivalence analysis put forth here has been applied to two manufacturing processes containing both real and simulated components. These examples demonstrate how equivalence analysis can be used as a tool to track and describe deployment when the HPS approach is employed.

This work was supported in part by the National Science Foundation ERC for Reconfigurable Manufacturing Systems, grant number EEC 95-92125.

William Harrison
Mechanical Engineering Department
University of Michigan
2350 Hayward Street
Tel.: +1-734-7644336
E-mail: [email protected]

Dawn Tilbury
Mechanical Engineering Department
University of Michigan
2250 G. G. Brown Building
2350 Hayward Street


Keywords Hardware-in-the-Loop · Hybrid Process Simulation · Manufacturing Automation · Process Deployment · Manufacturing Ontology

1 Introduction

Today, manufacturing industries are facing intense global competition with shorter product life cycles, higher quality standards, and cost constraints. To address these challenges, ongoing research is investigating technologies that facilitate product launch with shorter ramp-up times and lower costs. Simulation has emerged as a promising validation solution in the last two decades. There are many simulation tools available to support the testing and validation needs of a process planner for a manufacturing system. Here, a manufacturing system refers to the set of components (conveyors, robots, CNCs, etc.), controllers, and communication methods that make up a process. Typically during the deployment of a manufacturing system, simulation stands as a disjointed planning step before process implementation. In this paper deployment will refer to the time when the manufacturing system is being brought online but is not yet in production.

Hybrid Process Simulation (HPS), as put forth by [12], is a mix of distributed simulation and Hardware-in-the-Loop (HIL) methodologies that can be used to decrease the cost due to long ramp-up times, while also leveraging the strengths of having both real processes and simulations mixed together in a distributed structure. The HPS methodology enables the simulation phase to continue into and proceed during the implementation phase. Deployment can then start with a complete pure simulation to which real components and processes are added piece by piece until the entire system is in its non-simulated final form.

When using the approach of pure simulation followed by separate physical implementation, the most pertinent concern for the user of the simulation is how accurate the simulation is at predicting the behavior of the non-simulated system. In contrast, the process planner’s main concern is how much of the system is in place and functional. The HPS approach brings the simulation user and the process planner together to mitigate both of their concerns. The HPS includes real processes, so the setup need not be completely dependent on model accuracy, and the installation and system-level testing can commence sooner with the use of HPS simulated components to fill in the gaps where components are not yet installed. With the previous approach of first simulation and then separate implementation, macro-scale system-level testing was only possible after all components were installed.

The HPS approach introduces new concerns to be addressed in deployment, and specifically with component installation. Some of these concerns include:

1. How best to make the transition from a complete simulation to a production-ready process.
2. How to compare deployments of similar or different manufacturing processes.
3. How best to describe the HPS as it progresses through its various compositions of simulations and real processes.
4. How to measure how close the HPS is to its production-ready state.
5. How to calculate the exact accuracy of the HPS as it moves from simulation to a completely real process.

Concern 1 is addressed in [12]. Concerns 2-4 are addressed in this paper, and concern 5 is left to future work.

Concerns 2-4 require a formal approach to the description of an HPS. This formal approach should employ a quantitative means to calculate how much of the system is represented in some way by either simulation or real processes, how much of the system is physically present, and how much of the communication infrastructure is in place. This description is useful because it gives the process planners more information about the deployment of their manufacturing system, allowing them to make decisions about deployment resources. The description also provides a framework in which other studies can be done using the HPS approach, such as finding the optimal component installation order for deploying a system.

The contribution of this paper is an analysis methodology for a manufacturing system in its deployment phase. The analysis results in a description as well as a comparison method to provide a process planner with information that will aid in creating a concise understanding of an HPS’s deployment status as it progresses to a production-ready non-simulated system.

The rest of the paper is organized as follows: First, the background in Section 2 covers related work and possible applicable approaches. Section 3 discusses how the proposed approach fits into existing approaches for deployment. Sections 4 and 5 cover the specifics of the terminology and the analysis equations. The concepts are then applied to two examples in Section 6. Finally, the conclusion and future work are in Section 7.

2 Hardware-in-the-Loop and Hybrid Process Simulation

Traditionally, in the initial phases of manufacturing process deployment, a simulation model is created and validated. The final production-capable system is then built after validation is completed. The validation of the simulation is limited by the accuracy of the simulation model, and testing of the physical system can be time consuming and may result in damage to equipment. Various techniques have been used to address these shortcomings, including distributed simulation and Hardware-in-the-Loop (HIL). Both of these techniques can be applied to a manufacturing process in its deployment phase.

“Distributed Simulation is an application of distributed systems technology that enables models to be coupled over computer networks so that they interoperate during a simulation run” [5]. Distributed simulation is a useful technique because it allows the use of specialized simulation models for different components of the system. This enables model reusability and reduces simulation execution time [8]. Additionally, the specialized simulations can be more accurate because they can come from the vendor of the component they represent. Though the distributed approach does begin to address the issue of model accuracy, via specialized individual models, the simulation is still limited by this individual model accuracy.

HIL is a process by which physical components and hardware are implemented together with simulations, allowing the validation of different aspects of the system before it is completely physically present. When HIL was first implemented, many of the applications only included one piece of hardware connected to a simulation. More recently, research has begun to address mixing the concept of HIL and distributed simulation, where multiple simulations or emulations can be connected to multiple real components [7, 14, 15, 17, 22, 26].

A formalized approach to the integration of real and simulated components could have great benefit. The same real or simulated component could be used in different manufacturing systems for validation and testing, if the integration methods were standardized. A formal analysis technique could then be used to quantitatively describe the composition of a manufacturing system with both real and simulated components during its deployment. This quantitative description could then be used for further studies about how best to integrate real and simulated components.

There is no accepted standard approach for application, nor a widely accepted formalized analysis method, that has been applied to HIL, distributed simulation for manufacturing, or a mix of the two [3]. A discussion of HIL formalism can be found in [12]. The US Department of Defense, however, has a standard for the integration of different models in a distributed simulation called the High Level Architecture (HLA) [6]. Unfortunately HLA has not caught on in industrial simulation, possibly in part due to the increased effort involved in its adherence [5]. There has been a push in recent years for interoperability standards and testing [16, 18, 21, 28]. Interoperability testing focuses on the interrelation, communication, and interface of dissimilar models for distributed simulation in industrial automation. Interoperability could also include real processes along with simulations.

Fig. 1: An HPS consists of multiple components, some of which are computer simulated components C̃ and others of which are components that are part of the final physical implementation C. The large arrow shows the progression during a system’s deployment from all simulated components within a computer simulation, through HPS, to complete physical implementation.

To address the lack of formalism, [12] put forth the HPS approach as a generalization of HIL with a distributed structure. An HPS is defined as a test setup with at least one region of simulation (simulated) and one region of real components and software. An HPS may, however, include many regions of simulated and real components, as seen in Fig. 1. Within an HPS, simulated components must connect to non-simulated components in the same manner as the components they simulate. Because of the way simulated components connect, HPS does not require a middle layer like other similar approaches [15, 17, 26]. The way components connect also means that simulations in the manufacturing system can be replaced piece by piece, until the entire system is in its final state, creating a seamless transition from simulation to final physical implementation. HPS, as put forth by [12], has a formalized approach for implementation with its own ontology.

Despite these steps toward formalism, [12] lacks an analysis methodology. This is a shortcoming of the approach: though the implementation is prescribed, a quantitative understanding of the HPS as a whole is not addressed. As the system takes on different combinations of simulated and non-simulated components, understanding how close the HPS as a whole is to the final system (how much of the HPS is non-simulated) becomes important. Fidelity and validity are aspects of simulation that begin to broach this topic. Devising a formalism for HPS should therefore first explore the existing body of work concerning validity and fidelity.

2.1 Fidelity and Validity

Validation is a means to assess how good the simulation is at portraying the real system [1, 3]. The validation of pure simulation is chiefly concerned with the simulation’s function (its predictive quality), but not its form (how the simulation is connected and organized). How the simulation works is not nearly as important as whether the output and performance of the simulation represent the real system to an acceptable (user-defined) degree. The HPS’s form can take on many combinations of simulated and non-simulated components, affecting the accuracy and operation of the HPS as a whole; thus the form of an HPS holds as much relevance as its function. Since an HPS’s form changes during deployment, a formal approach to describing an HPS’s form is needed, but because the HPS approach is new to manufacturing deployment, the literature does not already contain work concerning a description of form. The most pertinent related body of work is fidelity and validity, which both concern function. If a formal approach to fidelity or validity exists, and is general enough to apply to different manufacturing systems, this approach could possibly be applied to devising a method of describing the form of an HPS.

Validity and fidelity have similar definitions; however, some authors do differentiate [25]:

Many confuse fidelity and validity. Fidelity is an absolute measure of simulation relationship to the reality represented (usually described in terms of resolution, accuracy, etc.). On the other hand, validity is a relative measure of whether the fidelity of a simulation is adequate to support its intended use.

Validity and fidelity, however, are not commonly used within the existing discussion of validation [10, 11]. In [4] the introduction mentions validity but only in passing. The vast usage differences between validation and validity point to the difficulty of quantifying the results of a validation study. Validity is the measure where validation is the action. Validation can be done in many ways, but coming up with a quantifiable measurement of validity is difficult and thus oftentimes not done at all. Much of the literature suggests that validity is very close to fidelity, so close in fact that validity would almost seem to be a synonym. This is at least partially contradictory, as can be seen in [25]. From this point forth the definitions supplied by [25] will be used for fidelity and validity.

2.2 Validation Methodologies

The following is a three-step approach put forth by [4] that has been widely followed for validation.

1. Face Validity: “... construct a model that appears reasonable on its face to model users and others who are knowledgeable about the real system being simulated.”

2. Validation of model assumptions: Assumptions can either be structural or data. “Structural assumptions involve questions of how the system operates and usually involve simplifications and abstraction of reality.” “Data assumptions should be based on the collection of reliable data and correct statistical analysis of the data.”

3. Validate Input-Output Transformations: Given input-output sets from the real system, compare them to the input-output set created by the simulation. One set should be used for calibration and a different set should be used to validate the simulation after it is calibrated.
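The input-output comparison in step 3 can be sketched in code. The tolerance check, the crude gain fit, and all data values below are illustrative assumptions, not prescriptions from [4]:

```python
# Schematic input-output validation: compare simulated outputs against
# recorded real-system outputs for the same inputs. All data and the
# tolerance value are illustrative placeholders.

def validate_io(real_pairs, simulate, tolerance):
    """Return the fraction of input-output pairs the simulation reproduces
    within tolerance. real_pairs is a list of (input, real_output) tuples."""
    matches = sum(
        1 for x, y_real in real_pairs
        if abs(simulate(x) - y_real) <= tolerance
    )
    return matches / len(real_pairs)

# One set tunes (calibrates) the model; a *different* set validates it.
calibration_set = [(10.0, 20.5), (20.0, 40.8)]
validation_set = [(15.0, 30.6), (25.0, 50.9)]

gain = sum(y / x for x, y in calibration_set) / len(calibration_set)  # crude fit
score = validate_io(validation_set, lambda x: gain * x, tolerance=0.5)
print(f"validation pass rate: {score:.0%}")
```

Note that the calibration and validation sets are disjoint, as step 3 requires; reusing the calibration data for validation would overstate the simulation's predictive quality.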

Though there is quite a bit of discussion on validation, most approaches take some form of the above [19]. This existing body of work lacks a quantifiable means of assessing fidelity or validity that is general enough to be applied to a large variety of systems [24]. Devising a means of assessment that is general enough to apply to different scenarios is challenging. Methodologies that apply to one simulation are typically hard to apply to another.

Henderson talks about assessing the fidelity of a virtual reality environment [13]. In her description she discusses quantifying the fidelity of the simulated environment by comparing it to how the user is allowed to connect with it. One such example is the field of view: the human field of view is over 180 degrees, but head-mounted displays rarely cover that range. Henderson suggests comparing these numbers to give a quantifiable sense of simulation fidelity.

Henderson’s approach is insightful considering how difficult the quantification of simulation fidelity is. The human user, the one commonality in all virtual reality applications, is a natural basis for a quantification of fidelity that is general enough to apply to all virtual reality applications. Quantification can be achieved by comparing two quantifiable characteristics of a particular virtual reality system to the average human user (field of view, for example). Any approach for fidelity or validity assessment that can be applied to many different scenarios must leverage commonality on some level. This commonality could come from comparing one part of a system that all applicable systems have in common, such as an interface [13]. Commonality could also be achieved by using a particular technique to create a model of the system on which analysis can be performed.

In practice, simulation validity often has a more qualitative approach [10], where validity is assessed to be “high, medium, or low” for example. This is likely due to the variance in simulation solutions, and the lack of output data for calibration and validation. For this reason validation primarily relies “on review by experts and peers”.

Typical simulation validation approaches are not easily applied to an HPS. Providing inputs to the system and monitoring the outputs, which is a popular form of validation, becomes a much more complex endeavor when the simulations are separated by real processes (Fig. 1). This complexity is compounded by the evolving nature of an HPS as it develops from all simulated to all non-simulated processes during a manufacturing system’s deployment.

3 Deployment

Customarily, when a complex hierarchical system, such as a manufacturing assembly process, is being developed, a simulation analysis is performed. There are many tools commercially available for such an analysis, including Arena, Delmia and Demo3d. After this initial simulation analysis is completed, physical implementation can begin, and will continue until the non-simulated manufacturing system is fully deployed. Here the term non-simulated manufacturing system is used to refer to the final production-ready system. There exists, however, no formal method for analysis of a manufacturing system during the deployment phase. This phase is important because it is where a manufacturer faces a great deal of pressure. Shortening ramp-up times can result in higher profit margins and longer product life cycles, but conversely can also mean overspending for implementation and the deployment of an inefficient process, which will cut into profits [2, 20].

The HPS approach was developed for the deployment phase of a manufacturing process; therefore, it is important to outline how deployment fits into a process life cycle, and what specifically deployment entails. The lifespan of a manufacturing system can be divided into multiple phases. In this paper three phases will be considered: 1) planning, 2) system-wide deployment, and 3) production. The production phase of the manufacturing process can also be followed by a component deployment phase for reconfiguration and new component installation. The HPS approach can be applied during both the system-wide deployment phase and the component deployment phase. System-wide deployment starts with no components in place, then uses pure simulation, and finally ends with a production-ready system (Fig. 2, a-e). The component deployment phase starts with a full manufacturing system, at which point components are added or replaced, also resulting in a production-ready system (Fig. 2, e, d). When the HPS approach is employed in both system-wide and component deployment, component simulations are used initially and replaced iteratively by the real components the simulations represent (Fig. 2).

Definition 1 Pure Simulation: A simulation that includes the components of the manufacturing system in a single simulation environment with no outside connections or external parts.

Definition 2 Component Simulation: A simulation of some specific part of the manufacturing process with the capability of interfacing with real or other simulated components.

With the HPS approach described in this paper, the individual component simulations will use the same interface (communication protocol) as the real components they represent; thus, each simulation is a distinct, separate piece of software and hardware. Real components can then be added on a component-by-component basis (Fig. 2d), until the entire process is in its non-simulated actual form (Fig. 2e).

Fig. 2: For a completely new process, deployment will start with all non-present components and end in all actual components. An inner box around a component C corresponds to an individual simulation environment. An inner box around multiple Cs corresponds to one simulation environment that includes multiple components.
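Because each component simulation exposes the same interface as the component it stands in for, swapping a simulation for the real device requires no change to the rest of the HPS. A minimal sketch of this idea follows; the class and method names are illustrative assumptions, not part of the HPS specification:

```python
# Minimal sketch: a shared interface lets a simulated component be replaced
# by its real counterpart without touching the rest of the system.
# All names here are hypothetical, chosen only for illustration.

from abc import ABC, abstractmethod

class ConveyorInterface(ABC):
    """Contract both the real conveyor and its simulation must honor."""
    @abstractmethod
    def start(self) -> None: ...
    @abstractmethod
    def part_at_end(self) -> bool: ...

class SimulatedConveyor(ConveyorInterface):
    def __init__(self):
        self.ticks = 0
    def start(self):
        self.ticks += 1          # advance an internal model instead of a motor
    def part_at_end(self):
        return self.ticks >= 3   # model predicts arrival after three cycles

class RealConveyor(ConveyorInterface):
    def start(self):
        pass                     # would command the physical drive here
    def part_at_end(self):
        return False             # would read the physical end-of-line sensor

def run_cell(conveyor: ConveyorInterface) -> bool:
    """Cell logic is written once, against the interface only."""
    for _ in range(3):
        conveyor.start()
    return conveyor.part_at_end()

print(run_cell(SimulatedConveyor()))  # the same call accepts a RealConveyor
```

Since `run_cell` depends only on the interface, replacing `SimulatedConveyor` with `RealConveyor` during deployment is the code-level analogue of the component-by-component transition of Fig. 2.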

Fig. 2 represents one progression for deployment. Other progressions for deployment are also possible, as illustrated in Fig. 3. Step a is the same but b, c, and e are different. Two limitations of the HPS approach to deployment are that the simulations must be capable of communicating with real components and real components must be capable of interfacing with simulations [12]. The exact order of the introduction of component simulations and real components will at least partially be based on the availability of resources and expertise.

4 Characterization of a Hybrid Process Simulation

Fig. 3: An alternative deployment progression to Fig. 2.

Though the literature addresses the topic of simulation function in great detail [9, 29], a formal approach to the quantitative assessment of form does not yet exist. This paper proposes an analysis methodology, called “equivalence”, for a manufacturing system in its deployment phase using the HPS approach. Equivalence is a formal means of description for an HPS as it moves from complete simulation to an all-real process. The first step of equivalence analysis is to abstract a manufacturing system into conceptualized parts. The analysis of these conceptualized parts provides insight into how much of the HPS is simulated. The analysis also reveals information about the hierarchical structure of the individual components of the HPS.

Equivalence is defined as a quantitative measure of how similar in structure the HPS is to the non-simulated manufacturing system. The analysis involved in equivalence also provides information about the relative importance of individual parts of the process. A more detailed description will be given later in this section.

The HPS approach and analysis can be applied to any production system with identifiable interfaces and standard communication methodologies such as DeviceNet, Profibus, or OPC. The non-simulated manufacturing system must be capable of being separated at its interfaces and modularized. Manufacturing assembly and machining processes serve as good examples of processes to which the HPS approach and analysis can be applied.

In this paper we will focus on manufacturing systems with control hierarchies that can be translated into tree-structured directed graphs (exactly one root node, with all other nodes in the process having exactly one parent). This structure includes many of the systems in use, leaving the extension of the approach to more general structures for future work.
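The tree-structure restriction (one root, every other node exactly one parent, no cycles) can be checked mechanically. The parent-map encoding below is an assumed representation of the control hierarchy, not one prescribed by the HPS approach:

```python
# Check that a control hierarchy is a tree: exactly one root node and
# every other node reachable upward to it with no cycles. The parent-map
# encoding is an assumed representation chosen for illustration.

def is_tree(parent):
    """parent maps each node to its parent node, or to None for the root."""
    roots = [n for n, p in parent.items() if p is None]
    if len(roots) != 1:
        return False              # exactly one root is required
    for start in parent:
        seen = {start}
        node = start
        while parent[node] is not None:
            node = parent[node]
            if node not in parent or node in seen:
                return False      # dangling parent reference or cycle
            seen.add(node)
    return True

# A small cell hierarchy rooted at a PLC (node names are hypothetical).
cell = {"PLC": None, "robot": "PLC", "conveyor": "PLC", "gripper": "robot"}
print(is_tree(cell))
```

A hierarchy with two roots, or a cycle of nodes each naming the other as parent, fails the check and falls outside the class of systems treated in this paper.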

Understanding the amount of simulation in an HPS is more complex than simply summing the number of simulation models. Equivalence provides three quantitative descriptions of an HPS: 1) how much of the system is represented in some way (simulated or actual), 2) how many of the HPS’s physical components (machines and robots) are present, and 3) how much of the communication infrastructure is present. Each of the three quantitative descriptions describes the HPS’s current status of deployment differently.
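The three descriptions can be read as tallies over the parts of the final system. The plain ratios below are only a sketch under that reading (the formal equivalence equations are given in Section 5), and the attribute encoding and example values are our own assumptions:

```python
# Illustrative tallies for the three quantitative descriptions of an HPS.
# Each part is (kind, presence, is_leaf), with presence in
# {"actual", "simulated", "non-present"}; the encoding is an assumption.

existents = [
    ("entity",  "actual",      False),  # cell controller
    ("body",    "simulated",   True),   # robot, currently a simulation
    ("body",    "non-present", True),   # CNC, not yet installed
    ("pathway", "actual",      False),  # controller-robot network link
    ("pathway", "simulated",   False),  # controller-CNC network link
]

def frac(items, pred):
    """Fraction of items whose presence satisfies pred (0.0 if empty)."""
    items = list(items)
    return sum(pred(p) for _, p, _ in items) / len(items) if items else 0.0

nodes  = [e for e in existents if e[0] in ("entity", "body")]
leaves = [e for e in nodes if e[2]]
links  = [e for e in existents if e[0] == "pathway"]

represented = lambda p: p != "non-present"
structural  = frac(nodes, represented)               # description 1
physical    = frac(nodes, lambda p: p == "actual")   # description 2
comm_repr   = frac(links, represented)               # description 3 (represented)
comm_actual = frac(links, lambda p: p == "actual")   # description 3 (in place)

print(structural, physical, comm_repr, comm_actual)
```

For this toy HPS, two of three nodes are represented but only one is physically present, while the communication infrastructure is fully represented yet only half in place, showing how the three descriptions diverge.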

4.1 HPS Object Oriented Ontology Terms

The object-oriented approach to programming can be leveraged as a tool to describe the ontology put forth for HPS and the accompanying equivalence analysis methodologies. Object-oriented programming languages provide a standardized means to describe structure (how something is organized) and behavior (what it does). The remainder of this paper will describe the HPS ontology in the context of the object-oriented approach as put forth by [27]. The ontology put forth consists of terminology, relations, and definitions used for describing a manufacturing system. Specifically, the paper will define both the abstract components of an HPS and the quantifiable characteristics of an HPS.

Fig. 4: A hierarchical breakdown of the classifications of the pieces of an HPS, with the attributes illustrated on the right. Spatial essence, data essence, Newtonian effect, and procedural effect are categories (not values) of the essence and effect attributes.

The characterization of HPS using the Object Oriented (OO) approach will be described from a top-down perspective. The OO approach is characterized by objects and classes. Classes are described as a group of things with the same properties, behaviors, and relationships, where an instance of a class is an object. Fig. 4 is the UML diagram for the characterization terminology.

Definition 3 Existent: An abstract superclass of objects that includes all things in the final manufacturing system, such as machines, controllers, and the means of communication between them.

Definition 4 Essence Attributes: A set of characteristics that make an existent object what it is, such as location, tracking number, mass, volume, etc.

Definition 5 Effect Attributes: A set of outcomes stemming from an act that changes or maintains essence, such as change in location, tracking number, mass, volume, etc.

Definition 6 Presence: An enumerated attribute that describes how an existent object is implemented. Its possible values are:

– Simulated: attribute value which indicates that the existent object is a simulation and is represented in a form other than that which will be present when the entire system is non-simulated.
– Actual: attribute value which indicates that the existent object exists and is non-simulated.
– Non-present: attribute value which indicates that an existent object is not simulated or implemented in any way in the current HPS.

Definition 7 Representation: An enumerated attribute that describes how an existent object is separated from the rest of the system. Its possible values are:

– Aggregated: attribute value which means that the existent with the attribute is located inside of a larger simulation as a function.

– Distinct: attribute value which indicates that the existent is distinctly separate and is located in its own environment, in which it communicates by the same interface as its non-simulated counterpart.


The representation attribute is relevant only if the presence attribute of the corresponding body (Def. 6) is simulated.

Definition 8 Entity: A subclass of objects that can perform actions and/or provide decisions. Two separate entity objects cannot share information without the recognition and permission of the other entity.

Definition 9 Body: A subclass of objects that can store information and perform actions but cannot make decisions or provide control.

Definition 10 Pathway: A subclass of objects that provide communication between any combination of Entities and Bodies.

The terminology will be explained through an example illustrated in Fig. 5. In this example, a fan receives signals from a controller based on readings from a thermometer. Both the thermometer and the fan have interfaces to the computer. The computer receives real-time temperature information from the thermometer and can then provide control for the fan based on that information. The example includes five Existent objects: the computer, the thermometer, the fan, the connection between the computer and the fan, and the connection between the computer and the thermometer.

Fig. 5: An example of an HPS/HIL, where the real-time controller is a simulated component (C̃), and the thermometer and fan are part of the final physical implementation (C).

4.2 Existent Superclass

The highest level class or superclass in HPS is called an Existent (Def. 3). Since an Existent is at the highest level, it exists as an abstract class, and therefore all machines, controllers, and connections of a manufacturing system fall under the Existent superclass in a generalization hierarchy. The Existent superclass includes all parts of the manufacturing system: those represented in the present HPS as real components or simulations, and those that are not present yet but will be implemented in the non-simulated manufacturing system. The Existent superclass has a set of essence attributes (Def. 4) and a set of effect attributes (Def. 5).


Table 1: Possible Values of the Essence and Effect Attributes (non-relevant in italics)

                    Physical Domain                        Digital Domain
  Spatial Essence | Newtonian Effect    | Data Essence           | Procedural Effect
  Mass            | Wind speed          | Stored fan location    | Change of stored data
  Volume          | Weight              | Stored fan orientation |
                  | Temperature change  |                        |
                  | Electrical signal   |                        |
                  | Energy consumed     |                        |
                  | Change in fan angle |                        |

Two categories of effect attributes are considered here. The first category encompasses effects from the physical world, which is any effect stemming from physical interaction. Because the effect attributes of physical interaction are echoed in Newton's third law, this type of effect attribute is termed Newtonian effect. Newtonian effect attributes are categorized as being a part of the physical domain. One effect attribute in the cooling system example in Fig. 5 would be the electrical signal sent from the thermometer, or the new ambient temperature of a room cooled down by the fan.

The other category of effect attributes encompasses effects from the digital world. Because procedural programming languages focus more on functions and the change of data rather than the data itself, this category of effect attributes is termed procedural effect. Procedural effect attributes take place in the digital domain. Here the digital domain includes any computer-stored information. One procedural effect attribute might be the changing of the temperature stored in the controller.

The existence of an effect suggests a cause. The cause of an effect from the physical point of view involves mass and volume. Mass or volume could then be considered the essence of physical existence. For this reason the essence attribute (Def. 4) is complementary to the effect attribute, and also defines the effect attribute. In the physical domain spatial essence (mass/volume) is complementary to Newtonian effect. In the digital domain data essence is complementary to procedural effect. The spatial essence attribute is considered to be the physical volume or mass (the actual physical thermometer) representing an existent object in the physical domain. The data essence attribute (the stored temperature information) is considered to be the electronic information representing an existent object in the digital domain. The fan is an existent object with values for the essence and effect attributes listed in Table 1. Table 1 has been divided into digital and physical domains for easier understanding, but this division is not required for the analysis. Fig. 6 is the UML class representation of the fan where the domain distinction is not made.

Fig. 6: The fan body can be written in its UML class diagram form.


Enumerations are a data type with a finite set of values. The Existent superclass also has two enumerated attributes, which are presence (Def. 6) and representation (Def. 7).

If the cooling system were tested with only a thermometer and a controller, the presence attribute of the fan would be non-present. This might happen if the fan's effect on the environment and its internal behavior are very well understood and simulating it is not necessary.

Representation is an enumerated attribute which applies only to simulated Existents. The representation attribute can be either aggregated or distinct. If an object of the Existent superclass is currently located within a larger program with other simulated entities, that simulation's representation attribute is aggregated. If it is separate, then it is distinct.

Fig. 7: When a simulated Existent is grouped together with other Existents in the same program it is aggregated.

Fig. 8: When a simulated Existent has the same interface as the process or component it represents it is distinct.

Figure 2 uses this terminology, where a manufacturing system starts from all non-present components and evolves to become all actual components.

One key to the HPS approach is that a simulated component within an HPS can emulate the Newtonian effect attributes, which means that the non-simulated components are not affected by the absence of the spatial essence of the simulated component. The term emulate is used here to mean the recreation of an effect. An example might be a simulation of the fan that can communicate and send the same signals that the real fan would send to the controller.


4.3 Entity, Body, and Pathway Subclasses

The Existent superclass has three subclasses, which are Entity (Def. 8), Body (Def. 9), and Pathway (Def. 10). All three subclasses inherit the attributes of the Existent superclass.

Fig. 9: A diagram showing the composition of an HPS.

Instances of the Entity class serve as the brains of the type of manufacturing systems described here; they make all decisions and account for the control. The controller in Fig. 5 is an instance of the Entity class because it makes the decisions for the system. The thermometer and the fan are not, because they have no decision-making capabilities; therefore, they are instances of the Body class. The connections between the controller, thermometer, and fan are the pathways. An HPS is composed of Entities, Bodies, and Pathways. Fig. 9 shows the aggregation of the three subclasses.

4.4 Class Application

Analysis using the HPS approach will compare simulated or non-present existent objects to their non-simulated actual counterparts. This comparison will include the essence and effect attributes of an actual existent with those of a simulated existent. In the example described above, including the controller, fan, and thermometer, consider the fan in particular. The essence attributes of the actual existent include physical characteristics that represent the fan, where mass and volume are in the physical domain and the fan's data is in the digital domain. The fan data could be status information such as its present orientation or its present functioning percent of maximum speed. The effect attributes of the actual fan include those effects that can be observed to occur because of the fan. All of the effects listed in Table 1 are the effects the fan has on its environment, both in the digital and physical domain.

When considering the comparison between a simulated and an actual existent object, only the effect and essence attributes relevant to the process should be considered. In the fan example the weight of the fan is not a relevant effect, and thus should not be included in the comparison. The ambient temperature change and possibly the effect on power consumption could be relevant and thus should be in the comparison between the simulated and the actual existents. In this paper we consider simulations that emulate either all or none of the relevant effects and essences (Table 1). If the fan is a distinct simulated component, it should have all the same digital domain states, inputs, and outputs as the real fan. Table 2 is an example of how the comparison can be displayed. In this case the simulated existent object has two out of four of the total essences and effects, so it could be considered a 50% simulation. Table 2 is considered a typical example, where all essences and effects of a simulated component are in the digital domain (data essence and procedural effect). A less typical example would be an engine dynamometer, where the dynamometer simulates the engine load but is completely in the physical domain (spatial essence and Newtonian effect). A more in-depth discussion of equivalence calculations can be found in Section 5.


Table 2: Example Comparison of Existent Essences and Effects

  Actual Existent Object                 Simulated Existent Object
  Spatial Essence    Newtonian Effect
  Data Essence       Procedural Effect   Data Essence    Procedural Effect

5 Equivalence Analysis

The three equivalence metrics that will be defined in this paper include: structural equivalence, leaf equivalence, and communication equivalence. Equivalence analysis provides a measure of an HPS's level of simulation, physical presence, and level of dependence. These metrics were chosen because they represent aspects of an HPS that are not present in a pure simulation. With the HPS approach the amount of simulation could transition from 0% (all non-present) to 100% (all simulated) to 0% (all actual) with many intermediate percentages, but this measure is not as simple as adding up the number of simulations and dividing it by the number of components. Certain questions about the HPS should be answered from both a control and connectivity perspective, such as: how are these processes simulated, how do the simulations communicate, and how are the different components dependent on one another. The equivalences and the analysis that goes along with them attempt to incorporate the answers to these questions in a quantifiable manner.

Fig. 2 can provide context for a discussion of what the equivalences are. Development can start with planning and nothing simulated, where all components are non-present. The next step is to develop a pure simulation where all components are simulated in the same program (aggregated). At this step structural equivalence becomes a useful quantitative measurement. Structural equivalence is a measure of the amount of the HPS that is represented in at least simulated form. After pure simulation, deployment can proceed in two different directions. Simulated existent objects capable of functioning in an HPS can be connected with actual components depending on what is available, or the communication infrastructure can be put in place first with all simulated existent objects. The latter case is called a hardwired simulation because all of the wires for communication are in place. Moving from the pure simulated phase to the hardwired simulation phase means that both structural equivalence and communication equivalence become non-zero. Communication equivalence is a measure of how much of the communication infrastructure is in place and in what form. If the step after pure simulation is to begin to add actual existent objects and simulated existent objects, then structural, communication, and leaf equivalence become useful quantitative measurements. Leaf equivalence is an indirect measure of how much of the physically involved (directly or indirectly physically interacts with the workpiece) part of the manufacturing system is present. The physically involved system includes components that physically interact, such as robots, CNCs, and actuators. Even with the hardwired simulation step employed, deployment will eventually reach a point where some part of the manufacturing system will have actual components, making all three equivalences descriptive quantities.

5.1 Pre-Analysis

The quantities involved in calculating the equivalence metrics require a pre-analysis to be performed on the directed graph of the final system. Pre-analysis translates the directed graph into the objects and attributes (entity, body, essence, effect, etc.) introduced in Section 4. Equivalence analysis is a comparison between the system in its final form and in its current state as an HPS. Pre-analysis provides a structural description of the system that is used to map out the system's control structure and connectivity. Fig. 10a shows the control hierarchy and connectivity of the Reconfigurable Factory Testbed (RFT) at the University of Michigan [23] in the form of a directed graph. Fig. 10b shows the pre-analysis of the final system.

Fig. 10: Pre-analysis divides a hierarchical control and communication infrastructure (a) into levels and divisions, where E represents an entity, and B represents a body (b).

Definition 11 System Graph: A directed graph of a manufacturing system where the edges of the graph represent control communication from the parent node (the controller) to the child node (the controlled). Nodes represent entities/bodies and the edges of the graph represent the pathways.

Fig. 10b shows Entities (E) and Bodies (B) that correspond to the blocks in Fig. 10a. The number after E or B represents an instance of that class. E1 corresponds to the System Level Controller (SLC), which is an instance of the Entity subclass; E2 corresponds to the Cell 2 Controller (C2C), which is also an instance of the Entity subclass; B1 corresponds to the CNC under the Cell 2 Controller, which is an instance of the Body subclass; and so forth. The numbering convention is chosen for convenience, but E1 could also be referred to as the SLC Entity. From this point forward instances of the Entity or Body subclass will be referred to as entity and body in non-italicized form.

Definition 12 Level: A subset of the graph containing nodes and edges in an HPS control hierarchy denoted by the distance of the nodes from the root node. Levels are numbered starting from the root node (the node with no parent). The root node resides in level 1, children of the root node reside in level 2, children of level 2 nodes reside in level 3, and so on. Edges/pathways between level i and i + 1 belong to level i + 1; therefore, the pathway between E1 and E2 is in level 2.

Level 1 in the RFT example is the SLC (E1). Level 2 in the RFT contains the cell 2 controller (E2), the conveyor controller (E3), the cell 1 controller (E4), as well as the edges between E1 and the level 2 entities. All of the bodies in the RFT example are in level 3.

Definition 13 Division: A subtree of the graph that corresponds to all the children, both direct and indirect, of a particular node.


Definition 14 Terminal Node: A node (leaf) with no children.

Physical equipment will oftentimes be a terminal node because the equipment itself does not usually supply control for another part of the manufacturing process.

For an HPS with $K$ total levels, let $I = \{\text{all nodes}\}$, $DC_i = \{\text{all direct children of node } i\}$, $IC_i = \{\text{all indirect children of node } i\}$, and let $|I|$ represent the total number of nodes. An individual node can be referenced by noting that a node $i$ is on level $k$ of an HPS. Node $i$ has a number of direct children $|DC_i|$ and indirect children $|IC_i|$.

As shown in Fig. 10b, the level 1 division has all pathways and bodies within it. There are three level 2 divisions because there are three level 2 entities. The level 2 divisions can be differentiated by using the name of the corresponding entity/object; therefore, the E2 division includes E2, B1, B2, B3, and the pathways that connect them.
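The level and division constructions above can be sketched directly from a parent-to-child map. This is an illustrative sketch only: the node names follow Fig. 10b, but the exact assignment of the seven level-3 bodies to E2, E3, and E4 is an assumption, not figure data.

```python
# Hypothetical parent -> direct children map of an RFT-like hierarchy
# (the split of bodies among E2, E3, E4 is assumed, not taken from the paper).
children = {
    "E1": ["E2", "E3", "E4"],
    "E2": ["B1", "B2", "B3"],
    "E3": ["B4"],
    "E4": ["B5", "B6", "B7"],
}

def level(node, root="E1"):
    """Level of a node (Def. 12): the root is level 1, its children level 2, etc."""
    if node == root:
        return 1
    parent = next(p for p, kids in children.items() if node in kids)
    return level(parent, root) + 1

def division(node):
    """Division of a node (Def. 13): all direct and indirect children."""
    out = []
    for kid in children.get(node, []):
        out.append(kid)
        out.extend(division(kid))
    return out

print(level("B1"))       # 3
print(division("E2"))    # ['B1', 'B2', 'B3']
```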

Another part of pre-analysis involves determining the importance of the entities/bodies as well as their influence. Importance is determined by an entity's/body's degree $D$.

Definition 15 Degree: The total number of direct children for node $i$ ($|DC_i|$) plus the number of indirect children of node $i$ ($|IC_i|$).

$$D_i = |IC_i| + |DC_i| \quad (1)$$

$D_i$ can also be calculated based on the degrees of its direct children:

$$D_i = \begin{cases} \sum_{j \in DC_i} (D_j + 1) & \text{if } |DC_i| > 0 \\ 0 & \text{if } |DC_i| = 0 \end{cases}$$

In Fig. 10b the degree of each entity/node is shown in the lower right corner of its block. Terminal nodes have zero degree and so are left blank. The entity E2 in Fig. 10b (in level 2) has three direct children and no indirect children; therefore, it has a degree of 3.

The other quantitative measure associated with entities/bodies is the influence; the influence is noted in the top right corner of an entity's or body's box.

Definition 16 Influence: The effect an entity/body has on the system as a whole. The influence $IFL_i$ of node $i$ is calculated by

$$IFL_i = \frac{D_i}{D_{root}} \quad (2)$$

where $D_{root}$ is the degree of the root node, which is equal to $|I| - 1$ (the SLC/E1 in the RFT example), and $IFL_{root} = 1$.
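Equations 1 and 2 can be sketched on the same kind of parent-to-child map; again, the assignment of bodies to the level-2 entities is a hypothetical assumption, chosen so that the root degree is 10, matching the ten pathways mentioned in Section 5.4.

```python
# Assumed parent -> direct children map; only the tree shape matters here.
children = {
    "E1": ["E2", "E3", "E4"],
    "E2": ["B1", "B2", "B3"],
    "E3": ["B4"],
    "E4": ["B5", "B6", "B7"],
}

def degree(node):
    """Degree D_i (Def. 15), recursive form: sum over direct children j of (D_j + 1)."""
    return sum(degree(j) + 1 for j in children.get(node, []))

D_ROOT = degree("E1")   # the root's degree equals |I| - 1 in a connected tree

def influence(node):
    """Influence IFL_i = D_i / D_root (Eq. 2); the root has influence 1."""
    return degree(node) / D_ROOT

print(degree("E2"), D_ROOT)   # 3 10
print(influence("E1"))        # 1.0
```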


Table 3: A sample comparison table created after structural equivalence analysis

  k   N_k^F   N_k
  1   1       0
  2   3       1
  3   7       5

5.2 Structural Equivalence Analysis

Structural Equivalence (STE) is defined as a measure of how much of the system is represented in some form, either simulated or non-simulated. With structural equivalence, a process planner can better understand how much of the system is accounted for.

Definition 17 Structural Equivalence: The measure of how many of the entities and bodies in the HPS are represented in simulated or actual form.

$$STE = \frac{1}{K} \sum_{k=1}^{K} \frac{N_k}{N_k^F} \quad (3)$$

where $N_k$ is the number of simulated or actual nodes at level $k$ of the HPS, and $N_k^F$ is the number of nodes in the non-simulated manufacturing system at level $k$, for levels 1 to $K$. Equation 3 can also be written as:

$$STE = \frac{N}{|I|} \quad (4)$$

where $N$ is the current number of nodes. STE is a measure between 0 and 1, where an STE near 1 indicates that much of the system is actual or simulated. The closer the STE is to 1, the more useful any testing and validation is to the final non-simulated system. An STE of 1 means that the system has all entities/bodies that will exist in the final system represented in the current HPS. If the STE is low, a process planner can use this understanding to allocate more resources to increasing the STE, rather than to validation, in the beginning of process deployment.
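As an illustrative sketch, Eq. 3 can be applied to the level counts of the sample comparison table (Table 3):

```python
# Level-by-level node counts from the sample comparison table (Table 3):
# N_k^F for the final system, N_k for the current HPS.
n_final   = {1: 1, 2: 3, 3: 7}
n_current = {1: 0, 2: 1, 3: 5}

K = len(n_final)
ste = sum(n_current[k] / n_final[k] for k in n_final) / K   # Eq. 3
print(round(ste, 3))   # 0.349
```

Note that the level-weighted form of Eq. 3 and the plain node ratio of Eq. 4 coincide only when coverage is uniform across levels; for these counts Eq. 4 would give $6/11 \approx 0.545$.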

The STE can provide an overall look at the current state of deployment for the HPS; however, a more detailed look at the factors that contribute to structural equivalence may be more useful. A comparison table can be used to give a more in-depth view of the current HPS and its relation to the completely actual process. The columns represent the level $k$, the number of nodes in the non-simulated system at level $k$ ($N_k^F$), and the number of nodes in the current HPS at level $k$ ($N_k$). Table 3 is a comparison table for the RFT with an example HPS in the third column. The comparison table presents a bird's-eye view of the HPS from a simulation and deployment perspective. A process planner then has more information to make good decisions about how to proceed with implementation and troubleshooting. The process planner may have information about what equipment has arrived and even what equipment is installed; however, a comparison table is a mix of the information about the installed equipment, the implemented simulations, and where simulated and actual components are in the control hierarchy.


5.3 Leaf Equivalence Analysis

Leaf Equivalence (LE) describes what is physically deployed at the moment based on terminal bodies and entities. Terminal bodies and entities (Def. 14) are used because machines with a physical footprint in a manufacturing process will in most cases be located on the bottom of the control hierarchy; therefore, an analysis on terminal entities and bodies is closely correlated to the physical presence of the system. LE differs from structural equivalence because a completely simulated system could have an STE of 1, but a much lower LE of 0.5.

Fig. 10 illustrates an example where level 3 contains the terminal nodes of the HPS, and thus all bodies on level 3 are terminal bodies. Once the terminal bodies and entities are identified, a Comparison Number for Leaf Equivalence (CNL) can be calculated for all simulated components. The comparison number comes from comparing the essences of the simulated body or entity in the HPS to its corresponding non-simulated form. Effects are not included because their comparisons are captured in the analysis of the pathways between entities and bodies. Equation 5 shows how the comparison number is calculated.

Definition 18 Leaf Equivalence: The measure of how many of the terminal entities and bodies (nodes) in the HPS are represented in simulated or actual form.

$$CNL_i = \begin{cases} 1 & \text{if Presence = actual} \\ \frac{ESS_i}{ESA_i} & \text{if Presence = simulated} \\ -1 & \text{if Presence = non-present} \end{cases} \quad (5)$$

Let $TN = \{\text{nodes with } D_i = 0\}$.

$$LE = \frac{1}{|TN|} \sum_{j \in TN} CNL_j \quad (6)$$

where, for node $i$, $ESA_i$ and $ESS_i$ are the number of essences in the actual entity/body and the number of essences in the simulated entity/body, respectively. In this paper, essences and effects within a domain are treated as all present or all absent; thus, $ESA$ is almost always 2. $ESA$ could be 1 for cases when a component only has a digital existence, such as a controller, or a mechanical component that performs a physical task and has no need for data essence. $CNL$ will be 0.5 in the vast majority of simulated components; however, $CNL$ can equal 1 if both the physical and digital domain essences are simulated.

The LE is always between -1 and 1, and an LE of less than 0 means that the majority of the components are non-present. An LE of 0 means that there are an equal number of non-present and simulated or actual essences. An LE greater than 0 means that the majority of the essences are represented in some way.
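Equations 5 and 6 can be sketched as follows; the mix of terminal nodes is hypothetical, chosen to exercise all three presence values.

```python
def cnl(presence, ess_sim=0, ess_act=2):
    """Comparison number for leaf equivalence of one terminal node (Eq. 5)."""
    if presence == "actual":
        return 1.0
    if presence == "simulated":
        return ess_sim / ess_act   # typically 1/2: digital-domain essence only
    return -1.0                    # non-present

# Hypothetical HPS: one actual node, one digital-only simulation, and one
# node not yet present in any form.
terminals = [cnl("actual"), cnl("simulated", ess_sim=1), cnl("non-present")]
le = sum(terminals) / len(terminals)   # Eq. 6
print(round(le, 3))   # (1 + 0.5 - 1) / 3 -> 0.167
```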

5.4 Communication Equivalence Analysis

Communication Equivalence (CE) focuses analysis on the pathways of the process. Fig. 10 shows different pathways at different levels. Pathways are signified by the child node of a parent-child pair; therefore, in Fig. 10 there is a B1 pathway, a B2 pathway, an E2 pathway, and so on. With the exception of the root node, every connected node has a pathway. The system illustrated in Fig. 10 has ten ($= |I| - 1$) pathways. The number of pathways is also equal to the degree of the root node ($D_{root}$). Multiple pathways can be implemented on a single network, such as DeviceNet or Profibus.

Calculation of CE is similar to LE, in that CE requires that each pathway in the HPS be compared to the pathway in the non-simulated manufacturing system to attain a Comparison Number for Pathway Equivalence (CNP). Unlike CNL, however, CNP includes both essence and effect to capture both the physical structure (wires and connections) as well as the information used in communication.

Let $I_{-root} = \{\text{all nodes except the root node}\}$.

Definition 19 Communication Equivalence: The measure of how much of the communication infrastructure is represented and whether it is physically present.

$$CNP_i = \begin{cases} 1 & \text{if Presence = actual} \\ 0.5\left(\frac{ESS_i}{ESA_i} + \frac{EFS_i}{EFA_i}\right) & \text{if Presence = simulated} \\ -1 & \text{if Presence = non-present} \end{cases} \quad (7)$$

$$CE = \frac{1}{|I| - 1} \sum_{j \in I_{-root}} CNP_j \quad (8)$$

where $CNP_i$ is the comparison number for the pathway to node $i$ from its parent, and $EFA_i$ and $EFS_i$ are the number of effects in the actual entity/body for node $i$ and the number of effects in the simulated entity/body for node $i$, respectively. The denominator is $|I| - 1$ because the root node ($i = 1$) has no parent. The CE is between -1 and 1, and corresponds to the same conclusions as LE but only pertains to the communication infrastructure.
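A matching sketch for Eqs. 7 and 8, with a hypothetical set of pathways:

```python
def cnp(presence, ess_sim=0, ess_act=1, eff_sim=0, eff_act=1):
    """Comparison number for one pathway (Eq. 7): essences and effects both count."""
    if presence == "actual":
        return 1.0
    if presence == "simulated":
        return 0.5 * (ess_sim / ess_act + eff_sim / eff_act)
    return -1.0   # non-present

# Hypothetical HPS: two actual pathways, one simulated pathway carrying
# only its digital half (1 of 2 essences, 1 of 2 effects), one missing.
paths = [cnp("actual"), cnp("actual"),
         cnp("simulated", 1, 2, 1, 2), cnp("non-present")]
ce = sum(paths) / len(paths)   # Eq. 8
print(ce)   # (1 + 1 + 0.5 - 1) / 4 = 0.375
```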

5.5 Equivalence Analysis Discussion

Pre-analysis and the equivalences not only provide information about the HPS in reference to how close it is to the non-simulated system, but also provide information about the system as a whole. Pre-analysis done on the non-simulated manufacturing system can help determine how the setup should be brought online, and where resources should be allocated. For example, if pre-analysis identifies an entity with both a high degree and influence, the process planner can allocate resources early to this entity and its corresponding division. Equivalence analysis can also help outline an overall plan for deployment. During a manufacturing system's deployment, structural equivalence will be the first to increase while leaf equivalence will be the last. Communication equivalence will complement both structural equivalence information as well as leaf equivalence information by quantitatively giving the status of the connectivity of the system as a whole. During deployment the equivalences can be used to quantitatively track progress. The tracking can be recorded across different deployments and used to optimize a strategy for deployment. An example of this comparison would be the deployments outlined in Figs. 2 and 3. Even though the equivalence metrics are the same for cases a and f, they are different for b through e. If these cases represented two similar systems, then some conclusions could be made depending on which deployment was faster and cheaper. For large OEMs, equivalence analysis can also be used to compare multiple different deployment sites, aiding in company-wide strategic allocation of resources. This enables quick, educated decisions to be made from a non-local perspective based on an objective analysis.

It is important to note that the information gained from equivalence analysis does not provide the complete picture for deployment status and resource allocation. There is an almost infinite number of factors that could also affect either a process planner's or a large OEM's decisions about deployment. The equivalence analysis does, however, provide a simple, objective view of progress during deployment using the HPS approach.

6 Application

In this section, we apply the developed equivalence analysis to two separate scenarios. The first scenario is an HPS created on the RFT, and the second test case will be a manufacturing test setup put forth by [15]. Each example represents a snapshot of one of the deployment processes described in Section 3.

6.1 RFT HPS

The RFT is located in the Engineering Research Center for Reconfigurable Manufacturing Systems at the University of Michigan. The RFT is designed to make small trains out of two pieces of wax, with the letter M carved on the front. The RFT's physical components include two robots, four CNC machines, and one conveyor, which make up the serial-parallel line (Fig. 11 shows a photo of the serial-parallel line). The robots and CNCs are equally divided into two cells (Cell 1 and Cell 2) (Fig. 10a), which are coordinated by the System Level Controller (SLC). The cells themselves are in series but the processing within each cell can be done in parallel.

Fig. 11: A photo of the serial-parallel line. Cell 2 is on the left and Cell 1 is on the right.


(a) (b)

Fig. 12: Graphs of the equivalences for the RFT component deployment example, where Ta represents the time when the simulated supply cell is introduced, prior to which the supply cell is non-present. Tb marks the time when the actual component is installed.

The RFT test scenario represents a process that is initially a completely actual manufacturing system in need of modification due to some malfunctioning or outdated component. In this scenario, a simulated cell/component is integrated into the existing process, causing it to become an HPS. This scenario is a snapshot of the component deployment process described in Section 3. Fig. 12 contains two graphs of what the equivalences would look like in this deployment.

Fig. 13: A screenshot of the simulated supply cell visualization

The supply cell addition is a scenario that includes the addition of a simulated overhead gantry (Fig. 13). The overhead gantry works as a supply cell to the rest of the RFT (Fig. 14a). Testing the system with the simulated supply cell did not require the use of both cell 1 and cell 2; therefore, only cell 1 was used (Fig. 14a).

6.1.1 RFT Equivalence Analysis

The first step in preliminary analysis is to translate Fig. 14a into the same form as Fig. 10b, where the levels and divisions are specified, and the degrees and influences are calculated. Fig. 15 shows the results of the preliminary analysis (Fig. 15a). Table 4 shows the resulting comparison table.

The structural equivalence can be calculated from Equation 4, where $N = 9$ and $|I| = 9$. The values for the levels are listed in Table 4.



Fig. 14: A directed graph of the version of the RFT used in the HPS (14a)


Fig. 15: The preliminary analysis of the HPS used in testing (15a)

Table 4: Comparison table of the RFT HPS

  k   N_k^F   N_k
  1   1       1
  2   4       4
  3   8       8

The STE changes during the deployment process. When the plan for adding the cell is first begun, the supply cell is completely non-present, making the STE 0.78. After the simulation of the supply cell is installed, the STE then becomes 1. Time Ta on Figs. 16a and 16b corresponds to the time when the simulation of the supply cell is installed.

Leaf equivalence is determined by first calculating the comparison number for all terminal/leaf nodes (CNL). All of the bodies are terminal nodes in the RFT, and Table 5 lists all of the terminal bodies along with their CNL (Equation 5). If the component does not have any of the essences or effects of its actual counterpart, it is listed as not simulated.

The leaf equivalence is 0.9, which indicates that the majority of the process is in place with both simulated and actual components.

The CNPs for the pathways are calculated in Table 6. The communication equivalence as defined in Equation (8) is determined to be 0.88. Pathways from the System Level Controller (E1) to the supply cell controller (E3), and from the supply cell controller to the gantry (B4), are simulated.
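The reported values can be recomputed from the comparison numbers listed in Tables 5 and 6 (the node labels below follow those tables as printed; this is only a cross-check of the arithmetic, not new data):

```python
# CNL values for terminal bodies (Table 5) and CNP values for pathways (Table 6).
cnl = {"B4": 0.5, "B5": 1, "B6": 1, "B7": 1, "B8": 1}
cnp = {"E3": 0.5, "E4": 1, "E5": 1,
       "B4": 0.5, "B5": 1, "B6": 1, "B7": 1, "B8": 1}

le = sum(cnl.values()) / len(cnl)   # Eq. 6 -> 0.9
ce = sum(cnp.values()) / len(cnp)   # Eq. 8 -> 0.875, reported as 0.88
print(round(le, 2), round(ce, 2))
```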


Table 5: The RFT's HPS terminal bodies and their calculated CNL

Supply Cell Gantry ($CNL_{B4} = 0.5$)
  Physical Domain:  Spatial Essence: not simulated
                    Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Gantry simulation program
                    Procedural Effect: Workpiece data generation

Conveyor ($CNL_{B8} = 1$)
  Physical Domain:  Spatial Essence: The physical conveyor
                    Newtonian Effect: Workpiece transport
  Digital Domain:   Data Essence: Conveyor control program
                    Procedural Effect: Workpiece location data change

Cell 1 CNCs ($CNL_{B5}$ and $CNL_{B7} = 1$)
  Physical Domain:  Spatial Essence: The physical CNC
                    Newtonian Effect: Cutting of the workpiece
  Digital Domain:   Data Essence: CNC control program
                    Procedural Effect: Workpiece process status change

Cell 1 Robot ($CNL_{B6} = 1$)
  Physical Domain:  Spatial Essence: The physical robot
                    Newtonian Effect: Loading and unloading of the workpiece into the CNCs
  Digital Domain:   Data Essence: Robot control program
                    Procedural Effect: Workpiece location data changed

Table 6: The pathways within the HPS and their comparison numbers

UDP Pathway (CNP_E3 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Part request commands;  Procedural Effect: Transported commands

OPC Pathways (CNP_E5 and CNP_E4 = 1)
  Physical Domain:  Spatial Essence: Ethernet cable;  Newtonian Effect: Ethernet OPC signal
  Digital Domain:   Data Essence: Conveyor and cell control commands;  Procedural Effect: Transported commands

UDP Pathway (CNP_B4 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Part transport request;  Procedural Effect: Transported commands

DeviceNet Pathway (CNP_B8 = 1)
  Physical Domain:  Spatial Essence: DeviceNet wiring;  Newtonian Effect: DeviceNet signals
  Digital Domain:   Data Essence: Stop control commands;  Procedural Effect: Transported commands

DeviceNet Pathways (CNP_B5, CNP_B6, and CNP_B7 = 1)
  Physical Domain:  Spatial Essence: DeviceNet wiring;  Newtonian Effect: DeviceNet signals
  Digital Domain:   Data Essence: Cell control commands;  Procedural Effect: Transported commands
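Assuming that Equations (6) and (8) simply average the comparison numbers over all terminal nodes and pathways, which is consistent with the values reported above, the RFT figures can be reproduced with a short sketch (the variable names are illustrative, not from the paper):

```python
# Sketch of the RFT equivalence arithmetic, assuming Equations (6) and (8)
# average the comparison numbers over terminal nodes and pathways.
# CNL and CNP values are taken from Tables 5 and 6.

cnl = {"B4": 0.5, "B5": 1.0, "B6": 1.0, "B7": 1.0, "B8": 1.0}  # terminal bodies
cnp = {"E3": 0.5, "E4": 1.0, "E5": 1.0,                        # pathways
       "B4": 0.5, "B5": 1.0, "B6": 1.0, "B7": 1.0, "B8": 1.0}

leaf_equivalence = sum(cnl.values()) / len(cnl)   # 0.9
comm_equivalence = sum(cnp.values()) / len(cnp)   # 0.875, reported as 0.88
```

The single simulated gantry (CNL_B4 = 0.5) and the two simulated UDP pathways are what pull the averages below 1.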


6.1.2 RFT Equivalence Analysis Summary

A structural equivalence of 1 means that no component in the system has a non-present attribute. If this information is coupled with the leaf equivalence of 0.9, it can be concluded that there are both simulated and actual components in the HPS.

The communication equivalence, which is 0.88, means that the communication infrastructure is generally at the same point of installation as the other components. A manufacturing process planner can deduce from the equivalences that there is very little installation remaining, and that a good deal of the actual components are in place and connected. The process planner can now shift some resources from installation (if they haven't already) into validation. A structural equivalence of 1 and a communication equivalence greater than 0.5 mean that system-wide validation is possible.
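The planner's rule of thumb at the end of this summary can be encoded directly; the function below is an illustrative sketch of that heuristic, not a formula from the paper:

```python
def system_wide_validation_possible(ste: float, ce: float) -> bool:
    """Heuristic from the text: a structural equivalence of 1 (every
    component represented in some form) combined with a communication
    equivalence above 0.5 makes system-wide validation possible."""
    return ste == 1 and ce > 0.5

# For the RFT example: STE = 1, CE = 0.88, so validation can begin.
print(system_wide_validation_possible(1, 0.88))  # True
```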

6.2 Manufacturing System Emulation Environment

The Manufacturing System Emulation Environment (MSEE) put forth by [15], like the RFT HPS, is a manufacturing process with both simulated and actual components. Their implementation approach is very similar to the HPS implementation approach described in [12], with a few exceptions. [15] makes a distinction between simulation and emulation, where "manufacturing system simulation means to create certain conditions of manufacturing systems by means of models", and "a manufacturing system emulator is a device or piece of software that enables a program or an item of equipment intended for one type of computer or equipment to be used with exactly the same results with another type of computer or equipment". Their definition of an emulator is the same as the simulated components described in an HPS. The MSEE approach also makes use of a middle layer that allows communication between all involved simulated and actual components. Fig. 17 from [15] shows the layout of the MSEE system.

In this paper, the MSEE example will represent a snapshot of the system-wide deployment process. Fig. 16 illustrates how the complete deployment process might look. The installation procedure in this case dictates that the CE lag behind the LE for the entire deployment. The graph also illustrates how the STE becomes 1 (at time Ta) when all components are simulated or actual. After time Ta, CE and LE progress toward 1 until the process has all actual components and connections.

Simulations in an HPS are meant to connect to the system in the same way as the components they represent, whereas emulators in an MSEE communicate through a software middle layer called ORiN (Open Resource interface for the Network / Open Robot interface for the Network). All communication takes place through ORiN, even between actual components. Their case study setup consists of a robot, a tester, a palettizer, and a conveyor as emulators, and a robot controller, a bar code reader, and a patlite (warning sign indicator) as actual equipment. There are also four systems present: a production control system, a manufacturing cell emulator, a soft-wiring system, and a working monitoring system.

The soft-wiring system runs on ORiN and takes the place of the hard wiring that would normally exist between controllers and equipment. The soft-wiring system enables communication between real and emulated components. The manufacturing cell emulator enables all of the actual and emulated components to function as if there is material flow. Because parts of the system are emulated, there can be no physical material flow; therefore, the synchronization of all processes requires the material flow to be emulated.

The following is quoted directly from [15] to describe the function of the MSEE:



Fig. 16: Graphs of the equivalences for the MSEE component deployment example, where Ta signifies the point in time represented by the example. LE and CE are on the same graph to show how CE can lag behind LE.

Fig. 17: System Structure of the case study done in [15].

1. The barcode reader receives production order information by a Kanban.
2. The production control system receives the production order information via ORiN.
3. The production control system indicates the need to investigate the quantity of a product along with the production order information toward the PLC via the soft-wiring system.
4. The assigned product is picked up and moved to the conveyor by the palettizer in the emulator. These behaviors are controlled by the PLC via the soft-wiring system.
5. The assigned product is translated to a position in front of the robot by the conveyor in the emulator. This behavior is controlled by the PLC via the soft-wiring system.
6. The assigned product is picked up and moved to the tester by the robot. These behaviors are controlled by the PLC and the robot controller via ORiN.
7. The assigned product is investigated by the tester in the emulator.


Fig. 18: The directed graph of the MSEE system.

Fig. 19: The preliminary analysis of the MSEE

8. Results of this investigation in the emulator are sent to the PLC via the soft-wiring system. Then the control panel displays the results via the PLC. The patlite shows warning signs along the results via the PLC.
9. The assigned product is picked up and moved from the tester to the conveyor by the robot. These behaviors are controlled by the PLC and the robot controller via ORiN.
10. The assigned product is translated to a position to the end of conveyor by the conveyor in the emulator. This behavior is controlled by the PLC via the soft-wiring system.
11. The assigned product after evaluation is eliminated in the emulator.

For explanation purposes we will assume both the patlite and the robot controller receive commands from the PLC via ORiN, and the conveyor, tester, and palettizer receive commands from the PLC via the soft-wiring system on ORiN. Fig. 18 shows the directed graph of the system as interpreted by the authors of this paper. The bar code reader is left out of the hierarchy because it only provides communication between Kanban and the Production Control System (PCS).

6.2.1 MSEE Equivalence Analysis

The first step in preliminary analysis is to translate Fig. 18 into the same form as Fig. 10b, where the levels and divisions are specified, and the degrees and influences are calculated. Fig. 19 shows the results of the preliminary analysis, and Table 7 shows the resulting comparison table.

Structural equivalence can be calculated using Equation (4), where K = 4 is the number of levels and the values for the divisions are listed in Table 7. The structural equivalence is


Table 7: Comparison table of the MSEE

  k | NF_k | N_k
  1 |  1   |  1
  2 |  1   |  1
  3 |  5   |  5
  4 |  1   |  1
Table 8: The terminal bodies and their calculated comparison numbers

Conveyor (CNL_B1 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Production control program;  Procedural Effect: Emulated product transportation

Tester (CNL_B2 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Tester emulation in the cell emulator;  Procedural Effect: Product investigation results

Palettizer (CNL_B3 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Palettizer emulation in the cell emulator;  Procedural Effect: Emulated product placement on the emulated conveyor

Patlite (CNL_B4 = 1)
  Physical Domain:  Spatial Essence: The physical patlite;  Newtonian Effect: Warning light signals
  Digital Domain:   Data Essence: None;  Procedural Effect: None

Robot (CNL_B5 = 1)
  Physical Domain:  Spatial Essence: The physical robot;  Newtonian Effect: Movement of the product
  Digital Domain:   Data Essence: None;  Procedural Effect: None

1 because all of the bodies and entities in the non-simulated system are also present in the current setup.
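Under the assumption that Equation (4) averages the per-level ratios NF_k/N_k from Table 7, this result can be checked with a short sketch (the variable names are illustrative, not from the paper):

```python
# Sketch of the MSEE structural equivalence, assuming Equation (4) averages
# the per-level ratios of entities/bodies present in some form (NF_k) to
# those in the final, non-simulated system (N_k). Values from Table 7, K = 4.

nf = [1, 1, 5, 1]  # entities/bodies represented (simulated or actual) per level
n  = [1, 1, 5, 1]  # entities/bodies in the final non-simulated system per level

ste = sum(f / t for f, t in zip(nf, n)) / len(n)
print(ste)  # 1.0: every component is represented in some form
```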

Leaf equivalence is determined by first calculating the CNL for all terminal bodies and entities. In this example, all of the bodies are terminal nodes in the MSEE. Table 8 lists all of the terminal bodies along with their comparison numbers (Equation 7). If the emulated component of the body does not have an essence or effect that the actual component has, it is listed as not simulated. If neither the emulated nor the actual component has the essence or effect, it is listed as none.

The patlite and the robot both have CNL = 1 because they are in their actual form. The other three components are all emulations and so only exist in the digital domain, where their actual counterparts would exist in both domains. From Equation 6, we see that the leaf equivalence is 0.7.

The CNP for pathway equivalence is calculated in Table 9. The communication equivalence as defined in Equation 8 is determined to be 0.5. Both ORiN and the soft-wiring system transport the same commands as the hardwired system would; however, the actual hard wiring is not present, nor is the same electrical signal that relays the message. The


Table 9: The pathways within the MSEE and their comparison numbers

Soft-wiring system Pathway (CNP_E2 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Order commands and inventory inquiries;  Procedural Effect: Transported commands

ORiN Pathway (CNP_E3 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Robot task commands;  Procedural Effect: Transported commands

ORiN Pathway (CNP_B5 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Robot motion commands;  Procedural Effect: Transported commands

Soft-wiring system Pathways (CNP_B1, CNP_B2, and CNP_B3 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Cell control commands;  Procedural Effect: Transported commands

ORiN Pathway (CNP_B4 = 0.5)
  Physical Domain:  Spatial Essence: not simulated;  Newtonian Effect: not simulated
  Digital Domain:   Data Essence: Light commands;  Procedural Effect: Transported commands

CE does capture that the hardwired infrastructure is not present, but it does not capture that the soft-wiring system is perhaps a better simulation of the communication than the ORiN system alone.
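The MSEE equivalences follow the same averaging arithmetic as the RFT example, again assuming Equations (6) and (8) average the comparison numbers from Tables 8 and 9 (variable names are illustrative):

```python
# Sketch of the MSEE leaf and communication equivalences, assuming
# Equations (6) and (8) average the comparison numbers from Tables 8 and 9.

cnl = {"B1": 0.5, "B2": 0.5, "B3": 0.5, "B4": 1.0, "B5": 1.0}  # terminal bodies
cnp = {"E2": 0.5, "E3": 0.5, "B1": 0.5, "B2": 0.5,             # pathways
       "B3": 0.5, "B4": 0.5, "B5": 0.5}

leaf_equivalence = sum(cnl.values()) / len(cnl)   # 0.7
comm_equivalence = sum(cnp.values()) / len(cnp)   # 0.5
```

Because every pathway runs through ORiN or the soft-wiring system, all CNP values sit at 0.5, which is why the communication equivalence lands exactly at 0.5.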

6.2.2 MSEE Equivalence Analysis Summary

From the perspective of the process planner, a structural equivalence of 1 means that the system is completely accounted for in some way, be it actual or emulated. The leaf equivalence of 0.7 means that, of the components that make up the physical presence of the system, 70% of their essences are in place. If the leaf equivalence is compared to the structural equivalence, it becomes apparent that the system contains simulated components, because there are no non-present entities (STE = 1). The communication equivalence means that 50% of the communication infrastructure's essences and effects are in place. A communication equivalence of 50% by itself means that all of the pathways could be represented in some way. If the STE is then factored in, it is reasonable to say that the entire system has communication capability, and system-wide validation is possible. Additionally, some of the physical equipment that makes up the system has arrived and is installed, and the equipment that is not physically installed is simulated. The communication equivalence also suggests that component-interfacing issues may arise later, because the communication infrastructure contains a large amount of simulation.


7 Conclusion and Future Work

In this paper we have put forth an equivalence analysis methodology for a manufacturing system in its deployment phase using the HPS approach. The analysis results in a quantitative description of an HPS's current composition as compared to its final non-simulated form. A process planner can use the results of an equivalence analysis to form a concise understanding of an HPS's deployment progress.

Because the HPS approach to deployment requires a different perspective from the ones popularly employed for pure simulation, equivalence analysis is developed to create a framework for quantitatively describing form. The process planner can use this information about form to make decisions about resource allocation during process deployment.

Future work should include using the equivalences to develop methodologies for optimizing deployment. The quantitative description gained from applying the equivalence analysis provides an objective means to track and compare similar deployment strategies, and to explore their possible mathematical correlations. Future work could also explore the application of this methodology and analysis to other domains outside of manufacturing.

Acknowledgements The authors would like to thank C. Yuan and F. Gu from General Motors Global Research and Development, as well as J. Moyne from the University of Michigan.

References

1. Fidelity, Tech. report, VV&A RPG Special Topic, 2000.
2. P. Andreas, J. Fransoo, and A. Kok, What determines product ramp-up performance?: a review of characteristics based on a case study at Nokia Mobile Phones, Tech. report, Technische Universiteit Eindhoven, 2007.
3. Marko Bacic, Two-port network modelling for hardware-in-the-loop, Proceedings of the American Control Conference, 2007.
4. J. Banks, J. Carson, B. Nelson, and D. Nicol, Discrete-event system simulation, Prentice Hall, Inc., 2001.
5. C. Boer, A. de Bruin, and A. Verbraeck, A survey on distributed simulation in industry, Journal of Simulation 3 (2009), 3–16(14).
6. J. Dahmann, R. Fujimoto, and R. Weatherly, The department of defense high level architecture, WSC '97: Proceedings of the 29th Conference on Winter Simulation (Washington, DC, USA), IEEE Computer Society, 1997, pp. 142–149.
7. R. Diogo, C. Vicari, E. Loures, M. Busetti, and E. Santos, An implementation environment for automated manufacturing systems, Proceedings of the 17th World Congress for the International Federation of Automatic Control, 2008.
8. R. Fujimoto, Parallel simulation: parallel and distributed simulation systems, WSC '01: Proceedings of the 33rd Conference on Winter Simulation, 2001, pp. 147–157.
9. C. Grimard, J.H. Marvel, and C.R. Standridge, Validation of re-design of a manufacturing work cell using simulation, Proceedings of the 2005 Winter Simulation Conference, 2005, 6 pp.
10. D. Gross, Report from the fidelity implementation study group, Tech. report, Fall Simulation Interoperability Workshop, 1999.
11. H. Hanisch, A. Lobov, M. Lastra, R. Tuokko, and V. Vyatkin, Formal validation of intelligent-automated production systems: towards industrial applications, International Journal of Manufacturing Technology and Management 8 (2006), 75–106.
12. W. Harrison, D. Tilbury, and C. Yuan, From hardware-in-the-loop to hybrid process simulation: An ontology for the implementation phase of manufacturing systems, submitted (http://dl.dropbox.com/u/3267895/Equivalence.pdf) (2010).
13. E. Henderson, How real is virtual real?, Colloquium on Virtual Reality Person Mobile and Practical Applications, 1998.
14. H. Hibino, T. Inukai, and Y. Fukuda, Efficient manufacturing system implementation based on combination between real and virtual factory, International Journal of Production Research 44 (2006), no. 18, 3897–3915.
15. Hironori Hibino and Yoshiro Fukuda, Emulation in manufacturing engineering processes, WSC '08: Proceedings of the 40th Conference on Winter Simulation, 2008, pp. 1785–1793.
16. Sanjay Jain, Frank Riddick, Andreas Craens, and Deogratias Kibira, Distributed simulation for interoperability testing along the supply chain, WSC '07: Proceedings of the 39th Conference on Winter Simulation, 2007, pp. 1044–1052.
17. S. Kanai, T. Miyashita, and T. Tada, A multi-disciplinary distributed simulation environment for mechatronic system design enabling hardware-in-the-loop simulation based on HLA, International Journal on Interactive Design and Manufacturing 1 (2007), 175–179.
18. Deogratias Kibira and Charles R. McLean, Generic simulation of automotive assembly for interoperability testing, WSC '07: Proceedings of the 39th Conference on Winter Simulation, 2007, pp. 1035–1043.
19. P. Knepell and D. Arangno, Simulation validation, IEEE Computer Society Press, 1993.
20. A. Matta, M. Tomasella, and A. Valente, Impact of ramp-up on the optimal capacity-related reconfiguration policy, International Journal of Flexible Manufacturing Systems 19 (2007), 173–194.
21. C. McLean, S. Jaen, A. Craens, and D. Kibira, A virtual manufacturing environment for interoperability testing, Proceedings of the Delft University of Technology Conference, 2007.
22. Charles McLean, Sanjay Jain, Frank Riddick, and Y. Tina Lee, A simulation architecture for manufacturing interoperability testing, SCSC: Proceedings of the 2007 Summer Computer Simulation Conference, 2007, pp. 601–608.
23. J. Moyne, J. Korsakas, and D. Tilbury, Reconfigurable factory testbed RFT: A distributed testbed for reconfigurable manufacturing systems, Proceedings of the Japan-USA Symposium on Flexible Automation, 2004.
24. D. Pace, Modeling and simulation verification and validation challenges, Johns Hopkins Applied Technical Digest 25 (2004), 163–172.
25. D. Pace and J. Sheehan, Peer use in M&S V&V, Tech. report, The Foundation for V&V in the 21st Century Workshop, 2002.
26. L. Pedro, A. Dias, L. Massaro, and G. Caurin, Dynamic modelling and hardware-in-the-loop simulation applied to a mechatronic project, Proceedings of COBEM 2007 (19th International Congress of Mechanical Engineering), 2007.
27. J. Rumbaugh and M. Blaha, Object-oriented modeling and design with UML, Alan Apt, 2005.
28. G. Shao, S. Leong, and C. McLean, Simulation-based manufacturing interoperability standards and testing, Proceedings of the 9th International Conference on Progress of Machining Technology, 2009.
29. D. Thomas, A. Joiner, Wei Lin, M. Lowry, and T. Pressburger, The unique aspects of simulation verification and validation, IEEE Aerospace Conference, 2010, pp. 1–7.