The Journal of Systems and Software 85 (2012) 1953– 1967
Using enterprise architecture and technology adoption models to predict application usage

Per Närman ∗, Hannes Holm, David Höök, Nicholas Honeth, Pontus Johnson
Industrial Information and Control Systems, the Royal Institute of Technology, Osquldas väg 12, 100 44 Stockholm, Sweden
a r t i c l e i n f o

Article history:
Received 5 June 2011
Received in revised form 8 January 2012
Accepted 15 February 2012
Available online 7 March 2012

Keywords:
Enterprise architecture
Architecture analysis
Technology Acceptance Model
Task-Technology Fit
Maintenance management
Computerized Maintenance Management Systems
Metamodel
Application portfolio management
a b s t r a c t
Application usage is an important parameter to consider in application portfolio management. This paper presents an enterprise architecture analysis framework which can be used to assess application usage. The framework, in the form of an architecture metamodel, incorporates variables from the previously published Technology Acceptance Model (TAM) and the Task-Technology Fit (TTF) model. The paper describes how the metamodel has been tailored for a specific domain, viz. industry maintenance management. The metamodel was tested in the maintenance management domain through a survey with 55 respondents at five companies. Data collected in the survey showed that the domain-specific metamodel is able to explain variations in maintenance management application usage. Integrating the TAM and TTF variables with an architecture metamodel allows architects to reuse research results smoothly, thereby aiding them in producing good application portfolio decision-support.

© 2012 Elsevier Inc. All rights reserved.
1. Introduction
Modern organizations have large application portfolios comprising hundreds if not thousands of applications. Despite the very large investments these portfolios represent, realizing their full value often proves to be elusive (Brynjolfsson, 1993). An important problem encountered by most organizations is the uncontrolled proliferation of applications leading to a heterogeneous application portfolio with redundant functions and data, high IT costs and poor business-IT alignment (Ross et al., 2006). Thus, there is a case for structured application portfolio and landscape management (Riempp and Gieffers-Ankel, 2007) to support decision-making concerning changes to the application portfolio. Such rational decision-making calls for means to assess the value of the individual applications (Simon et al., 2010).

Delone and McLean introduced a six-dimension model of information systems value (DeLone and McLean, 1992, 2003). One of these dimensions is system usage. System usage has been found to explain business performance (Devaraj and Kohli, 2003). Similarly, Weill and Vitale (1999) introduced system usage as one of the five important parameters in assessing application portfolio health.

∗ Corresponding author. Tel.: +46 8 790 68 22; fax: +46 8 790 68 39. E-mail address: [email protected] (P. Närman).
doi:10.1016/j.jss.2012.02.035
For the past couple of decades, two theories have reigned supreme in explaining system usage: the Technology Acceptance Model (TAM) (Davis, 1989; Venkatesh and Davis, 2000) and the Task-Technology Fit Model (TTF) (Goodhue and Thompson, 1995; Zigurs and Buckland, 1998). TAM is built around the two constructs perceived usefulness and perceived ease of use, and the TTF model on the idea that a good match of functional capabilities and task requirements leads to higher usage. Although these theories have been shown to explain significant variations in system usage, very little has been published about their application by practitioners (Lee et al., 2003; Benbasat and Barki, 2007); their use has been mostly limited to the information systems research community.

One way of making the TAM/TTF models more easily accessible to practitioners is to integrate them into already existing application portfolio management disciplines. A promising candidate discipline which has been used for application evaluations previously (Gammelgård, 2007; Gustafsson et al., 2009; Addicks and Appelrath, 2010) is enterprise architecture (EA).

EA is a holistic model-based IT and business management discipline (Ross et al., 2006) featuring architecture change planning methods such as the TOGAF Architecture Development Method (ADM) (The Open Group, 2009b), enterprise modeling languages such as ArchiMate (The Open Group, 2009a) as well as methods for analysis of the architecture models (Närman et al., 2011; Iacob and Jonkers, 2004). The use of EA is becoming standard practice in most
large organizations (Van der Raadt et al., 2010; Ross et al., 2006; Burton and Allega, 2010), and many have suggested using EA to support decision-making concerning the application portfolio (Walker, 2007; Optland et al., 2008; Gammelgård, 2007; Pulkkinen, 2006; Buckl et al., 2009a).
However, the current EA approaches to application management rely mostly on a qualitative understanding of the application portfolio, or on quantitative evaluations based on common-sense metrics with little or no theoretical underpinning. Thus, it appears as if integration of the scientifically proven yet seldom used TAM/TTF constructs with the widely used, yet theoretically weak, EA modeling frameworks would be a significant contribution to application portfolio management.
This article attempts to do just that: its main objective is to demonstrate how architecture analysis can be performed by integrating TAM and TTF models into a combined architecture description language and analysis framework based on the ArchiMate language. Using this framework means collecting qualitative data on the application landscape as well as quantitative data from the application's users so as to be able to make an analysis of why certain applications are well-liked and widely used and – more importantly – why others are not.
The architecture description language is expressed in a formalism known as the Hybrid Probabilistic Relational Model (HPRM) (Närman et al., 2010). By employing the HPRM formalism the modeling and analysis are seamlessly integrated, which allows for easy tool implementation, should this be desired.
The TAM model is applicable for all functional domains, but as Dishaw and Strong (1999, 1998) noted, the TTF is a domain-specific model in the sense that there is a need for reference descriptions which operationalize the domain-specific tasks and IT functionality. There are already such TTF operationalizations for, e.g., the computer maintenance domain (Dishaw and Strong, 1998) or group support systems (Zigurs and Buckland, 1998), but these comprise only a small fraction of all available domains. Reference descriptions of tasks, processes or services have been used before in an EA context, see e.g. (Council, 2001; Kelly, 2003; SAP, 2006), but not for TTF analysis.
This paper will demonstrate how such TTF reference models can be developed, validated and integrated into a domain-specific architecture description language and analysis framework. An example of an instantiated, organization-specific architecture model will also be shown and analyzed with respect to system usage. The TTF reference model operationalizations concern the domain of maintenance management for heavy industry. These reference models were developed using a literature study and a qualitative case study, and were validated quantitatively in a survey comprising 55 respondents at five Swedish companies. These reference model operationalizations are also valuable contributions insofar as practitioners can re-use them to predict application usage in the maintenance management domain. For purposes of illustration, we will show an instantiated model for TAM/TTF analysis as well.
In summary, this paper aims to do the following:
1. To integrate TAM/TTF analysis into an architecture description language founded upon the Hybrid Probabilistic Relational Model formalism and the ArchiMate language.
2. To show how a domain-specific architecture description language for the domain of maintenance management can be developed, tested and validated.
3. To show an instantiated, organization-specific model for TAM/TTF analysis.

The remainder of this paper unfolds as follows. The next section goes through some related works followed by Section 3 which
introduces the TAM and TTF models. Section 4 describes the architecture description language, Section 5 goes into detail on the development of the task and functional descriptions, and Section 6 covers the validation of the language, both quantitatively through a survey and qualitatively by showing an instance model. Section 7 discusses the results and Section 8 concludes the paper.
2. Metrics and models for application portfolio management
There are a number of studies addressing the management of application landscapes in one way or another.

The standard ISO 9126 (ISO, 2003), which is now being superseded by ISO 25010 (ISO, 2011), contains widely used application metrics, but there is no apparent validation of the use of the metrics with respect to their impact on application usage, with the exception of the usability attribute which has been shown to positively influence application usage (Roca et al., 2006). Gammelgård (2007) suggested a method for application consolidation which involved a number of metrics and their connections to business benefits. Although the metrics were based on the literature, they were not validated in the sense that their impact on the business side was not investigated. Similarly, Buckl et al. (2009b) proposed a metamodel to support application landscape management but no metrics were suggested.

Riempp and Gieffers-Ankel (2007) propose a number of viewpoints to address concerns of application management, but do not connect them explicitly to theory or offer any explicit validation of the usefulness of the metrics. Simon et al. (2010) propose a process and maturity model for application portfolio management, but do not suggest any concrete metrics. Weill and Vitale (1999) introduce a framework for assessing the health of application portfolios, which includes metrics to assess application usage. These metrics did not involve data collected directly from users, thus making predicting the usage difficult. Van der Raadt et al. (2010) do address the involvement of stakeholders in EA work, but do not suggest any particular method for assessing the quality of the application portfolio. McKeen and Smith (2010) suggest a number of metrics for application portfolio assessment and propose criteria pertaining to user interfaces, but they do not measure user satisfaction nor do they offer any evidence of their metrics correlating with external business impact.
3. The Technology Acceptance Model and Task-Technology Fit
TAM (Davis, 1989) is arguably the most influential theory in information systems research (Lee et al., 2003). The TAM model exists in quite a few different shapes, but the fundamental model posits that the usage of information systems can be explained by two variables: the perceived usefulness (PU) and the perceived ease of use (PEoU) of the information system.

A great number of studies (e.g. Hu et al., 1999; Gefen and Straub, 1997; Chau, 1996; Pavlou, 2003; Mathieson, 1991; Venkatesh and Davis, 2000; Venkatesh, 2000; Karahanna et al., 1999) have established that at least the PU construct is an important variable in determining user intentions to use and thus their actual technology usage (Legris et al., 2003), with the PEoU variable having significant impact on the perceived usefulness as well as a smaller direct relation to intention to use and actual usage (Legris et al., 2003), see Fig. 1.

Another theory for explaining usage of technology is the TTF model (Goodhue and Thompson, 1995). TTF is built on the idea that if the users perceive a technology to have characteristics which fit their work tasks, they are more likely to use the technology and
Fig. 1. The Technology Acceptance Model (Venkatesh and Bala, 2008).
perform their work tasks better (Zigurs and Buckland, 1998; Goodhue, 1998; Dishaw and Strong, 1998; Lee et al., 2007; Gebauer et al., 2010; Ferratt and Vlahos, 1998; Majchrzak et al., 2005).

Dishaw and Strong (1998) used the concept of strategic fit as interaction (Venkatraman, 1989) (meaning multiplication) to operationalize TTF for a specific domain; in their case computer software maintenance. They defined Task-Technology Fit as "the matching of the functional capability of available software with the activity demands of the task", and operationalized task and tool characteristics based on previously published reference models of computer maintenance tasks (Vessey, 1986) and maintenance software tool functionality (Henderson and Cooprider, 1990), see Fig. 2.
Using the same field studies, Dishaw and Strong (1999) also applied an integrated TAM/TTF model which provided greater explanatory power than the individual models taken by themselves, see Fig. 3. Note that this model incorporates the "tool experience" construct as well. Other studies have corroborated that adding the TTF model to the TAM model increases explanatory power, albeit when using slightly different TAM/TTF models (Klopping and McKinney, 2004; Pagani, 2006; Chang, 2008).
The TAM and TTF models are not homogeneous in the sense that all authors include the same constructs in their models. Some authors for instance include the "attitude toward use" variable found in Fig. 3 (Dishaw and Strong, 1999) whereas others exclude it (Venkatesh and Davis, 2000). Some authors include external factors such as the voluntariness of use (Venkatesh and Davis, 2000), and others leave it out. Most authors, but not all, use tool usage as the dependent variable.
The architecture description language which will be described in the next section uses tool usage (tools in the sense of applications) as the dependent variable, together with the PU, PEoU, TTF, tool functionality, and task fulfillment variables, see Fig. 4. These variables need to be operationalized with a number of survey questions per variable in order to be measurable. The questions are then usually assessed on Likert scales by application users. The operationalization of these variables for this particular study is the topic of Section 6.

Fig. 2. The TTF model (Dishaw and Strong, 1999).
Fig. 3. A combined TAM/TTF model (Dishaw and Strong, 1999).
To keep the language as simple as possible, we have omitted some variables often included in the TAM/TTF models, e.g. "voluntariness of use", "intention to use" or "tool experience". This is not unique in any way; for instance, the original TAM model by Davis (1989) did not include any of these extra variables. As for the "voluntariness of use" variable, it will most probably impact usage, but we believe that even mandatory applications with low PU, PEoU and TTF will be used less frequently. We also omit the direct link between PEoU and PU and keep only the PEoU and PU relations to usage. We now proceed to integrate the TAM/TTF constructs with an architecture description language.
4. Integrating TAM/TTF and enterprise architecture
This section integrates the TAM and TTF variables described in the previous section with an enterprise architecture description language based on ArchiMate (The Open Group, 2009a). We will begin by introducing ArchiMate and then a formalism known as the Hybrid Probabilistic Relational Model in which the architecture description language is expressed. The language itself is described in the two last subsections.
Fig. 4. The TAM/TTF constructs employed in this article.
4.1. ArchiMate
The original ArchiMate metamodel contains active structure elements, passive structure elements and behavioral elements. Behavioral elements describe dynamic behavior which can be performed by actors such as IT systems and human beings, which are modeled as active structure elements. The passive structure elements describe what is produced as a consequence of the behavior, such as data. Furthermore, these elements are structured in a hierarchy of three levels: business, application and technology. An example of a business behavioral element is a business process, an example of an application behavioral element is an application service, and an example of a technology active structure element is an infrastructure node, such as a server. An example of an application passive structure element would be a data object.

ArchiMate is a mature enterprise modeling language, but the ArchiMate language does not in itself offer analysis capabilities (although it has been augmented to be used for these purposes previously (Närman et al., 2011; Iacob and Jonkers, 2007)). In order to integrate modeling and analysis we propose using the Hybrid Probabilistic Relational Model formalism, which is introduced in the next subsection.
4.2. Hybrid Probabilistic Relational Models
The Hybrid Probabilistic Relational Model (HPRM) formalism allows integrated modeling and probabilistic analysis of complex phenomena through the merger of entity relation models with hybrid Bayesian networks (Närman et al., 2010). The analysis can be automated using the EAT tool (Ekstedt et al., 2009). HPRMs are slightly extended versions of Probabilistic Relational Models (Friedman et al., 1999) which have been applied for architecture analysis previously (Sommestad et al., 2010; Närman et al., 2011; Lagerström et al., 2010).
An architecture metamodel M describes a set of classes, X = X1, . . ., Xn. Each class is associated with a set of descriptive attributes A(X). Attribute A of class X is denoted X.A and its domain of values is denoted V(X.A). Each class also has a set of reference slots (relationships). The set of reference slots of a class X is denoted R(X). We use X.ρ to denote the reference slot ρ of class X. For example, the class Application Function may have the attribute Application Function.TotalFunctionality and a reference slot Application Function.Realizes whose range is the class Application Service and whose domain is the class Application Function, see Fig. 5. Each reference slot ρ is typed with the domain type Dom[ρ] = Xi and the range type Range[ρ] = Xj, where Xi, Xj ∈ X. A slot ρ denotes a function from Xi to Xj, and its inverse ρ−1 denotes a function from Xj to Xi.
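To make the notation concrete, the following Python fragment mirrors the Fig. 5 example. It is only an illustrative sketch of the class–attribute–slot notation, not part of the paper's framework; the instance names and functionality values are hypothetical:

```python
# Minimal sketch of HPRM classes, attributes and reference slots.
class ApplicationService:
    def __init__(self, name):
        self.name = name
        self.realized_by = []   # holds the inverse slot Realizes^-1

class ApplicationFunction:
    def __init__(self, name, total_functionality, realizes):
        self.name = name
        self.TotalFunctionality = total_functionality  # an attribute X.A
        self.Realizes = realizes                       # reference slot: domain Function, range Service
        realizes.realized_by.append(self)              # keep the inverse slot consistent

svc = ApplicationService("WorkOrderHandling")          # hypothetical instance
f1 = ApplicationFunction("CreateWorkOrder", 4.0, svc)
f2 = ApplicationFunction("CloseWorkOrder", 3.5, svc)

# A slot denotes a function from domain to range; its inverse goes the other way.
assert f1.Realizes is svc
assert [f.name for f in svc.realized_by] == ["CreateWorkOrder", "CloseWorkOrder"]
```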
A probabilistic relational model Π specifies a probability distribution over all instantiations I of the metamodel M. This probability distribution is specified in terms of a hybrid Bayesian network (Lauritzen, 1992) which is formed by a qualitative dependency structure and associated quantitative parameters.
The qualitative dependency structure is defined by associating with each attribute X.A a set of parents Pa(X.A) through so-called attribute relations. Each parent of X.A is defined as X.τ.B where B ∈ A(X.τ) and τ is either empty, a single reference slot ρ, or a sequence of reference slots ρ1, . . ., ρk (called a slot chain) such that for all i, Range[ρi] = Dom[ρi+1]. Sometimes the multiplicities of the metamodel imply that the slot chain X.τ.B references a whole set of attributes rather than a single one. In these cases, we let A depend probabilistically on some aggregate property over those attributes, such as the SUM or the MEAN.

Considering the quantitative dependency, each attribute of the HPRM is seen as a node in a hybrid Bayesian network and thus has a probability distribution which is conditioned on that of its parents. This is expressed in Hybrid Conditional Probability Tables (HCPT), defined as follows (Yuan and Druzdzel, 2007):
Definition 1. For every attribute A(X), its parents Pa(A(X)) are divided into two disjoint sets: discrete parents DPa(A(X)) and continuous parents CPa(A(X)). Then, its HCPT P(A(X) | Pa(A(X))) is a table indexed by its discrete parents DPa(A(X)), with each entry representing one of the following conditional relations:
1. If A(X) is a discrete variable with only discrete parents, a discrete probability distribution;
2. If A(X) is a discrete variable with continuous parents, a discrete probability distribution dependent on CPa(A(X));
3. If A(X) is a continuous and deterministic variable, a deterministic equation dependent on CPa(A(X));
4. If A(X) is a continuous and stochastic variable, a deterministic equation dependent on CPa(A(X)) plus a noise term having an arbitrary continuous probability distribution with parameters determined by the mean and variance of CPa(A(X)) (since it is a stochastic variable).
In the metamodel for TAM/TTF modeling, we will confine ourselves to case 3 above, i.e. continuous relations only.
For instance, and referring again to Fig. 5, the attribute Application Service.Functionality is determined by the attribute Application Function.TotalFunctionality through the attribute relation SUM(Realizes−1.TotalFunctionality).
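Read as code, this attribute relation is a sum over the inverse slot. The sketch below uses hypothetical Application Functions and values:

```python
# Application Service.Functionality = SUM(Realizes^-1.TotalFunctionality):
# sum TotalFunctionality over every Application Function realizing the service.
functions = [  # hypothetical Application Functions
    {"name": "CreateWorkOrder", "TotalFunctionality": 4.0, "Realizes": "WorkOrderHandling"},
    {"name": "CloseWorkOrder",  "TotalFunctionality": 3.5, "Realizes": "WorkOrderHandling"},
]

def service_functionality(service_name, funcs):
    # Navigate the inverse slot Realizes^-1, then apply the SUM aggregate.
    return sum(f["TotalFunctionality"] for f in funcs if f["Realizes"] == service_name)

print(service_functionality("WorkOrderHandling", functions))  # 7.5
```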
4.3. Modeling for Task-Technology Fit analysis
By using the HPRM formalism we are now able to integrate the TAM and TTF variables into an EA metametamodel which forms the basis for the architecture description language. The metametamodel describes the language constructs which new, domain-specific metamodels conform to. These metamodels can in turn be instantiated into organization-specific architecture models (Jouault and Bézivin, 2006). The metametamodel can be found in Fig. 5; below we describe its classes, reference slots and attribute relations. We begin with those related to the TTF analysis. The classes in gray in Fig. 5 are not found in the original ArchiMate metamodel.
Starting from the top, there are Business Processes: groups of activities ordered so as to create customer value. These Business Processes use Application Services, the externally visible application behavior. These are realized by Application Functions, which describe the internal behavior within Application Components. Application Components represent logical encapsulations of functionality.
Task fulfillment in the TTF model is modeled qualitatively as Business Process and quantitatively as the attribute Business Process.TaskFulfillment, which is derived by taking the mean value of task fulfillment assessments by application users. Consequently, the domain of values for this attribute is [a, b], where a and b are positive integers.
The tool functionality is modeled qualitatively as Application Services, which encapsulate behavior being performed by non-human actors. The quantitative assessment of the exposed functionality is modeled as the attribute Application Service.Functionality with the domain [a, b], which is assessed as the sum of the user assessments (using Likert scales) of all Application Function.Functionality.
The Application Functions are used to describe how much functionality is assigned to the respective Application Components. The users will answer a number of survey questions for each Application Service, and for each question, the user is asked to choose which Application Component implements the
Fig. 5. The integrated TAM/TTF metametamodel.

function described in the question the most. For each survey item one Application Function is modeled, and its attribute Application Function.Functionality is the mean of all users' assessments of that particular question. The mean of all of the Application Function.Functionality attributes for one specific Application Component is then summarized in one overarching Application Function which is composed of the underlying functions.
Specifically, the mean value is attributed to the Application Function.TotalFunctionality attribute. In the case one of the composing Application Functions is not present in the Application Component, the Application Function is still
modeled, but its Application Function.Functionality value is set to zero before calculating the mean.
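The survey-to-attribute computation described above can be sketched as follows; the question labels and Likert answers are invented for illustration:

```python
from statistics import mean

# Hypothetical Likert answers (1-7) per survey question; each question
# corresponds to one Application Function of a component.
answers = {
    "q1_create_work_order": [6, 5, 7],
    "q2_close_work_order":  [4, 5, 3],
    "q3_schedule_work":     [],   # function not present in this Application Component
}

# Application Function.Functionality = mean of all users' assessments;
# a function absent from the component is still modeled, with value zero.
function_functionality = {q: (mean(v) if v else 0.0) for q, v in answers.items()}

# Application Function.TotalFunctionality = mean over the composing functions.
total_functionality = mean(function_functionality.values())
print(round(total_functionality, 2))  # 3.33
```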
The business and application behavior represented through Business Process and Application Service respectively need to be represented in the domain-specific metamodels. Furthermore, we posit that the way in which Business Processes use Application Services is approximately constant for each specific domain, making it possible to include the reference slots representing this behavior in the reference models as well.
The TTF variable is an emergent property belonging neither to Business Processes nor Application Services, but rather to the reference slot between them. We cannot place attributes on reference slots in the HPRM formalism and thus have to introduce the placeholder class Process-Service Association, with which we associate the attribute Process-Service Association.TTF. This represents the TTF for each Business Process-Application Service pair and, analogous with (Dishaw and Strong, 1998), this is found as the interaction (Venkatraman, 1989) (i.e. the multiplication of the values of task and functionality) of the variables represented by Business Process.TaskFulfillment and Application Service.Functionality. The domain is consequently [a * a, b * b].
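Written out, the interaction is a single multiplication; the scale endpoints and the scores below are assumed for illustration:

```python
# TTF as interaction: the product of task fulfillment and functionality.
a, b = 1, 7                # assumed Likert scale endpoints [a, b]
task_fulfillment = 5.5     # Business Process.TaskFulfillment (mean of user assessments)
functionality = 6.0        # Application Service.Functionality

ttf = task_fulfillment * functionality   # Process-Service Association.TTF
assert a * a <= ttf <= b * b             # domain [a*a, b*b]
print(ttf)  # 33.0
```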
The Process-Service Association.TTF attribute predicts application usage. The strength of influence needs to be determined empirically for each Process-Service Association. We will return to how such an empirical study can be constructed in Section 6, but suffice it to say that it involves a quantitative study using survey data on TTF and application usage.
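As an illustration of how such a strength could be estimated, the sketch below fits a single-predictor ordinary least squares regression of reported usage on TTF scores. The data are invented; the paper's actual procedure is the one described in Section 6:

```python
# Single-predictor OLS: usage = alpha + beta * TTF, fitted to survey data.
ttf_scores = [12.0, 20.0, 30.0, 42.0, 25.0]   # hypothetical per-respondent TTF
usage = [2.0, 3.5, 4.5, 6.0, 4.0]             # hypothetical self-reported usage

n = len(ttf_scores)
mean_x = sum(ttf_scores) / n
mean_y = sum(usage) / n
# beta = covariance(x, y) / variance(x)
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(ttf_scores, usage)) \
       / sum((x - mean_x) ** 2 for x in ttf_scores)
alpha = mean_y - beta * mean_x
print(round(beta, 3))   # the estimated regression coefficient
```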
The quantitative strength of the influence can be found in the class Usage Relation, which models which Application Components' usages are affected by a particular Process-Service Association.TTF. The attribute Usage Relation.RegressionCoefficient is the regression coefficient (Warner, 2008) showing the (domain-specific) strength of a particular Process-Service Association.TTF on Application Component.Usage, and whose domain is any real number.
The usage of the individual Application Components will be contingent upon their functional contribution to the Process-Service Association.TTF. Thus, we introduce the attribute Usage Relation.Weight which shows the fraction of the total Application Service which is realized by Application Functions assigned to a single Application Component. Thereby it is possible to calculate the degree to which an Application Component affects Process-Service Association.TTF.
Thus, for all Application Services connected to the Process-Service Association.TTF and for all Application Functions connected to both the Application Services and the Application Component, the attribute Usage Relation.Weight divides the sum of the Application Service.Functionality with the sum of the Application Function.Functionality. The domain of Usage Relation.Weight is [0, 1].
The attribute Usage Relation.WeightedTTF is finally arrived at by multiplying the Usage Relation.RegressionCoefficient, Usage Relation.Weight and the Process-Service Association.TTF. Usage Relation.WeightedTTF is then used together with PU and PEoU to predict Application Usage in the form of the attribute Application Component.Usage. More on this below.
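Usage Relation.WeightedTTF is then a plain three-factor product; the values below are invented:

```python
# Usage Relation.WeightedTTF = RegressionCoefficient * Weight * TTF
def weighted_ttf(regression_coefficient, weight, ttf):
    # weight lies in [0, 1]; the coefficient may be any real number
    return regression_coefficient * weight * ttf

print(weighted_ttf(0.5, 0.5, 32.0))  # 8.0
```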
4.4. Modeling perceived usefulness and ease of use
In order to model PU and PEoU we start by modeling user groups through the class Role. These access Application Components through one or several Role-Component Associations. The PU and PEoU variables belong neither to the Role nor to the Application Component, but are emergent properties depending on the
relation between these. Therefore, these properties are modeled as belonging to the Role-Component Association class.
Thus we have the Role-Component Association.PU and the Role-Component Association.PEoU attributes respectively. These attributes are found by taking the mean value of the individual assessments of the interfaces from each actor manning the Roles. These assessments are made using a Likert scale, and the domain for both attributes is therefore [a, b].
As mentioned in the previous section, we hypothesize that it is sufficient to model that PU and PEoU impact usage. First, the strength of the relation between PEoU, PU and usage needs to be quantified as regression coefficients for PU and PEoU with respect to usage. In the model this is reflected as the attributes Application Component.RegressionCoefficientPU and Application Component.RegressionCoefficientPEoU respectively. These need to be determined empirically for each domain.
The attribute Application Component.WeightedTAM is the linear combination of the regression coefficients and the mean of the PU/PEoU attributes. The domain of PU and PEoU is [a, b], whereas the regression coefficients may assume any real number; thus Application Component.WeightedTAM may assume any real number.
Finally, Application Component.Usage is predicted by adding Application Component.WeightedTAM to the sum of the Usage Relation.WeightedTTF values, plus the Application Component.Domain Constant. The constant is taken from the empirically determined linear regression model showing how all variables affect usage. The domain of the constant is any real number. Since we may have negative regression coefficients, it is entirely possible that the model might indicate negative Application Component.Usage, something which could perhaps translate into "anti-usage" of an application. To avoid this highly counterintuitive result we disallow negative values of usage. The domain of Usage is therefore all positive real numbers.
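The prediction described in this section can be condensed into a few lines; all coefficients and scores below are hypothetical, standing in for the empirically determined values of Section 6:

```python
# Application Component.Usage = max(0, WeightedTAM + sum(WeightedTTF) + DomainConstant)
def predicted_usage(reg_pu, pu, reg_peou, peou, weighted_ttfs, domain_constant):
    weighted_tam = reg_pu * pu + reg_peou * peou   # Application Component.WeightedTAM
    raw = weighted_tam + sum(weighted_ttfs) + domain_constant
    return max(0.0, raw)                           # negative usage is disallowed

# Hypothetical inputs: PU/PEoU on a Likert scale, two weighted TTF terms.
print(predicted_usage(0.3, 5.0, 0.1, 4.0, [6.6, 2.0], -1.5))
```

Clamping at zero reflects the paper's choice to restrict the domain of Usage to the positive real numbers.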
The next section will elaborate on the method for translating the metametamodel into a domain-specific metamodel useful for the TTF evaluations of the maintenance management domain.
5. A metamodel for the maintenance domain
The domain-specific metamodel will contain re-usable and operationalized reference descriptions of the Business Processes (task requirements) as well as the Application Services (tool functionality) of the maintenance management domain.
This section commences with a short introduction to the domain of maintenance management within heavy industry. Next, we proceed to show how domain knowledge was condensed into task and functionality reference models which were operationalized to make them useful for TTF analysis. Three hypotheses related to the TTF model for maintenance management are presented, and finally we describe operationalizations and hypotheses for PU, PEoU and system usage.
5.1. Maintenance management
Maintenance refers to all activities enacted to maintain the functional status of items (European Committee For Standardization, 2001). The process of maintenance management refers to all management activities intended to support the maintenance activities (European Committee For Standardization, 2001). These include planning, control and oversight as well as continuous improvement of the maintenance process (Crespo Marquez and Gupta, 2006). Increased product quality requirements, greater emphasis on just-in-time production processes with little or no storage of goods, coupled with lower tolerance for factory downtime, means
that maintenance, and thereby maintenance management, receives more attention in industry (Crespo Marquez and Gupta, 2006).

Computerized Maintenance Management Systems (CMMS) are essential in providing decision support in the maintenance management process. CMMS are usually built around an asset repository which stores data on all of the maintainable assets, including basic data such as type, make, year of installment as well as maintenance and failure data (Brown and Humphrey, 2005). The CMMS functionality revolves around the items and the planning, monitoring and continuous improvement of the maintenance of these items (Kans, 2008).
5.2. Developing the maintenance management metamodel
The metamodel was developed and operationalized iteratively, following recommendations from Gable (1994), using a literature review as a baseline and improving upon the baseline model in a case study featuring several interviews with domain experts.

The generic maintenance process description of the standard BS:EN 13460:2002 (European Committee For Standardization, 2002) constituted the backbone of the maintenance management task descriptions. The IT-related functional descriptions are based on the IEC 61968-1 standard (IEC Technical Committee 57, 2003), which defines functions for CMMS within power distribution companies and has been used as a basis for functional reference models both inside and outside the power industry (Närman et al., 2006, 2007; Gammelgård et al., 2010; Gunaratne et al., 2008).

To add more detail to the models, detailed task activity descriptions were taken from Smit and Slaterus (1992), Johansson (1997) and Woodhouse (2006), and the IT functionality model was augmented with functional descriptions from Patton et al. (1983) and Kans and Ingwald (2009).

Three empirical studies were performed to refine the model. First, and independently of the literature reviews, maintenance engineers at a power company and a manufacturing company were interviewed about their maintenance processes and IT support. The process and IT function descriptions based on these interviews were then compared with the baseline model, and improvements and clarifications were made.

Next, the second iteration of the model was shown to one expert from a CMMS vendor and one maintenance engineer at a manufacturing company. Feedback was collected during telephone interviews with these experts and the model was updated accordingly.

Finally, to ascertain that the task and functionality reference models were useful survey instruments, the reference models were converted into survey questions, which were tested in telephone interviews with maintenance engineers at three power companies and two manufacturers. These tests revealed some flawed questions, which were either changed or removed.
5.3. The maintenance management metamodel
Following Johansson (1997), the maintenance activities and functions were grouped according to the Deming Cycle (Deming, 2000): Plan, Do, Study and Learn. Since the "Learn" process does not concern functionality per se, but rather feedback from the study phase, it did not contain any specific task activities or functions and was removed. The remaining three phases comprised roughly the following functionality:
• Plan functions: Includes setting parameters in the asset repository, defining recurring dates for maintenance and defining conditions that trigger work orders.
• Do functions: Includes functionality for monitoring of work order progress.
• Study functions: Decision support mechanisms such as creating reports.
As for task activities, these were:
• Plan activities: Describes the overall planning of (preventive) maintenance and decision making concerning maintenance work.
• Do activities: Concerns the continuous monitoring of ongoing work orders.
• Study activities: These activities concern the continuous follow-up of quality and cost of the maintenance process.
To be useful for quantitative analysis of survey answers, several questions describe each task and functionality group and thus constitute good measurement instruments. The values of the variables are formed by taking the mean of the answers to the individual items, which are scored on a 1–7 Likert scale in line with previous studies (Dishaw and Strong, 1998, 1999; Venkatesh et al., 2003; Davis, 1989). The final set of questions comprised 11 task activity questions and 14 IT functionality questions distributed across the Application Service and Business Process variables; see Appendix A.
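The variable-formation rule described above (each variable is the mean of its 1–7 Likert items) is straightforward to express in code. The following sketch is illustrative only; the function name and the example answers are hypothetical, not taken from the survey data:

```python
def likert_variable(item_scores):
    """Form a variable score as the mean of its 1-7 Likert item answers."""
    if not item_scores:
        raise ValueError("a variable needs at least one answered item")
    if any(not 1 <= s <= 7 for s in item_scores):
        raise ValueError("Likert answers must lie in 1..7")
    return sum(item_scores) / len(item_scores)

# Hypothetical answers of one respondent to the four "Plan, tasks" items.
plan_task_items = [5, 6, 4, 5]
plan_task_score = likert_variable(plan_task_items)
```

The same aggregation rule applies to every task and functionality variable in Appendix A.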
In analogy with Dishaw and Strong (1998), three hypotheses were formulated regarding Task-Technology Fit and application usage based on the reference model.
Hypothesis 1 (H1). Higher fit between Plan tasks and Plan functions is associated with a higher use of applications. In order to plan the maintenance work, a maintenance engineer needs some IT support to set recurring work orders on assets in the maintenance repository, as well as to convert failure reports into work orders which are assigned to maintenance field crews.

Hypothesis 2 (H2). Higher fit between Do tasks and Do functions is associated with a higher use of applications. In order to follow up on the work actually being performed, a maintenance manager needs a way to see the status of the assigned work orders, which calls for functions to continuously monitor work orders in the maintenance management IT support tool.

Hypothesis 3 (H3). Higher fit between Study tasks and Study functions is associated with a higher use of applications. In order to follow up on the maintenance work, there is a need for report generation functionality and good presentation functionality.
These hypotheses are reflected in the maintenance management specific metamodel, see Fig. 6, by the three Process-Service Associations Plan, Do and Study. The hypotheses state that higher values of one or several of the three Process-Service Association.TTF attributes will lead to a higher Application Component.Usage.

Notice here that the Application Functions are composite objects, and their total functionality is the sum of their composing Application Functions, which correspond to the functionality survey questions found in Appendix A. Although hidden here due to space constraints, these Application Functions could be represented in the metamodel.

The metamodel describes one-to-one relations between the Application Services and the Business Processes. Usually, the Business Processes and Application Services are related in more complex many-to-many relations, and the metametamodel naturally also supports modeling and analysis of such relations.
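The classes involved can be sketched as a small data structure. The class and attribute names follow the metamodel, while the product formula for the TTF attribute is an assumption modeled on Dishaw and Strong's task × technology interaction term (it also yields the 1–49 TTF scale used later in Eq. (1)); many-to-many relations are simply represented by multiple association objects:

```python
from dataclasses import dataclass

@dataclass
class BusinessProcess:
    name: str
    task_fulfillment: float  # mean of task items, 1-7 scale

@dataclass
class ApplicationService:
    name: str
    total_functionality: float  # mean of functionality items, 1-7 scale

@dataclass
class ProcessServiceAssociation:
    process: BusinessProcess
    service: ApplicationService

    @property
    def ttf(self):
        # Assumed task x technology interaction, giving the 1-49 scale.
        return self.process.task_fulfillment * self.service.total_functionality

# Hypothetical scores for the Plan process-service pair.
plan = BusinessProcess("Plan", 5.2)
plan_svc = ApplicationService("Plan service", 4.5)
assoc = ProcessServiceAssociation(plan, plan_svc)
```

A second association between the same service and another process would model a many-to-many relation without any change to the classes.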
Fig. 6. The metamodel for maintenance management.

5.4. Technology Acceptance Model operationalization

The PU and PEoU constructs were operationalized using Swedish translations of the questions found in Venkatesh and Bala (2008). The reliability of the translated questions was tested in a survey with 26 respondents at a medium-sized power utility through Cronbach alpha assessments (Cronbach and Shavelson, 2004; Cronbach, 1951). The study showed high internal consistency scores for both PEoU (α = 0.90) and PU (α = 0.84). No respondents displayed difficulties answering the questions.

Two hypotheses were formulated for the Technology Acceptance Model:

Hypothesis 4 (H4). A higher degree of PU leads to a higher degree of application usage. In the model this means that a higher value of Role-Component Association.WeightedPU leads to a higher value for the Application Component.Usage attribute.

Hypothesis 5 (H5). A higher degree of PEoU leads to a higher degree of application usage. In the model this translates into a higher Role-Component Association.WeightedPEoU attribute value leading to a higher value for the Application Component.Usage.
5.5. Application component usage operationalization
Application Component.Usage was operationalized by three questions for each Application Component that was encountered. First, the respondents were asked to rate the number of hours spent using the Application Component the previous week as well as the coming week. They were also asked to rate the extent to which they use the application on a Likert scale. The first two items were normalized to the Likert scale, and the attribute was then found by taking the mean of all three normalized items.
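A minimal sketch of this operationalization is given below. The paper does not state the exact hours-to-Likert normalization, so the linear mapping and its assumed 40-hour ceiling are our own illustrative choices:

```python
def hours_to_likert(hours, max_hours=40.0):
    """Map weekly usage hours onto the 1-7 Likert range.

    The exact normalization is not given in the paper; this linear mapping
    with an assumed 40-hour ceiling is only an illustration."""
    clipped = min(max(hours, 0.0), max_hours)
    return 1.0 + 6.0 * clipped / max_hours

def usage_attribute(hours_last_week, hours_next_week, extent_likert):
    """Application Component.Usage as the mean of the three normalized items."""
    items = [hours_to_likert(hours_last_week),
             hours_to_likert(hours_next_week),
             float(extent_likert)]
    return sum(items) / len(items)
```

For example, a respondent reporting 20 hours in both weeks and a mid-scale extent rating of 4 would, under this assumed mapping, receive a usage score of 4.0.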
6. Validation
This section describes how the previously presented maintenance management model was validated quantitatively through a survey which tested the five hypotheses. In order to demonstrate the use of the metamodel, this section also contains an instantiated model showing some applications, services and processes from one of the surveyed companies.
6.1. Validating the reference models – a survey
The hypotheses were tested in a study with 79 respondents at five companies. Maintenance engineers, control officers and other
roles actively managing the maintenance process were set as the population of the survey study. The companies and the number of respondents are shown in Table 1.

Table 1
The distribution of the respondents among the five case studies.

Company                 Number of respondents
Power company 1         10
Power company 2         7
Power company 3         22
Nuclear power plant     35
Manufacturing company   5

Before the survey was distributed, a list of the applications used for maintenance management was compiled for each company. These lists comprised between two and six applications per company, including both enterprise applications and MS Office applications. Using the enterprise applications was mandatory to a certain degree (a work order can for instance only be issued using a specific application), but users were usually free to use for instance Microsoft Excel to do workarounds for many of the other maintenance management tasks. The use of Excel is usually voluntary. The surveys were distributed electronically (using the Survey Monkey tool, http://www.surveymonkey.com/) through contact persons at the companies.

The surveys comprised one section per application in which the respondents rated the applications' PU and PEoU (Likert scale 1–7) as well as their usage of the applications (Likert scale 1–7). The final sections dealt with the level of functional implementation in the applications. For each survey question pertaining to the functional area, the respondents were asked to assess which applications implemented this function the most, and to which degree, on a Likert 1–7 scale (Dishaw and Strong, 1998). Finally, the respondents were asked to rate the degree to which they performed the activities from the activity reference model on a Likert 1–7 scale (Dishaw and Strong, 1998, 1999).

Although the contact persons who distributed the surveys were given instructions regarding which respondents to target, some respondents were clearly not part of the target population and therefore excluded. Some did not use any maintenance applications and some did not perform any maintenance work. This was especially prevalent at power company 3, where five out of 22 respondents were discarded for these reasons, and at the nuclear power plant, where six out of 35 respondents were discarded. Furthermore, a few respondents gave incomplete or inconsistent answers, which lowered the useful response rate somewhat. The number of useful respondents for the various variables can be seen in the "Samples" column in Table 2.

Following Dishaw and Strong (1999), the hypotheses were tested using linear regression models and Pearson correlations (Warner, 2008). The analysis was performed using SPSS for Windows.

The results of the TTF hypotheses (H1–H3) can be seen in Table 2. Our tests of Hypothesis 1 showed significant correlation and more variance explained by the prediction model with Plan TTF modeled (change in adj. R2 of 0.012, sig. change of −0.004). However, the opposite change in significance suggests that modeling Plan TTF neither increases nor decreases the accuracy of the model. The Hypothesis 2 variables do not significantly explain utilization. However, the model is more accurate when Do TTF is modeled (change in adj. R2 of 0.043 and sig. change of 0.230). Tests of Hypothesis 3 show significant correlation and a more accurate regression model when Study TTF is modeled (change in adj. R2 of 0.300 and sig. change of 0.027). As all regression models provide higher prediction accuracy when TTF is modeled, the TTF aspects of the model can be seen as valid (Dishaw and Strong, 1999).

Table 2
Models of hypotheses (H1–H3), with or without TTF modeled.

Model            Pearson   Adj. R2   p-Value   Samples
H1 without TTF   –         0.136     0.012     50
Plan Task        −0.304    –         0.024     55
Plan Tech        0.264     –         0.064     50
H1 with TTF      –         0.148     0.016     50
Plan TTF         −0.206    –         0.151     50
H2 without TTF   –         −0.003    0.402     55
Do Task          −0.135    –         0.328     55
Do Tech          −0.134    –         0.329     55
H2 with TTF      –         0.040     0.172     54
Do TTF           0.045     –         0.747     54
H3 without TTF   –         0.102     0.027     51
Study Task       0.289     –         0.032     55
Study Tech       −0.023    –         0.87      52
H3 with TTF      –         0.402     0.000     52
Study TTF        0.591     –         0.0001    51

The results regarding the TAM hypotheses (H4 and H5) can be found in Table 3. Both models show significant correlations and fairly high explanations of the variance of the modeled data. These findings furthermore conform to findings by Dishaw and Strong (1999) and suggest that the TAM aspects of the model are valid. Thus, there are relations between usage (U) and PU/PEoU.

Table 3
Models of hypotheses (H4 and H5).

Model        Pearson   Adj. R2   p-Value   Samples
H4: U-PU     0.439     0.177     0.001     55
H5: U-PEoU   0.239     0.04      0.078     55

The statistics for the complete model can be seen in Table 4. TTF outperforms TAM, and the unified TAM/TTF model outperforms both models, validating the relations in the proposed metamodel.

Table 4
Complete model statistics.

Model             Adj. R2   p-Value   Samples
TAM               0.163     0.004     55
TTF               0.480     0.0001    45
Unified TAM-TTF   0.548     0.0001    45

The reliability of the study was quantitatively assessed through Cronbach alpha (Cronbach and Shavelson, 2004; Cronbach, 1951) tests of all variables. These results can be seen in Table 5. All variables received high scores, suggesting a high internal consistency of the study. Notice here that "samples" refer not to individual respondents, but to the overall number of answers. Since there were several applications, and each respondent rated all of the applications which he or she used with regard to PEoU and PU, the number of samples is considerably higher for TAM than for the TTF model.

Table 5
Cronbach alpha (α) for all variables used in the study.

Variable                α       No. of items   Samples
Perceived usefulness    0.953   4              223
Perceived ease of use   0.886   4              220
Utilization             0.772   3              65
Plan, tasks             0.773   4              69
Plan, technology        0.817   6              35
Do, tasks               0.780   4              69
Do, technology          0.854   5              56
Study, tasks            0.775   3              69
Study, technology       0.804   3              46
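The statistical machinery behind the hypothesis tests, Pearson correlations and least-squares regression, can be illustrated with textbook formulas. The data below are invented, and this is of course a sketch rather than the SPSS analysis the authors ran:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def simple_ols(x, y):
    """Slope and intercept of a one-predictor least-squares regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical TTF (1-49) and usage (1-7) scores for five respondents.
ttf = [12.0, 20.0, 28.0, 35.0, 42.0]
usage = [2.0, 3.0, 4.5, 5.0, 6.5]
r = pearson_r(ttf, usage)
slope, intercept = simple_ols(ttf, usage)
```

A positive `r` and slope would correspond to the pattern hypothesized in H1–H3: higher fit associated with higher usage.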
Thus, there are strong reasons to believe that our model is useful for predicting system usage. Next, we show what an instantiated architecture model for predicting Application Component.Usage might look like.

The regression coefficients determining how usage is affected by the TAM/TTF variables are shown in Eq. (1):

Usage = 1.739 − 0.021 ∗ TTFPlan − 0.006 ∗ TTFDo + 0.048 ∗ TTFStudy + 0.255 ∗ PU − 0.128 ∗ PEoU    (1)

where the TTFPlan/Do/Study variables are on the scale 1–49, and PU/PEoU are on the scale 1–7. These coefficients, which were left blank in the metamodel in Fig. 6, can now be used for usage predictions. In the next subsection we will show an example.
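Applying Eq. (1) is then a one-line computation. The coefficients below are the ones reported in Eq. (1); the input values in the example are hypothetical mid-scale scores, not survey data:

```python
def predicted_usage(ttf_plan, ttf_do, ttf_study, pu, peou):
    """Predicted Application Component.Usage from Eq. (1).

    TTF inputs are on the 1-49 scale; PU and PEoU are on the 1-7 scale."""
    return (1.739
            - 0.021 * ttf_plan
            - 0.006 * ttf_do
            + 0.048 * ttf_study
            + 0.255 * pu
            - 0.128 * peou)

# Hypothetical mid-scale inputs for one application and one role.
u = predicted_usage(25, 25, 25, 5, 5)
```

With all three TTF values at 25 and PU and PEoU at 5, the predicted usage is 2.899, illustrating that under these coefficients PU raises the prediction while PEoU lowers it.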
6.2. An example model
To illustrate the approach, we present an example model from one of the power companies (power company 2) which participated in the survey. We show the model in its entirety in Fig. 7; for increased readability, we also zoom in on the model and show an excerpt with one application, one task and one service in Fig. 8.
Here we see that there are three Business Process and Application Service pairs named Plan, Do and Study, each with a Process-Service Association. The Business Process Do has the highest level of task fulfillment, the Plan Business Process receives a slightly lower score, while the Study Business Process displays the lowest score.
The Application Service.TotalFunctionality attributes of the three Application Services are ordered in the same manner, with Do having the highest functionality, followed by Plan and Study. As a consequence, the Process-Service Association.TTF attributes are ranked from highest to lowest according to Do, Plan and Study.
There are three Application Components which deliver the Application Services: first the CMMS, a dedicated maintenance management system; then Schedule, which supports tasks related to planning of power network operation interruptions in order for maintenance tasks to be undertaken; and finally the standard spreadsheet Microsoft Excel, which is used for various maintenance-related data storage and analysis.
CMMS realizes part of all three Application Services, which is shown through the three Application Functions named CMMS Plan, CMMS Do and CMMS Study, respectively. Likewise, Excel implements functionality which contributes to all three Application Services, but the Schedule Application Component offers functionality for the Plan Application Service only.
The Usage Relations show the degree to which the combined Process-Service Association.TTF contributes to the usage of the Application Components. As can be seen, all the Usage Relations related to the CMMS have the highest Usage Relation.Weight. This means that the CMMS contributes the most to the functionality within the maintenance management domain and should consequently be the most used Application Component. The Schedule Application Component contributes only to the Plan Application Service, where it has a substantial Usage Relation.Weight, but does not contribute at all to the Do or Study phases. Excel contributes functionality to all three Business Processes, and mostly so in the Do and Study phases.
Looking at the PU and PEoU constructs, we discover that there are two Roles associated with managing the three business processes: a Technical Control Officer (TCO) and a Maintenance Engineer (ME). The Role-Component Associations at the bottom of Fig. 7 show how the two Roles perceived the usefulness and ease of use of the three Application Components. The CMMS receives higher marks from the ME than from the TCO, which is natural since it is an application targeted mostly at maintenance engineers. The Schedule application is perceived as more useful by the ME than by the TCO Role. This is probably due to the fact that the MEs depend on having the power shut off to do their work, whereas TCOs find maintenance to be more of a nuisance, something which interferes with their daily work.

Excel is not used at all by the TCO role, and was thus not assessed by this role. The MEs, on the other hand, find it to be a very useful tool, even more so than the CMMS. One reason for this may be that the MEs use Excel for other work tasks than the purely maintenance-related ones and may not have separated the application's usefulness for maintenance from all other kinds of usefulness.

Regarding the PEoU, the TCOs perceive the CMMS to be less easy to use than do the MEs but, interestingly enough, they find the Schedule application to be a lot easier to use than the MEs do, even though the MEs find it more useful. Whether this is due to differences in the graphical user interfaces used by the two roles or has some other reason has not been made clear.

Having assessed the PU, the PEoU, the application functionality and the business process requirements, it is now possible to infer what the likely Application Component.Usage is. Excel is predicted to be the most widely used application, with CMMS a close second and Schedule the least used. The actual usage as reported by respondents was CMMS, followed closely by Excel, with Schedule a distant third.

If one were to zero the negative regression coefficients for TTF Plan and Do (which were not significant), the order becomes CMMS, Excel and Schedule. It is therefore possible that, given a larger sample which may provide both significant and positive regression coefficients, the ordering would become in line with the actual reported usage.
7. Discussion
This section will discuss the findings, starting with the contributions to research, followed by the contributions to practitioners, and then moving on to limitations and future works.
7.1. Contributions to research
The metamodel for maintenance management and its associated survey instruments provide researchers with a tool for investigating what makes maintenance engineers use maintenance software solutions. Especially the TTF operationalization concerning H1–H3 is a substantial contribution in this regard.
The measurement instruments for Plan/Do/Study TTF, PU and PEoU are internally consistent, as shown by their high Cronbach alpha values. By testing hypotheses H1–H5 at several companies in different industries, we have shown that the instruments have a high level of generalizability and are not confined to one single organization.
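For reference, the internal-consistency statistic used throughout, Cronbach's alpha, can be computed from the item-score columns as follows; the example answers are invented, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item columns (one list per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all k items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))

# Hypothetical answers: 3 items rated by 4 respondents.
items = [[4, 5, 6, 7], [4, 6, 6, 7], [5, 5, 6, 6]]
alpha = cronbach_alpha(items)
```

Values approaching 1 indicate that the items of a variable move together, as with the scores reported in Table 5.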
The strong support for H3 concerning Study TTF is interesting: functionality for following up maintenance work is clearly desired by maintenance engineers.
As for the PU and PEoU variables (H4 and H5), they also showed a significant contribution to explaining variance in Application Component.Usage, which is perhaps hardly surprising, given the overwhelming number of published TAM studies to date.
Furthermore, by successfully employing the general methodology for TTF operationalization and testing as outlined by Dishaw and Strong (1999), we have further corroborated their and others' findings (Dishaw and Strong, 1999; Klopping and McKinney, 2004; Pagani, 2006; Chang, 2008) by showing that the addition of the TTF model to the original TAM makes it possible to explain variations in application behavior to a higher degree.
Fig. 7. An example model showing three applications realizing three services supporting three business processes.

7.2. Contributions to practitioners

Using this architecture description language, architects can model their application landscape and collect the relevant user assessments of tasks, functionality, PU and PEoU. The predictions of application usage then provide IT decision makers with evidence on which applications users are most, and least, likely to use, at least when discounting voluntariness of use. When confronted with having to improve or phase out applications, this information will be most helpful.
Being able to explain why the usage is high or low through the PU, PEoU and the TTF variables also provides decision makers with more concrete data on where to focus improvement efforts. For instance, by knowing that a certain application is unlikely to be used due to a low TTF, decision makers can focus their attention on adding more functionality to the application or, possibly, on changing the work tasks of the users so as to obtain a higher usage.

Fig. 8. An excerpt of the model from Fig. 7 showing the Plan tasks and the CMMS application alone. * indicates that the number cannot be computed from the figures in this excerpt.

Another important contribution is the maintenance management specific metamodel. The survey items in Appendix A, together with the model, can be used on other companies' maintenance management processes and applications to predict usage.

Obviously, the domain-specific models are confined to being used within a specific domain. When wishing to employ the metametamodel in other domains, the method employed to derive the survey items and metamodel could be re-used.

Vendors of CMMS could note that the strong support for Hypothesis 3, stating that Study TTF explains a large portion of variance, means that the existence of report generation functionality is likely to improve usage of CMMS applications. This points in the direction of increasing the possibilities for integration of business intelligence tools with CMMS.

7.3. Limitations and future works

Both Plan and Do TTF were found to have negative or very weak positive Pearson correlations with respect to application usage. The findings are not significant, however. One explanation might be that the sample population is too small; another could be that the use of the applications is mandated in some of the companies regardless of their functionality. A third explanation could be that the literature-grounded baseline models do not adequately describe the maintenance process in a generic enough manner to be useful for task and functionality reference descriptions.

To take this matter further, studies with a larger sample of respondents are needed. These studies should also include
voluntariness as a variable. To investigate the suitability of the literature, further qualitative case studies trying to isolate the basic building blocks of the maintenance process are needed.
Although we were able to conduct one successful test of the metametamodel by instantiating it into a metamodel and an organization-specific model, it is difficult to make any definitive claims regarding generalizability. To investigate whether it would be applicable universally, new metamodels for other domains would have to be created and tested in a manner similar to this study.
The single instantiated organization-specific model does show that it is possible to instantiate the metamodel, and thereby implicitly the metametamodel. However, the present study says nothing about the quality of the instance models as such. Further studies should investigate different qualities of the instance models, such as model comprehensibility or ease of use.
The usefulness of the architecture description language for decision-making is yet to be tested. Although enterprise architecture models are widely regarded as good tools for improving application landscape decision-making, and this paper has described a theoretically well supported enterprise architecture description language, there is no guarantee that decision-makers faced with, for instance, an application consolidation project would find the metrics or the models useful. Further studies describing the use of the models for these purposes are therefore necessary.
Using the EA models based on the introduced metamodel is not very prescriptive in the sense that the decision maker is presented with clear improvement suggestions. Rather, the predictions of application usage should be seen as indications of which applications users are likely to embrace voluntarily. A way of increasing the prescriptiveness of the model would be to integrate variables into
the architecture metametamodel which could explain, for instance, why a certain application receives low scores on PU or PEoU. The work on antecedents of PU and PEoU published by Venkatesh and Bala (2008) could perhaps serve as a useful starting point, e.g. by including variables describing the user interface.
8. Conclusions
This paper has achieved three goals: (i) to merge TAM and TTF constructs with an enterprise architecture description language based on ArchiMate; (ii) to show how such an architecture description language can be adapted and tested to fit a specific domain; and finally (iii) to illustrate what an organization-specific model based on this domain-specific architecture description language would look like when instantiated.
The first goal was achieved by taking a combined TAM/TTF model featuring six variables (PU, PEoU, TTF, tool functionality, task fulfillment and usage) and casting it in the form of an architecture metametamodel based on ArchiMate.

The second goal was achieved by the development and validation of a domain-specific metamodel which conformed to the metametamodel and was adapted specifically for the maintenance management domain. The basis for the development of the metamodel was a literature study, a case study involving several interviews with domain experts, and a survey comprising 79 respondents at five companies. The survey instruments which could be used together with the EA metamodel consist of 11 task-related questions and 14 IT functionality questions.

The third goal was accomplished by showing an instantiated model of one of the companies that participated in the survey study.
Appendix A. Survey questions
The variables were formed using several items. Each variable is formed by taking the mean of its items. All items except those related to application component usage were evaluated on a scale from 1 to 7, where 1 is a very small extent and 7 is a very large extent. To evaluate usage, respondents were asked to report the number of hours they spent using the applications, and these numbers were then normalized to the same scale as the other items. The items below are translations from the original items, which were in Swedish.
Application Component Usage. For Application Component X, please rate the following:
• On average, how many hours per week do you spend with Application X (the last few months)?
• On average, how many hours per week do you expect to spend using Application X (the coming few months)?
• To which extent are you using Application X, on a scale from 1 to 7, where 1 is a very small extent and 7 is a very large extent?
Perceived Ease of Use. For Application X, please rate the following (these questions are directly taken from Venkatesh and Bala (2008)):
• My interaction with Application X is clear and understandable.
• Interacting with Application X does not require a lot of my mental effort.
• I find Application X to be easy to use.
• I find it easy to get Application X to do what I want it to do.
Perceived Usefulness. For Application X, please rate the following (these questions are directly taken from Venkatesh and Bala (2008)):
• Using Application X improves my performance in my job.
• Using the system in my job increases my productivity.
• Using the system enhances my effectiveness in my job.
• I find the system to be useful in my job.
Plan Functionality. For Application Component X, please rate the following:
• Define levels for when work orders should be initiated. Levels might refer to time and/or the physical condition of the equipment.
• Create a maintenance plan.
• Initiate work orders based on failure reports, time or condition intervals, or other events.
• Co-ordinate operational interruptions with maintenance activities.
• Create a maintenance work plan for each work order by allocating resources such as personnel or tools.
• Manage storage levels for spare parts, tools, materials, etc.
Do Functionality. For Application Component X, please rate the following:
• Monitor work order status (initiated, processed, executed, etc.).
• Time reporting per work order.
• Actual materials used per work order.
• Record equipment failure history.
• Record equipment maintenance history.
Study Functionality. For Application Component X, please rate the following:
• Compile and generate reports on the physical condition of the equipment.
• Compile and generate reports on failure statistics for both individual pieces of equipment and groups of equipment.
• Compile and generate reports on the historical maintenance for equipment.
Plan Tasks. To which extent do you perform the following activities:

• Receive and evaluate failure reports.
• Initiate work orders based on failure reports.
• Process work orders, including making estimates of cost, time, material, etc.
• Co-ordinate production interruptions with maintenance activities.
Do Tasks. To which extent do you perform the following activities:
• Send work orders and associated information to those in charge of performing the work.
• Follow up on-going work orders with respect to progress and/or time and money spent.
• Verify that information and invoices pertaining to closed work orders are correct.
• Approve closed work orders.
Study Tasks. To which extent do you perform the following activities:
• Compile maintenance data.
• Produce key performance indicators based on the compiled maintenance data.
• Review the maintenance process, for instance by evaluating key performance indicators to find areas of improvement.
References
Addicks, J., Appelrath, H., 2010. A method for application evaluations in context of enterprise architecture. In: Proceedings of the 2010 ACM Symposium on Applied Computing, pp. 131–136.
Benbasat, I., Barki, H., 2007. Quo vadis TAM? Journal of the Association for Information Systems 8 (4), 16.
Brown, R., Humphrey, B., 2005. Asset management for transmission and distribution. Power and Energy Magazine, IEEE 3 (3), 39–45.
Brynjolfsson, E., 1993. The productivity paradox of information technology. Communications of the ACM 36 (12), 77.
Buckl, S., Ernst, A., Matthes, F., Ramacher, R., Schweda, C., 2009a. Using enterprise architecture management patterns to complement TOGAF. In: 2009 IEEE International Enterprise Distributed Object Computing Conference, IEEE, pp. 34–41.
Buckl, S., Ernst, A., Matthes, F., Schweda, C., 2009b. An information model for landscape management—discussing temporality aspects. In: Proceedings of the Service-Oriented Computing—ICSOC 2008 Workshops, pp. 363–374.
Burton, B., Allega, P., July 2010. Hype Cycle for Enterprise Architecture, 2010. Matrix.
Chang, H., 2008. Intelligent agent's technology characteristics applied to online auctions' task: a combined model of TTF and TAM. Technovation 28 (9), 564–577.
Chau, P.Y.K., September 1996. An empirical assessment of a modified technology acceptance model. Journal of Management Information Systems 13, 185–204, http://portal.acm.org/citation.cfm?id=1189558.1189568.
Council, C., 2001. A Practical Guide to Federal Enterprise Architecture. Retrieved from www.cio.gov/documents/bpeaguide.pdf (on 29/1/2003).
Crespo Marquez, A., Gupta, J., 2006. Contemporary maintenance management: process, framework and supporting pillars. Omega 34 (3), 313–326.
Cronbach, L., 1951. Coefficient alpha and the internal structure of tests. Psychometrika 16 (3), 297–334.
Cronbach, L., Shavelson, R., 2004. My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement 64 (3), 391.
Davis, F.D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13 (3), 319–340, http://www.jstor.org/stable/249008.
DeLone, W., McLean, E., 1992. Information systems success: the quest for the dependent variable. Information Systems Research 3 (1), 60–95.
DeLone, W., McLean, E., 2003. The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems 19 (4), 9–30.
Deming, W., 2000. Out of the Crisis. The MIT Press.
Devaraj, S., Kohli, R., 2003. Performance impacts of information technology: is actual usage the missing link? Management Science 49 (3), 273–289, http://www.jstor.org/stable/4133926.
Dishaw, M., Strong, D., 1998. Supporting software maintenance with software engineering tools: a computed task-technology fit analysis. Journal of Systems and Software 44 (2), 107–120.
Dishaw, M., Strong, D., 1999. Extending the technology acceptance model with task-technology fit constructs. Information & Management 36 (1), 9–21.
Ekstedt, M., Franke, U., Johnson, P., Lagerström, R., Sommestad, T., Ullberg, J., Buschle, M., 2009. A tool for enterprise architecture analysis of maintainability. In: European Conference on Software Maintenance and Reengineering, IEEE, pp. 327–328.
European Committee For Standardization, 2001. BS EN 13306:2001—Maintenance Terminology.
European Committee For Standardization, 2002. BS EN 13460:2002—Maintenance Documents for Maintenance.
Ferratt, T., Vlahos, G., 1998. An investigation of task-technology fit for managers in Greece and the US. European Journal of Information Systems 7 (2), 123–136.
Friedman, N., Getoor, L., Koller, D., Pfeffer, A., 1999. Learning probabilistic relational models. In: International Joint Conference on Artificial Intelligence, vol. 16, Citeseer, pp. 1300–1309.
Gable, G., 1994. Integrating case study and survey research methods: an example in information systems. European Journal of Information Systems 3 (2), 112–126.
Gammelgård, M., September 2007. Business value assessment of IT investments—an evaluation method applied to the electrical power industry. Ph.D. thesis, Royal Institute of Technology (KTH), TRITA-EE 2007:050.
Gammelgård, M., Ekstedt, M., Närman, P., 2010. A method for assessing the business value of information system scenarios with an estimated credibility of the result. International Journal of Services Technology and Management 13 (1), 105–133.
Gebauer, J., Shaw, M., Gribbins, M., 2010. Task-technology fit for mobile information systems. Journal of Information Technology.
Gefen, D., Straub, D., 1997. Gender differences in the perception and use of e-mail: an extension to the technology acceptance model. MIS Quarterly 21 (4), 389–400.
Goodhue, D., 1998. Development and measurement validity of a task-technology fit instrument for user evaluations of information systems. Decision Sciences 29 (1), 105–138.
Goodhue, D., Thompson, R., 1995. Task-technology fit and individual performance. MIS Quarterly 19 (2), 213–236.
Gunaratne, D., Chenine, M., Ekstedt, M., Närman, P., 2008. A framework to evaluate a functional reference model at a Nordic distribution utility. In: Proceedings of the Nordic Distribution and Asset Management Conference (NORDAC 2008).
Gustafsson, P., Höök, D., Franke, U., Johnson, P., 2009. Modeling the IT impact on organizational structure. In: Enterprise Distributed Object Computing Conference, 2009, EDOC'09, IEEE International, IEEE, pp. 14–23.
Henderson, J., Cooprider, J., 1990. Dimensions of I/S planning and design aids: a functional model of CASE technology. Information Systems Research 1 (3), 227.
Hu, P.J., Chau, P.Y.K., Sheng, O.R.L., Tam, K.Y., September 1999. Examining the technology acceptance model using physician acceptance of telemedicine technology. Journal of Management Information Systems 16, 91–112, http://portal.acm.org/citation.cfm?id=1189438.1189445.
Iacob, M., Jonkers, H., 2007. Quantitative analysis of service-oriented architectures. International Journal of Enterprise Information Systems 3 (1), 42–60.
Iacob, M.-E., Jonkers, H., 2004. Analysis of enterprise architectures. Tech. Rep., Telematica Instituut (TI).
IEC Technical Committee 57, 2003. IEC 61968-1—Application Integration at Electric Utilities—System Interfaces for Distribution Management—Part 1: Interface Architecture and General Requirements.
ISO, 2003. ISO/IEC 9126-2:2003 Software Engineering—Product Quality—Part 2: External Metrics.
ISO, 2011. ISO/IEC 25030: Software Engineering—Software Product Quality Requirements and Evaluation (SQuaRE)—Quality Requirements.
Johansson, K., 1997. Driftsäkerhet och underhåll. Studentlitteratur.
Jouault, F., Bézivin, J., 2006. KM3: a DSL for metamodel specification. Formal Methods for Open Object-Based Distributed Systems, 171–185.
Kans, M., 2008. An approach for determining the requirements of computerised maintenance management systems. Computers in Industry 59 (1), 32–40.
Kans, M., Ingwald, A., 2009. Analysing IT functionality gaps for maintenance management. In: Proceedings of the 4th World Congress of Engineering Asset Management (WCEAM) 2009: Engineering Asset Lifecycle Management. The original publication is available at www.springerlink.com.
Karahanna, E., Straub, D.W., Chervany, N.L., 1999. Information technology adoption across time: a cross-sectional comparison of pre-adoption and post-adoption beliefs. MIS Quarterly 23 (2), 183–213, http://www.jstor.org/stable/249751.
Kelly, M., 2003. Report: the telemanagement forum's enhanced telecom operations map (eTOM). Journal of Network and Systems Management 11 (1), 109–119.
Klopping, I., McKinney, E., 2004. Extending the technology acceptance model and the task-technology fit model to consumer e-commerce. Information Technology, Learning, and Performance Journal 22 (1), 35.
Lagerström, R., Johnson, P., Höök, D., 2010. Architecture analysis of enterprise systems modifiability—models, analysis, and validation. Journal of Systems and Software 83 (8), 1387–1403.
Lauritzen, S., 1992. Propagation of probabilities, means, and variances in mixed graphical association models. Journal of the American Statistical Association 87 (420), 1098–1108.
Lee, C., Cheng, H., Cheng, H., 2007. An empirical study of mobile commerce in insurance industry: task-technology fit and individual differences. Decision Support Systems 43 (1), 95–110.
Lee, Y., Kozar, K., Larsen, K., 2003. The technology acceptance model: past, present, and future. Communications of the Association for Information Systems 12 (article 50), 752–780.
Legris, P., Ingham, J., Collerette, P., 2003. Why do people use information technology? A critical review of the technology acceptance model. Information & Management 40 (3), 191–204.
Majchrzak, A., Malhotra, A., John, R., 2005. Perceived individual collaboration know-how development through information technology-enabled contextualization: evidence from distributed teams. Information Systems Research 16 (1), 9–27.
Mathieson, K., 1991. Predicting user intentions: comparing the technology acceptance model with the theory of planned behavior. Information Systems Research 2 (3), 173–191.
McKeen, J., Smith, H., 2010. Developments in practice XXXIV: application portfolio management. Communications of the Association for Information Systems 26 (1), 9.
Närman, P., Buschle, M., König, J., Johnson, P., September 2010. Hybrid probabilistic relational models for system quality analysis. In: 14th International Enterprise Distributed Object Computing Conference 2010, EDOC'10, Vitoria, ES, Brazil, IEEE.
Närman, P., Gammelgård, M., Nordström, L., 2006. A functional reference model for asset management applications based on IEC 61968-1. In: Proceedings of the Nordic Distribution and Asset Management Conference (NORDAC), Stockholm, Citeseer.
Närman, P., Holm, H., Johnson, P., König, J., Chenine, M., Ekstedt, M., 2011. Data accuracy assessment using enterprise architecture. Enterprise Information Systems 5 (1), 37–58.
Närman, P., Nordström, L., Gammelgård, M., Ekstedt, M., 2007. Validation and refinement of an asset management subset of the IEC 61968 interface reference model. In: Power Systems Conference and Exposition, 2006, PSCE'06, IEEE PES, IEEE, pp. 915–922.
Optland, M., Middeljans, K., Buller, V., 2008. Enterprise Ontology based Application Portfolio Rationalization at Rijkswaterstaat, http://www.via-nova-architectura.org/files/magazine/Optland.pdf.
Pagani, M., 2006. Determinants of adoption of high speed data services in the business market: evidence for a combined technology acceptance model with task technology fit model. Information & Management 43 (7), 847–860, http://www.sciencedirect.com/science/article/B6VD0-4KWK12H-1/2/ee4029f6a59cbf0abb963a05b35f966f.
Patton, J., International Society for Measurement and Control, 1983. Preventive Maintenance. Instrument Society of America, New York.
Pavlou, P., 2003. Consumer acceptance of electronic commerce: integrating trust and risk with the technology acceptance model. International Journal of Electronic Commerce 7 (3), 101–134.
Pulkkinen, M., 2006. Systemic management of architectural decisions in enterprise architecture planning. Four dimensions and three abstraction levels.
Riempp, G., Gieffers-Ankel, S., 2007. Application portfolio management: a decision-oriented view of enterprise architecture. Information Systems and E-Business Management 5 (4), 359–378.
Roca, J.C., Chiu, C.-M., Martínez, F.J., 2006. Understanding e-learning continuance intention: an extension of the technology acceptance model. International Journal of Human-Computer Studies 64 (8), 683–696, http://www.sciencedirect.com/science/article/pii/S107158190600005X.
Ross, J., Weill, P., Robertson, D., 2006. Enterprise Architecture as Strategy: Creating a Foundation for Business Execution. Harvard Business Press.
SAP, A., 2006. Industry-Specific SAP Business Maps—Utility.
Simon, D., Fischbach, K., Schoder, D., 2010. Application portfolio management—an integrated framework and a software tool evaluation approach. Communications of the Association for Information Systems 26 (1), 3.
Smit, K., Slaterus, W., 1992. Information Model for Maintenance Management (IMMM). Cap Gemini Publishing.
Sommestad, T., Ekstedt, M., Johnson, P., 2010. A probabilistic relational model for security risk analysis. Computers & Security.
The Open Group, 2009a. ArchiMate 1.0 Specification. Van Haren Publishing.
The Open Group, 2009b. TOGAF Version 9 "Enterprise Edition".
Van der Raadt, B., Bonnet, M., Schouten, S., van Vliet, H., 2010. The relation between EA effectiveness and stakeholder satisfaction. Journal of Systems and Software.
Venkatesh, V., 2000. Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research 11 (4), 342–365.
Venkatesh, V., Bala, H., 2008. Technology acceptance model 3 and a research agenda on interventions. Decision Sciences 39 (2), 273–315, http://dx.doi.org/10.1111/j.1540-5915.2008.00192.x.
Venkatesh, V., Davis, F., 2000. A theoretical extension of the technology acceptance model: four longitudinal field studies. Management Science 46 (2), 186–204.
Venkatesh, V., Morris, M., Davis, G., Davis, F., 2003. User acceptance of information technology: toward a unified view. MIS Quarterly 27 (3), 425–478.
Venkatraman, N., 1989. The concept of fit in strategy research: toward verbal and statistical correspondence. Academy of Management Review 14 (3), 423–444.
Vessey, I., 1986. Expertise in debugging computer programs: an analysis of the content of verbal protocols. IEEE Transactions on Systems, Man and Cybernetics 16 (5), 621–637.
Walker, M., 2007. Integration of Enterprise Architecture and Application Portfolio Management. MSDN Library, http://msdn.microsoft.com/en-us/library/bb896054.aspx.
Warner, R., 2008. Applied Statistics: From Bivariate Through Multivariate Techniques. Sage Publications, Inc.
Weill, P., Vitale, M., 1999. Assessing the health of an information systems applications portfolio: an example from process manufacturing. MIS Quarterly 23 (4), 601–624.
Woodhouse, J., 2006. Putting the total jigsaw puzzle together: PAS 55 standard for the integrated, optimized management of assets. In: International Maintenance Conference.
Yuan, C., Druzdzel, M., 2007. Importance Sampling for General Hybrid Bayesian Networks.
Zigurs, I., Buckland, B., 1998. A theory of task/technology fit and group support systems effectiveness. MIS Quarterly 22 (3), 313–334.
Per Närman has been a PhD student at the department of Industrial Information and Control Systems at the Royal Institute of Technology (KTH) in Stockholm, Sweden, since 2006. He has published several journal and conference papers on the topic of Enterprise Architecture analysis. He has an MSc degree in Electrical Engineering from KTH. As of 2011, Mr. Närman works as a management consultant at Capgemini Consulting in Sweden.
Hannes Holm is a PhD student at the department of Industrial Information and Control Systems at the Royal Institute of Technology (KTH) in Stockholm, Sweden. He received his MSc degree in management engineering from Luleå University of Technology. His research interests include enterprise security architecture and cyber security for critical infrastructure control systems.
David Höök has an MSc degree in Electrical Engineering from the Royal Institute ofTechnology (KTH) and is a PhD student at the department of Industrial Informationand Control Systems, also at KTH. Mr. Höök currently also works as a project managerin industry.
Pontus Johnson is Professor and Head of the Department of Industrial Information and Control Systems at the Royal Institute of Technology (KTH) in Stockholm, Sweden. He received his MSc from the Lund Institute of Technology in 1997 and his PhD and Docent titles from the Royal Institute of Technology in 2002 and 2007, respectively. He was appointed professor in 2009. He has chaired and co-chaired a number of international conferences and workshops and served on the program committees of approximately fifty such events. He has been an associate and guest editor of several journals. He is secretary of the IFIP Working Group 5.8 on Enterprise Interoperability. Pontus teaches undergraduate courses and supervises PhD students (currently six) at the Royal Institute of Technology. He has authored close to 100 scientific articles, mainly on the prediction of non-functional properties in software-intensive system architectures.
Nicholas Honeth received the MSc degree in computer science from Chalmers University of Technology, Gothenburg, Sweden, and the BSc degree in electrical and computer engineering from the University of Cape Town, Republic of South Africa. He is currently a PhD student at the department of Industrial Information and Control Systems at KTH - The Royal Institute of Technology, Stockholm, Sweden. His research interests are in the area of control and monitoring systems for electrical utilities and multi-agent applications in power systems.