


Anti-Pattern Specification and Correction Recommendations for Semantic Cloud Services

Molka Rekik 1, Khouloud Boukadi 1, Walid Gaaloul 2, Hanene Ben-Abdallah 3

1 Mir@cl Laboratory, Sfax University, Tunisia
2 Telecom SudParis, Samovar, France
3 Mir@cl Laboratory, King Abdulaziz University, KSA
1 {molka.rekik, khouloud.boukadi}@gmail.com
2 [email protected], 3 [email protected]

Abstract

The lack of standardized descriptions of cloud services hinders their discovery. In an effort to standardize cloud service descriptions, several works propose to use ontologies. Nevertheless, the adoption of any of the proposed ontologies calls for an evaluation to show its efficiency in cloud service discovery. Indeed, existing cloud providers describe their similar service offerings in different ways. Thus, various existing works aim at standardizing the representation of cloud computing services by proposing ontologies. Since the existing proposals were not evaluated, they might be less adopted and considered. Indeed, the evaluation of an ontology has a direct impact on its understandability and reusability. In this paper, we propose an evaluation approach to validate our proposed Cloud Service Ontology (CSO) and to guarantee an adequate cloud service discovery. To this end, this paper has a three-fold contribution. First, we specify a set of patterns and anti-patterns in order to evaluate our CSO. Second, we define an anti-pattern detection algorithm based on SPARQL queries which provides a set of correction recommendations to help ontologists revise their ontology. Finally, tests were conducted in relation to: (i) the algorithm efficiency and (ii) the anti-pattern detection of design anomalies as well as taxonomic and domain errors within CSO.

1. Introduction

Over the last years, cloud computing has become an attractive strategy for users thanks to its on-demand computing services, provisioned and released with minimal management effort. Cloud computing offers its benefits through three types of services, namely, Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). However, the wide adoption of this computing paradigm is hindered by the lack of a standard language for describing providers' offers, making it difficult for users to discover appropriate services. Indeed, cloud providers describe similar services in different manners (different pricing policies, different qualities of service, etc.). This motivated the proposition to standardize the description of cloud services by introducing ontologies [3][11][7]. However, the absence of an evaluation of the so-far proposed ontologies did not encourage their adoption. Indeed, while developing an ontology, ontologists may commit errors, such as redundancy, missing information, inconsistency, etc. (for example, omitting that a CPU is part of a virtual machine). Moreover, while populating the ontology by automatically extracting information from the providers' catalogs and the cloud registries, some missing descriptions, hierarchical errors and redundant instances can be inserted (for example, failing to indicate the operating system of a virtual machine). Consequently, it is necessary to evaluate the ontology to ensure consistency maintenance and redundancy elimination. Usually, ontology evaluation is an integral part of the development lifecycle. A set of tools known as reasoners have been proposed in the literature to get rid of errors in ontologies. According to [4], "a reasoner is a program that provides automated support for reasoning tasks, such as


Proceedings of the 50th Hawaii International Conference on System Sciences | 2017

URI: http://hdl.handle.net/10125/41672
ISBN: 978-0-9981331-0-2
CC-BY-NC-ND


classification, debugging and querying". Researchers in [2][5] identified new error types not covered by the traditional evaluation techniques. In fact, reasoners such as FaCT++ [12], HermiT [10] and Pellet [?] cannot deal with inconsistency, incompleteness and redundancy errors. Evaluating such errors is a very difficult task [6].

In this paper, we define a novel ontology evaluation approach which considers the cloud domain to enhance the quality of our previously proposed CSO [7]. We consider our approach a building block for adequate service discovery. To do so, we build on the existing reasoners and overcome their limitations by introducing a set of patterns and anti-patterns. In fact, patterns represent a good solution to recurrent design problems as they help to create and maintain correct ontologies [1]. Moreover, patterns help to create rigorous ontologies with the least effort. In contrast to patterns, anti-patterns capture errors in order to correct them and consequently achieve the correctness of our CSO.

The rest of the paper is organized as follows. Section 2 overviews ontology evaluation techniques. Section 3 briefly describes our proposed CSO. Section 4 defines the ontology evaluation approach by demonstrating the limitations of the existing reasoners and presenting our anti-pattern detection algorithm. Section 5 describes the evaluation of both the anti-pattern detection algorithm and CSO and discusses the experimental results. Finally, we summarize the presented work and outline some future endeavors.

2. Related Work

Several techniques for ontology evaluation exist in the literature, among which we can cite the use of Description Logics (DL) reasoners and anti-pattern techniques.

The authors in [2] proposed an evaluation framework for Ontolingua ontologies. They suppose that the presence of partition problems leads to an inconsistent ontology. Thus, they identified a set of taxonomic errors that can occur within an ontology. This set was later revised and new errors were added, such as the disjointness error, the omission of a functional property for a single-valued property and the redundant disjoint relation error.

The authors in [6] criticized the performance of DL reasoners by evaluating their consistency checking in a case study of an automobile ontology. They proved that reasoners such as Racer, Pellet and FaCT++ could not detect various types of errors, such as circularity errors, semantic inconsistency errors and some types of redundancy errors like the redundant disjoint relation and identical formal definitions.

To overcome this limit of reasoner-based evaluation, anti-patterns were introduced [1]. In [8], the authors identified an anti-pattern catalogue defined with DL expressions. In [9], the authors proposed an anti-pattern detection approach based on SPARQL queries for the OIL anti-pattern. They showed how this anti-pattern could not be detected by the reasoners. In [9], the authors also defined a detection method based on ontology transformations to simulate the reasoner's work.

Figure 1. The Top Level Concepts

In summary, reasoners cannot detect taxonomic errors or domain errors. In addition, starting from scratch is not a good solution. To optimize the effort, it is better to reuse and benefit from the existing reasoners and to cover their limitations with a set of anti-patterns. In order to evaluate our CSO [7], we combine, in this work, the two evaluation techniques, reasoners and anti-patterns, to identify and correct redundancy, incompleteness and missing domain knowledge in the ontology.

3. Cloud Service Ontology: CSO

Our CSO building process was based on cloud standards, such as NIST 1, OCCI 2 and CIMI 3. For more details about CSO, we refer the reader to [7]. In this section, we present the most important concepts and axioms of CSO. Among the main concepts in CSO, we cite: Actor, Service, Essential_Characteristic and CloudFederation, as shown in Figure 1. The Actor class represents the different actors which participate in the cloud. In Figure 2, we distinguish between the user, the provider and the broker. The Provider class includes three subclasses: Provider_IaaS, Provider_PaaS and Provider_SaaS. Each cloud provider offers three different service types: (i) IaaS, including the Storage and Virtual_Machine resources, (ii) PaaS, covering the Platform resource, and (iii) SaaS, encompassing Software resources such as CRM, mailing lists, video gaming, etc.

1 http://www.nist.gov/itl/cloud
2 http://occi-wg.org
3 https://www.dmtf.org/standards/cloud


Figure 2. An extract of CSO

Table 1. CSO important axioms

Broker ⊑ Actor; User ⊑ Actor; Provider ⊑ Actor;
Broker ⊓ Provider ⊓ User ⊑ ⊥;
User_IaaS ⊓ User_PaaS ⊓ User_SaaS ⊑ ⊥;
Provider ≡ Provider_IaaS ⊔ Provider_PaaS ⊔ Provider_SaaS;
IaaS ⊑ Service; PaaS ⊑ Service; SaaS ⊑ Service;
IaaS ⊓ PaaS ⊓ SaaS ⊑ ⊥;
Storage ⊑ IaaS; Virtual_Machine ⊑ IaaS; Platform ⊑ PaaS; Software ⊑ SaaS;
OnDemand_Service ⊑ Essential_Characteristic; Reserved_Service ⊑ Essential_Characteristic;
OnDemand_Service ⊓ Reserved_Service ⊑ ⊥;
ComputeCapability_ContainsCpu ≡ cpuIsPartOf⁻;
ComputeCapability_ContainsMemory ≡ memoryIsPartOf⁻;

Table 1 lists the set of axioms which cover the specialization, disjointness and union relations between the important concepts within CSO. Each Virtual Machine (VM) has specific characteristics depending on its type. The object property hasVMType links the two concepts Virtual_Machine and VMType, as shown in Table 2.

The Provider_IaaS offers a wide range of virtual machine types, such as small, medium and large. Each VMType has an essential characteristic which is either on-demand self-service (OnDemand_Service), a prime feature of most cloud offerings where the user pays per use, or reserved service (Reserved_Service), where the user gets a discount after a service is used for a specific amount of time. Moreover, each VMType should have a compute capability which includes two characteristics: CPU and memory. We assume that these two characteristics cannot exist without a compute capability. The object properties cpuIsPartOf and memoryIsPartOf are the inverses of the object properties ComputeCapability_ContainsCpu and ComputeCapability_ContainsMemory respectively, as shown in the last two lines of Table 1.

Figure 3 shows the CloudFederation concept. In a cloud federation, the services are provisioned by a group of cloud providers that collaborate to share resources. The property interconnected represents the different members of a cloud federation. Each federation has an architecture (hasFederationArchitecture), a type (hasFederationType) and a network (hasNetwork), as shown in Table 2. The federation architecture can be installed with a broker (FederationArchitecture_Centralized), where a central broker performs and facilitates the resource allocation, or without a broker (FederationArchitecture_Decentralized). The federation type can be vertical, which spans multiple levels; horizontal, which takes place on one level of the cloud stack; hybrid, which covers both vertical and horizontal; or delegation. It is worth noting that Tables 1 and 2 present the axioms which are useful in the evaluation process, particularly in the anti-pattern detection phase.


Figure 3. Cloud Federation Description

Table 2. Object properties in CSO

Object property                    Domain              Range
hasVMType                          Virtual_Machine     VMType
hasEssentialCharacteristic         VMType              Essential_Characteristic
ComputeCapability_ContainsCpu      Compute_Capability  CPU
cpuIsPartOf                        CPU                 Compute_Capability
ComputeCapability_ContainsMemory   Compute_Capability  Memory
memoryIsPartOf                     Memory              Compute_Capability
hasFederationArchitecture          CloudFederation     FederationArchitecture
hasFederationType                  CloudFederation     FederationType
hasNetwork                         CloudFederation     Network
interconnected                     CloudFederation     Provider

4. Ontology Evaluation Approach

Evaluation is an important phase in the ontology lifecycle. According to Gomez-Perez, ontology evaluation "is a technical judgment of the content of the ontology" [2]. The main goal is to detect the errors within an ontology and then correct them in order to increase the ontology quality and, consequently, to guarantee its utilization. In this section, we present an overview of our CSO evaluation approach (see Figure 4). Our approach is composed of two phases:

• Evaluation process: In order to validate CSO's consistency, we choose the DL reasoner FaCT++, developed at the University of Manchester and designed as a platform for experimenting with tableaux algorithms and optimization techniques. It is open source, free and available with Protege. Besides, in order to validate CSO's conformity with cloud standards, we define a set of patterns and anti-patterns. The detection of ontology anti-patterns contributes to ontology quality assessment [9].

Figure 4. Ontology evaluation approach

As shown in Figure 4, this phase includes the following three steps:

– Consistency Verification: we use the FaCT++ reasoner in order to validate the consistency of CSO. According to [2], "a given definition is consistent if and only if the individual definition is consistent and no contradictory knowledge can be inferred from other definitions and axioms". If the ontology is inconsistent, then the correction phase should be applied and the reasoner should be run again. Otherwise, the next step, namely anti-pattern detection, is triggered.

– Anti-pattern detection: this step consists in detecting the errors left undiscovered by FaCT++ by using a set of proposed anti-patterns (see Sect. 4.2). If no error is detected, the ontology is considered evaluated. Otherwise, the recommendations for the detected errors and anomalies are triggered.

– Recommendations: in this step, we use the anti-pattern catalogue, which associates correction recommendations with each anti-pattern. The proposed recommendations can be taken into consideration by the ontologists, in which case the correction phase is triggered and the evaluation process restarts. If the recommendations are ignored by the ontologists, who consider that the errors are not relevant, the evaluation process terminates.

• Ontology correction: this manual phase aims at changing the ontology in order to correct the errors detected by the reasoner or reported through the recommendations proposed by the anti-patterns.

4.1. Pattern Definition

As defined in [1], an Ontology Design Pattern (ODP) is a modeling solution to a recurrent ontology design problem. In our work, the pattern definition is mainly based on taxonomy.

In general, a taxonomy hierarchically organizes classes and instances within an ontology [2]. In the cloud domain, a taxonomy can be identified based on standards, such as NIST 1, OCCI 2 and CIMI 3, which hierarchically present cloud service characteristics. We define the following generic decomposition pattern, which is a common pattern, in order to guarantee the cloud computing taxonomy. It is worth mentioning that this pattern is generic and can be applied to any ontology in any domain.

The decomposition pattern: to respect the cloud standards, the hierarchy definition should follow one of the following decomposition manners.

Figure 5. Disjoint Decomposition Pattern

Figure 6. Exhaustive Decomposition Pattern

Figure 7. Partition Pattern: Disjoint Exhaustive Decomposition

• Disjoint decomposition: this pattern defines a set of disjoint subclasses of a class C. This classification implies that each instance can belong directly to class C or to one of the subclasses of C, as shown in Figure 5. This pattern is expressed in DL as follows:

C1 ⊑ C; C2 ⊑ C; C3 ⊑ C; C1 ⊓ C2 ⊓ C3 ⊑ ⊥;   (1)

• Exhaustive decomposition: this pattern defines a complete classification of the subclasses of a class C. These subclasses are not necessarily disjoint. Indeed, each instance of class C should be an instance of one or more of the subclasses, as shown in Figure 6. This pattern is expressed in DL as follows:

C1 ⊑ C; C2 ⊑ C; C3 ⊑ C; C1 ⊔ C2 ⊔ C3 ≡ C;   (2)

• Partition: this pattern defines a set of disjoint subclasses of a class C. This classification is also complete, and class C is the union of all the subclasses. Each instance of class C should be an instance of exactly one subclass, hence no shared instances are allowed. Figure 7 shows the partition pattern. This pattern is expressed in DL as follows:

C1 ⊑ C; C2 ⊑ C; C3 ⊑ C;
C1 ⊓ C2 ⊓ C3 ⊑ ⊥; C1 ⊔ C2 ⊔ C3 ≡ C;   (3)
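The partition pattern above can be checked mechanically. The following is a simplified sketch only (not the paper's Jena-based implementation; the instance names s1..s3 are invented): it requires every instance of C to fall into exactly one subclass.

```python
# Illustrative check of the partition pattern (3) over a toy fragment of
# the Service hierarchy; instance names are invented for the example.

def check_partition(instances, subclass_members):
    """Partition: every instance of C must belong to exactly one subclass."""
    errors = []
    for ind in sorted(instances):
        hits = [s for s, members in subclass_members.items() if ind in members]
        if len(hits) == 0:
            errors.append((ind, "external instance (violates exhaustiveness)"))
        elif len(hits) > 1:
            errors.append((ind, "shared instance (violates disjointness)"))
    return errors

# "s3" is an instance of Service that belongs to no subclass, i.e. the
# external-instance error discussed in Sect. 4.2.1:
service_instances = {"s1", "s2", "s3"}
members = {"IaaS": {"s1"}, "PaaS": {"s2"}, "SaaS": set()}
print(check_partition(service_instances, members))
# [('s3', 'external instance (violates exhaustiveness)')]
```

Dropping the exhaustiveness branch yields the disjoint-decomposition check (1), and dropping the disjointness branch yields the exhaustive-decomposition check (2).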

4.2. Anti-Pattern Definition

An anti-pattern is similar to a pattern, "except that instead of a solution, it gives something that looks superficially like a solution, but it is not" [13].

In our work, we propose three anti-pattern categories: taxonomic errors, design anomalies and domain errors. Each category includes a set of anti-patterns. It is worth noting that the taxonomic errors and design anomalies are generic and can be used with other ontologies in different domains. However, the domain errors are specific to our CSO. The reason behind proposing taxonomic and domain errors is to cover, respectively, the errors related to the ontology concepts and to the ontology instances. Besides the errors, we propose the design anomaly category. In fact, anomaly removal is necessary to improve ontology usability [6]. We present in what follows the anti-pattern definitions using DL expressions.

4.2.1 Generic Taxonomic Errors

In this category, we aim at detecting possible violations of the set of patterns defined in Sect. 4.1. Referring to the work of Gomez-Perez [2], we identify different error types that occur when modeling taxonomic knowledge within an ontology. In this context, these errors include:

• Inconsistency: inconsistency errors occur when there are external instances in a complete classification or when a disjoint relation is defined between classes of different hierarchies.

– External instances in exhaustive decomposition and partition: these errors occur when one or more instances of the class do not belong to any subclass. This anti-pattern is expressed in DL as follows:

C(Individual_1); C1 ⊑ C; C2 ⊑ C;
C3 ⊑ C; C1 ⊔ C2 ⊔ C3 ≡ C;   (4)

– Partition Error: this error appears when a class is declared disjoint with a sibling of its superclass instead of being disjoint with its own sibling classes. This anti-pattern is expressed in DL as follows:

C1 ⊑ C; C2 ⊑ C; C3 ⊑ C1;
C2 ⊓ C3 ⊑ ⊥;   (5)

• Incompleteness: this kind of error occurs when the ontologists omit the definition of a disjoint relation or a union relation. In fact, information about the superclass or the subclasses is missing.

– Decomposition knowledge omission: this error covers two types of omission when identifying a decomposition of a concept: (i) the omission of which subclasses are disjoint, or (ii) the omission of the fact that the concept is the union of all its subclasses. This anti-pattern is expressed in DL as follows:

C1 ⊑ C; C2 ⊑ C;   (6)

• Redundancy: a redundancy error appears when two object properties have the same formal definition or when a disjoint relation is duplicated. The redundancy errors cover:

– Identical formal definition of object properties: this error occurs when different object properties have the same formal definition, i.e. they have the same (domain, range) pair although they have different names. This anti-pattern is expressed in DL as follows:

∃R.⊤ ⊑ C1; ⊤ ⊑ ∀R.C2;
∃R1.⊤ ⊑ C1; ⊤ ⊑ ∀R1.C2;   (7)

– Redundancy of Disjoint Relation: this is the fact of defining a concept as disjoint with other concepts more than once, i.e. classes that have more than one disjoint relation. The following DL expression shows that the disjoint relation between the subclasses C3 and C4 is redundant, since they already inherit it from their base classes C1 and C2.

C3 ⊑ C1; C4 ⊑ C2; C1 ⊓ C2 ⊑ ⊥; C3 ⊓ C4 ⊑ ⊥;   (8)
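Anti-pattern (7) can be detected by grouping object properties on their (domain, range) signature. The sketch below is a simplified Python illustration (not the Jena-based implementation), reusing property names from Table 2 together with the faulty duplicate hasArchitectureFederation introduced in Sect. 4.3:

```python
# Illustrative detection of anti-pattern (7): object properties sharing
# the same (domain, range) pair under different names.
from collections import defaultdict

def identical_definitions(properties):
    """Group object properties by (domain, range); flag groups of size > 1."""
    by_signature = defaultdict(list)
    for name, signature in properties.items():
        by_signature[signature].append(name)
    return [names for names in by_signature.values() if len(names) > 1]

props = {
    "hasFederationArchitecture": ("CloudFederation", "FederationArchitecture"),
    "hasArchitectureFederation": ("CloudFederation", "FederationArchitecture"),
    "hasNetwork": ("CloudFederation", "Network"),
}
print(identical_definitions(props))
# [['hasFederationArchitecture', 'hasArchitectureFederation']]
```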
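Similarly, anti-pattern (8) amounts to asking whether a declared disjointness is already entailed by a disjointness between ancestor classes. A minimal sketch (illustrative only; single inheritance is assumed for simplicity):

```python
# Illustrative detection of anti-pattern (8): a disjointness axiom is
# redundant when ancestor classes are already declared disjoint.

def redundant_disjoints(superclass_of, disjoint_pairs):
    """Return the disjoint pairs already entailed via ancestors."""
    def ancestors(c):
        seen = set()
        while c in superclass_of:
            c = superclass_of[c]
            seen.add(c)
        return seen
    declared = {frozenset(p) for p in disjoint_pairs}
    redundant = []
    for a, b in disjoint_pairs:
        if any(frozenset((pa, pb)) in declared
               for pa in ancestors(a) for pb in ancestors(b)):
            redundant.append((a, b))
    return redundant

# Platform/Software repeat the disjointness they inherit from PaaS/SaaS
# (the error deliberately introduced in Sect. 4.3):
supers = {"Platform": "PaaS", "Software": "SaaS"}
pairs = [("PaaS", "SaaS"), ("Platform", "Software")]
print(redundant_disjoints(supers, pairs))  # [('Platform', 'Software')]
```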

4.2.2 Generic Design Anomalies

Likewise, we identify design anomalies within the ontology. The correction of these anomalies guarantees the usability of an ontology [6]. These anomalies include:


• Lazy Concept: an instantiated concept for which none of its datatype properties are defined. This anti-pattern is expressed in DL as follows:

¬∃T.d ≡ ∀T.¬d;   (9)

• Weak Lazy Concept: an instantiated concept for which some of its datatype properties are not defined. This anti-pattern is expressed in DL as follows:

¬∀T.d ≡ ∃T.¬d;   (10)

• Weak Concept Definition: this refers to the situation where an instantiated concept has all or some of its object properties undefined. This anti-pattern is expressed in DL as follows:

¬∀R.C; ¬∃R.C;   (11)
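The two "lazy" anomalies (9) and (10) can be approximated by checking, per instantiated concept, which of its declared datatype properties actually carry values. A simplified sketch with invented toy data (concept and property names are illustrative, not taken from CSO's actual population):

```python
# Illustrative detection of lazy (9) and weak lazy (10) concepts.

def lazy_concepts(instances_of, datatype_values, datatype_props):
    """Return ([lazy concepts], [weak lazy concepts])."""
    lazy, weak = [], []
    for concept, individuals in instances_of.items():
        props = datatype_props.get(concept, set())
        if not (individuals and props):
            continue
        filled = {p for ind in individuals
                  for p in datatype_values.get(ind, {})} & props
        if not filled:
            lazy.append(concept)        # no datatype property defined at all
        elif filled != props:
            weak.append(concept)        # only some of them defined
    return lazy, weak

inst = {"CPU": ["cpu1"], "VM": ["vm1"]}
vals = {"vm1": {"MemorySize": 4}}
props = {"CPU": {"Cores"}, "VM": {"MemorySize", "DiskSize"}}
print(lazy_concepts(inst, vals, props))  # (['CPU'], ['VM'])
```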

4.2.3 Domain Errors

The previously cited anti-patterns are generic and can be useful in different domains. However, our objective is to evaluate a cloud service description ontology. For this reason, the following set of anti-patterns focuses on cloud service description errors. It is worth noting that, due to space limitations, we present only the following anti-patterns; we refer the reader to our website https://sites.google.com/site/csoevaluation/home for the complete list illustrated with examples.

• Invalid VMType Characteristic: each VM type must have at least an essential characteristic and a compute capability. This anti-pattern occurs, as shown in the DL expression, when these two characteristics are not defined.

¬∃hasEssentialCharacteristic.Essential_Characteristic;
¬∃Compute_Capability_Attachment.Compute_Capability;   (12)

• Invalid Provider Description: each provider follows a deployment model and has a hosting type, a pricing model and a set of service capabilities. The omission of one or more of these pieces of information leads to an invalid provider description.

¬∃hasDeploymentModel.Deployment_Model;
¬∃hasHostingType.HostingType;
¬∃hasPricingModel.PricingModel;
¬∃Provider_ServiceCapabilities.Provider_Capability;   (13)

• Invalid Compute Capability Definition: each virtual machine type must have a compute capability. This capability includes two characteristics: CPU and memory. We assume that these two characteristics cannot exist without a compute capability. The following DL expression shows the verification in both directions.

¬∃cpuIsPartOf.Compute_Capability;
¬∃ComputeCapability_ContainsCpu.CPU;
¬∃memoryIsPartOf.Compute_Capability;
¬∃ComputeCapability_ContainsMemory.Memory;   (14)

• Invalid Service Capability Description: each service must have a set of capabilities. For example, the storage and network capacities must be provided by the IaaS service. The following DL expression shows the capabilities that should be offered by each service type.

¬∃Platform_Capabilities.Functional_Capability;
¬∃Software_Capabilities.Functional_Capability;
¬∃Storage_Capabilities.Functional_Capability;
¬∃Network_Capabilities.Functional_Capability;   (15)

• Faulty Value: the previous domain errors consider object properties. In what follows, we focus on datatype properties. The values used in the following anti-patterns are extracted from the catalogues of three major cloud providers, namely AWS Cloud 4, Google Cloud 5 and Microsoft Azure 6.

– Faulty Value CPU RAM: each cloud provider offers a specific range of core numbers and memory sizes. This anti-pattern occurs, as shown in the DL expression, when one or more datatype property values violate the restrictions of the given provider.

Provider_IaaS(Microsoft_Azure);
∃Cores. < 1; ∃Cores. > 32;
∃MemorySize. < 0.75; ∃MemorySize. > 448;
Provider_IaaS(Google_Cloud);
∃Cores. < 1; ∃Cores. > 32;
∃MemorySize. < 0.6; ∃MemorySize. > 208;

4 https://aws.amazon.com/
5 https://cloud.google.com/
6 https://azure.microsoft.com/


Provider_IaaS(AWS);
∃Cores. < 1; ∃Cores. > 40;
∃MemorySize. < 0.5; ∃MemorySize. > 244;   (16)
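The per-provider restrictions in expression (16) translate directly into range checks. The bounds below are the ones quoted in the expression, while the checking code itself is an illustration (not the paper's implementation, and the offer fields are assumed names):

```python
# Illustrative sketch of the Faulty Value CPU RAM check (16).
BOUNDS = {
    "Microsoft_Azure": {"Cores": (1, 32), "MemorySize": (0.75, 448)},
    "Google_Cloud":    {"Cores": (1, 32), "MemorySize": (0.6, 208)},
    "AWS":             {"Cores": (1, 40), "MemorySize": (0.5, 244)},
}

def faulty_values(provider, offer):
    """Return the datatype properties of a VM offer that fall outside
    the provider-specific restrictions."""
    errors = []
    for prop, (lo, hi) in BOUNDS[provider].items():
        value = offer.get(prop)
        if value is not None and not (lo <= value <= hi):
            errors.append(prop)
    return errors

print(faulty_values("Google_Cloud", {"Cores": 64, "MemorySize": 16}))
# ['Cores']
```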

4.3. Consistency Verification

In order to investigate how FaCT++ handles various errors, we load CSO in Protege and introduce some illustrative errors, such as:

• we define an instance "IaaS_1" and link it directly to the class IaaS instead of to one of IaaS's subclasses; this introduces an external instance in exhaustive decomposition and partition error,

• we add a disjoint relation between the class Broker and the subclass User_IaaS in order to create a partition error,

• we omit the union relation of the class Provider_Capability; due to the absence of a disjoint relation between the subclasses Functional_Capability and Non_Functional_Capability, a decomposition knowledge omission error occurs,

• in order to introduce an identical formal definition of object properties error, we define a new object property named hasArchitectureFederation that has the same (domain, range) pair as hasFederationArchitecture, namely (CloudFederation, FederationArchitecture), and

• we add a disjoint relation between the two subclasses Platform and Software, although one already exists between PaaS and SaaS; this new disjoint relation introduces a redundancy of disjoint relation.

Thereafter, we execute the FaCT++ reasoner, which considers our definition of the instance "IaaS_1" as a normal one and gives no error or warning. Furthermore, it does not detect any of the previously introduced errors.

In summary, the FaCT++ reasoner is not able to detect the problems related to the taxonomic errors defined in Sect. 4.2. To the best of our knowledge, there is no existing reasoner able to detect the design anomalies and the domain errors.

4.4. Anti-Pattern Detection and Recommendations

According to the results obtained with the FaCT++ reasoner, it is necessary to apply the anti-patterns defined in Sect. 4.2. We propose an anti-pattern detection algorithm based on SPARQL queries. Due to space limitations, we present only the SPARQL query of the Invalid VMType Characteristic anti-pattern and its correction recommendation. A complete description of the queries and the recommendations is available on our website https://sites.google.com/site/csoevaluation/home.

Invalid VMType Characteristic: the query in Listing 1 returns each type of virtual machine (?VMt) that is instantiated but whose definition is missing an essential characteristic (?E_C) and a compute capability (?compC).

Listing 1. SPARQL Query: Invalid VMType Characteristic

SELECT ?VMt ?E_C ?compC
WHERE
{ ?VMt rdf:type ns:VMType
  OPTIONAL
  { ?VMt ns:hasEssentialCharacteristic ?E_C }
  OPTIONAL
  { ?VMt ns:Compute_Capability_Attachment ?compC }
  FILTER ((!bound(?E_C)) || (!bound(?compC)))
}

Once this anti-pattern is detected, the following correction recommendation is displayed in order to guide the ontologists in correcting the error: "Please complete the description of the VM type class by defining these two object properties: hasEssentialCharacteristic and Compute_Capability_Attachment!"

4.4.1 The anti-pattern detection algorithm

The proposed algorithm (Listing 2), which performs the anti-pattern detection, takes as input CSO and the list of SPARQL queries. As output, it reports the detected errors and guides the ontologists in correcting them by providing a set of recommendations. During execution, the following cycle happens: (i) select a query from the list; (ii) execute the query; and (iii) once an anti-pattern is detected, report the error with a list of correction recommendations.

Listing 2. The anti-pattern detection algorithm

Input: CSO.owl, SPARQL_query_antipattern_list
Output: detected errors and recommendations for each anti-pattern
Begin
  for (each query in SPARQL_query_antipattern_list) do

4238

Page 9: Anti-Pattern Specification and Correction Recommendations for

Figure 8. Detected Taxonomic Errors

7 {8 result=Execute(query)9 if( Exist(result) ) then

10 Display("Recommendation Text!")11 }12 End
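The detection cycle of Listing 2 can be sketched in Python. The query list and the predicate-based execute() below are stand-ins of our own design: the real implementation executes each SPARQL query against CSO with Apache Jena.

```python
def execute(query, ontology):
    """Stand-in for Execute(query): return the individuals that match
    the anti-pattern predicate."""
    return [ind for ind in ontology if query["predicate"](ind, ontology)]

def detect_anti_patterns(ontology, queries):
    report = []
    for query in queries:                  # (i) select a query from the list
        result = execute(query, ontology)  # (ii) execute the query
        if result:                         # (iii) anti-pattern detected
            report.append((query["name"], result, query["recommendation"]))
    return report

# Toy ontology: VMType individuals mapped to their defined object properties.
ontology = {
    "vm_a": {"hasEssentialCharacteristic", "Compute_Capability_Attachment"},
    "vm_b": set(),  # incomplete description
}
queries = [{
    "name": "Invalid VMType Characteristic",
    "predicate": lambda ind, onto: not {"hasEssentialCharacteristic",
                                        "Compute_Capability_Attachment"} <= onto[ind],
    "recommendation": "Please complete the description of the VMType class!",
}]

for name, errors, recommendation in detect_anti_patterns(ontology, queries):
    print(f"{name}: {errors} -> {recommendation}")
```

Here vm_a carries both required object properties and passes silently, while vm_b triggers the anti-pattern and its recommendation, matching steps (i) to (iii) of the cycle.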

To verify the efficiency of the anti-pattern detection algorithm, we implemented it and tested its performance on test data (i.e., CSO modified by introducing taxonomic errors). The algorithm is implemented using the Apache Jena framework 7.

5. Experimentation: Algorithm Evaluation and Detecting Anti-Patterns in CSO

A modified CSO integrating several taxonomic errors, which are presented in Sect. 4.3, is taken as test data. First, we run the algorithm on this ontology to verify its efficiency (Sect. 5.1). Then, we test the algorithm on the real CSO (Sect. 5.2).

5.1. Algorithm evaluation

We qualify the algorithm as efficient when it successfully detects the introduced errors. Figure 8 shows that the proposed algorithm detects the taxonomic errors as well as the design anomalies and the domain errors within the modified CSO.

Table 3 presents the three anti-pattern categories used for our evaluation. For each category, the table indicates the number of errors detected by the anti-pattern detection algorithm. The precision measures the ratio of correctly found anti-patterns over the total number of detected anti-patterns, while the recall, given in the last column, is the ratio of correctly found anti-patterns over the total number of proposed anti-patterns. The results in Table 3 are encouraging. In fact, the anti-pattern detection algorithm covers the limitations of the FaCT++ reasoner: the set of SPARQL queries is sufficient for detecting the three anti-pattern categories, namely taxonomic errors, design anomalies and domain errors.

7 https://jena.apache.org/
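The two measures can be checked directly from the per-category counts. The helper below is our own; the values are taken from the taxonomic-errors row of Table 3 (5 correctly found, 5 detected, 5 proposed).

```python
def precision_recall(correct, detected, proposed):
    """precision = correct / detected anti-patterns,
    recall    = correct / proposed anti-patterns."""
    return correct / detected, correct / proposed

# Taxonomic-errors row of Table 3: 5 correctly found, 5 detected, 5 proposed.
print(precision_recall(correct=5, detected=5, proposed=5))  # (1.0, 1.0)
```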

5.2. CSO’s evaluation

After the efficiency verification, the algorithm was applied on CSO to detect the existing errors. We notice that no taxonomic errors are detected in CSO, which is mainly based on the patterns (see Sect. 4.1). Besides the design anomalies, the anti-pattern detection algorithm found some domain errors, as shown in Table 4.

These results show that the classification of CSO concepts is well defined, given the absence of taxonomic errors. However, the presence of design anomalies and domain errors can be explained by several reasons. The main reason is related to the ontology population process: CSO population is based on the providers' catalogues and on cloud registries. Since these data sources do not conform to a common standard, they can include errors in their textual content. Moreover, they do not provide a complete cloud service description. This is why concepts such as VMType and Service Capability have some missing information within CSO. Figure 9 shows the detected anti-pattern error, namely Invalid VMType Characteristic, and its correction recommendations. A complete demonstration of the CSO evaluation can be found online 8.

6. Conclusion and Future Work

Ontology evaluation is an active research domain which is necessary mainly to improve ontology quality; in fact, only a well-defined ontology gets adopted. In this paper, we proposed an ontology evaluation approach composed of two phases: an evaluation process and an ontology correction. We applied the FaCT++ reasoner to validate the consistency of the previously proposed CSO. We then applied the anti-pattern detection algorithm based on SPARQL queries in order to detect errors and anomalies not covered by this reasoner. To help ontologists revise CSO, the algorithm provides a set of correction recommendations. The experimental results showed that our algorithm is consistent and efficient, as it can detect errors within CSO and present correction recommendations.

8https://sites.google.com/site/csoevaluation/cso-evaluation-s-demonstration



Table 3. Algorithm's evaluation results

Category          | Number of Detected Errors | Precision | Recall
Taxonomic Errors  | 5                         | 5/5       | 5/5
Design Anomalies  | 3                         | 3/3       | 3/3
Domain Errors     | 5                         | 8/8       | 8/8
Total             | 13                        | 16/16     | 16/16

Table 4. Errors detected within CSO per category

Errors            | Number of Detected Errors | Percentage of CSO's Errors
Taxonomic Errors  | 0/5                       | 0%
Design Anomalies  | 3/3                       | 100%
Domain Errors     | 2/8                       | 25%
Total             | 5/16                      | 31.25%
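The totals in Table 4 follow directly from the per-category counts; the snippet below is a quick arithmetic check with the values copied from the table.

```python
# Per-category (errors found in CSO, errors proposed) counts from Table 4.
detected = {"taxonomic": (0, 5), "design": (3, 3), "domain": (2, 8)}

found = sum(f for f, _ in detected.values())
proposed = sum(p for _, p in detected.values())
print(found, proposed, 100 * found / proposed)  # 5 16 31.25
```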

Figure 9. Detected Invalid VMType Characteristic Error

As future work, we plan to extend our proposed patterns and anti-patterns with ones dealing with semantic evaluation. In addition, we intend to integrate CSO into a cloud federation environment and then interrogate it with the users' requirements.

References

[1] A. Gangemi and V. Presutti. Ontology Design Patterns. In Handbook on Ontologies, pages 221–243. International Handbooks on Information Systems. Springer Berlin Heidelberg, 2nd edition, 2009.

[2] A. Gómez-Pérez, M. Fernández-López, and O. Corcho. Ontological Engineering: With Examples from the Areas of Knowledge Management, e-Commerce and the Semantic Web. Advanced Information and Knowledge Processing. Springer-Verlag London, 1st edition, 2004.

[3] A. V. Dastjerdi, S. K. Garg, O. F. Rana, and R. Buyya. CloudPick: a framework for QoS-aware and ontology-based service deployment across clouds. Software: Practice and Experience, 45(2):197–231, 2015.

[4] K. Dentler, R. Cornet, A. ten Teije, and N. de Keizer. Comparison of reasoners for large ontologies in the OWL 2 EL profile. Semantic Web, 2(2):71–87, 2011.

[5] M. Fahad, M. A. Qadir, and M. W. Noshairwan. Ontological errors: inconsistency, incompleteness and redundancy. In ICEIS (3-2), pages 253–285, 2008.

[6] M. Fahad, M. A. Qadir, and S. A. H. Shah. Evaluation of ontologies and DL reasoners. In Intelligent Information Processing IV, 5th IFIP International Conference on Intelligent Information Processing, October 19-22, 2008, Beijing, China, pages 17–27, 2008.

[7] M. Rekik, K. Boukadi, and H. Ben-Abdallah. Cloud description ontology for service discovery and selection. In Proceedings of the 10th International Conference on Software Engineering and Applications, pages 26–36, 2015.

[8] C. Roussey, O. Corcho, and L. M. Vilches-Blázquez. A catalogue of OWL ontology antipatterns. In Proceedings of the Fifth International Conference on Knowledge Capture, pages 205–206. ACM, 2009.

[9] C. Roussey and O. Zamazal. Antipattern detection: how to debug an ontology without a reasoner. In Proceedings of the Second International Workshop on Debugging Ontologies and Ontology Mappings, Montpellier, France, May 27, 2013, pages 45–56, 2013.

[10] R. Shearer, B. Motik, and I. Horrocks. HermiT: A highly-efficient OWL reasoner. In OWLED, volume 432, page 91, 2008.

[11] A. Souza, N. Cacho, T. Batista, and F. Lopes. Cloud Query Manager: Using semantic web concepts to avoid IaaS cloud lock-in. In IEEE 8th International Conference on Cloud Computing, pages 702–709, June 2015.

[12] D. Tsarkov and I. Horrocks. FaCT++ description logic reasoner: System description. In Automated Reasoning, pages 292–297. Springer, 2006.

[13] D. Vrandečić. Ontology evaluation. Springer, 2009.
