[IEEE 2011 7th International Conference on Natural Language Processing and Knowledge Engineering (NLPKE) - Tokushima, Japan (2011.11.27-2011.11.29)]

FOEval: Full Ontology Evaluation Model and Perspectives

Abderrazak BACHIR BOUIADJRA

Computer Science Department Djilali Liabes University Sidi Bel Abbes, Algeria

[email protected]

Abstract- In this research, a new evaluation model for choosing an adequate ontology that fits user requirements is proposed. The proposed model presents two main features distinct from previous research models: first, it enables users to select, from a set of proposed metrics, those that will help in the ontology evaluation process, and to assign weights to each one based on its assumed impact on this process; second, it enables users to evaluate locally stored ontologies, and/or to request available ontologies from search engines. The main goal of this model is to ease the ontology evaluation task for users wishing to reuse available ontologies, enabling them to choose the ontology most adequate to their requirements.

Keywords: ontology, ontology evaluation, ontology ranking

I. INTRODUCTION

Ontologies have been shown to be beneficial for representing domain knowledge, and are quickly becoming the backbone of the Semantic Web; this has led to the development of many ontologies in different domains. Developed ontologies need to be evaluated, to ensure their correctness and quality during their construction process. Likewise, users facing a large number of available ontologies need a way of assessing them and deciding which one fits their requirements best. The need for ontology evaluation approaches and tools is crucial as ontology development and reuse become increasingly important.

The rest of the present paper is organized as follows. Section 2 surveys ontology evaluation approaches according to some evaluation criteria. Section 3 reviews different tools developed for ontology evaluation. In Section 4, we present our ontology evaluation model "FOEval: Full Ontology Evaluation". Finally, we conclude this paper and give essential future research directions in Section 5.

II. STATE OF THE ART

Knowing that there is no single unifying definition of what constitutes ontology evaluation [4], we present in this section a review of the literature of different ontology evaluation approaches by answering four issues that will help us to classify them:

978-1-61284-729-0/11/$26.00 ©2011 IEEE


Sidi-Mohamed BENSLIMANE

Computer Science Department Djilali Liabes University Sidi Bel Abbes, Algeria

[email protected]

A. What should be evaluated?

A variety of ontology evaluation approaches have been established, depending on the perspective of what should be evaluated. Most of them focus on evaluating the whole ontology; others focus on partial evaluation of the ontology, in order to reuse it in an ontology engineering task [16].

B. Why it should be evaluated?

We divide ontology evaluation goals into validity evaluation and quality evaluation. We define validity evaluation as the process that evaluates ontologies to guarantee their freedom from any formal or semantic error [2], [10]. We define quality evaluation as the process that evaluates the quality and the adequacy of an ontology from the point of view of particular predefined criteria, for use in a specific context and for a specific purpose. A variety of metrics for ontology quality evaluation have been proposed in the literature, among which: comprehensiveness, richness, completeness, interpretability, adaptability and reusability [12].

C. When it should be evaluated?

Ontology evaluation is an important issue that must be addressed throughout the ontology life cycle; we divide it into four principal steps:

• Before the ontology building process: to evaluate the resources used to build the ontology [5].

• During the ontology building process: to guarantee the ontology's freedom from all errors [2], [20].

• During the ontology evolution process: to assess the effect of changes and to verify whether the ontology quality has increased or decreased, specifically in an automatic or semi-automatic ontology engineering approach [13], [14], [17].

• Before reusing the ontology: to choose, amongst a set of available ontologies, the most appropriate to user needs [2], [12].

D. Based on what it should be evaluated?

We divide the bases of ontology evaluation as follows:

• Corpus-based evaluation: used to estimate empirically the accuracy and the coverage of the ontology.

• Gold-standard-based evaluation: compares candidate ontologies to a gold-standard ontology that serves as a reference.

• Task-based evaluation: looks at how the results of an ontology-based application are affected by the use of an ontology.

• Expert-based evaluation: ontologies are presented to human experts who judge how far the developed ontology is correct.

• Criteria-based evaluation: measures how far an ontology adheres to desirable criteria.


III. EVALUATION TOOLS

Several ontology evaluation tools have been developed in recent years. They differ according to the issues described above. We present the most important of them below:

• Swoogle[3]: is an ontology search engine that offers a limited search facility that can be interpreted as topic coverage. Given a search keyword Swoogle can retrieve ontologies that contain a class or a relation matching (lexically) the given keyword.

• OntoKhoj [1]: is an ontology search engine that extends the traditional (keyword-based) search approach to consider word senses when ranking ontologies covering a topic. It accommodates a manual sense disambiguation process; then, according to the sense chosen by the user, hypernyms and synonyms are selected from WordNet.

• Watson [11]: is an ontology search engine that has an efficient mechanism for finding the best ontologies, taking into account equivalent ontologies. The authors consider that two ontologies describing the same vocabulary are semantically equivalent if they express the same meaning, even if they are written differently from a syntactic point of view. Obtaining non-redundant results is a good way to increase efficiency and improve robustness.

• OntoQA [6]: OntoQA is a tool that measures the quality of an ontology from the consumer perspective, using schema and instance metrics. It takes as input a crawled populated ontology or a set of user-supplied search terms, and ranks ontologies according to metrics related to various aspects of an ontology.

• OntoCAT [7]: OntoCAT provides a comprehensive set of metrics for use by the ontology consumer or knowledge engineer to assist in ontology evaluation for reuse. This evaluation process is focused on ontology summaries that are based on size, structural, hub and root properties.

• AKTiveRank [9]: AKTiveRank is a tool that ranks ontologies using a set of ontology structure based metrics. It takes keywords as input, and queries Swoogle for the given keywords in order to extract candidate ontologies; then it applies measures based on the coverage and on the structure of the ontologies to rank them. Its shortcoming is that its measures are at the "class level".

• OS_Rank [15]: OS_Rank is an ontology evaluation system that evaluates and ranks ontologies based on class name, on the detail degree of the searched class, on the number of semantic relations of the searched class, and on the interest domain, using WordNet to resolve various semantic problems.

IV. FULL ONTOLOGY EVALUATION MODEL

In this section, a new evaluation model is described. The main goal of this model is to ease the ontology evaluation task for users wishing to reuse available ontologies, enabling them to choose the ontology most adequate to their requirements. The proposed model is considered a ranking and selection tool that presents three main features distinct from other models: first, it enables users to select, from a set of proposed metrics, those that will help in the ontology evaluation process, and to assign weights to each one based on its assumed impact on this process; second, it enables users to evaluate locally stored and/or searched ontologies (from different search engines); third, it has an advanced mechanism for capturing structural and semantic information about the user-desired domain classes and relations.

A. FOEval Architecture:

Figure 1 shows the current architecture of FOEval.

[Architecture diagram; output: textual and graphical representation of ranked ontologies.]

Figure 1. FOEval Architecture

The goal of the first step, "Prepare", is to decide which ontologies will be evaluated and ranked (locally stored ontologies and/or searched ontologies). The goal of the second step, "Metrics", is to decide, from a set of proposed metrics, which ones will be used in the evaluation process, and to assign weights to each used metric based on its assumed impact on this process. In the "Evaluate" step, candidate ontologies are evaluated for each used metric and given a numerical score. An overall score for the ontology is then computed as a weighted sum of its per-metric scores.

B. FOEval Prepare:

The goal of this step is to decide which ontologies will be evaluated and ranked: introduced ontologies and/or searched ontologies. FOEval proposes an advanced ontology search mechanism based on the features below:

• FOEval can evaluate only introduced ontologies;

• FOEval can evaluate only searched ontologies;

• FOEval can evaluate both introduced and searched ontologies;

• FOEval can request different search engines (Swoogle, Watson) for available ontologies.


A FOEval request can be built from keywords provided by the user alone; from user-selected synonyms and hypernyms based on WordNet; and from important classes extracted from the introduced ontologies. We consider a class as important if it has a large number of hierarchical and semantic relations; it can also be considered important if a large number of other important classes are linked to it.
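The "important class" notion above (many relations, reinforced by links from other important classes) can be sketched as an iterative, PageRank-like scoring over the ontology's relation graph. This is an illustrative assumption, not the authors' implementation; the function name and edge encoding are hypothetical.

```python
# Sketch (not the authors' code): a class is important if it has many
# relations, or if other important classes link to it. The iterative
# reinforcement below is a PageRank-like assumption.

def important_classes(relations, iterations=10, damping=0.85):
    """relations: list of (source_class, target_class) edges."""
    classes = {c for edge in relations for c in edge}
    score = {c: 1.0 for c in classes}
    for _ in range(iterations):
        new = {c: (1 - damping) for c in classes}
        for src, dst in relations:
            # each class passes a share of its importance to classes it links to
            out_deg = sum(1 for s, _ in relations if s == src)
            new[dst] += damping * score[src] / out_deg
        score = new
    return sorted(classes, key=score.get, reverse=True)

edges = [("Student", "Person"), ("Professor", "Person"),
         ("Course", "Student"), ("Course", "Professor")]
print(important_classes(edges)[0])  # "Person" gathers the most links
```

The top-ranked class names would then be sent as extra query terms to the search engines.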

C. FOEval Metrics:

In this research, we propose to evaluate and rank candidate ontologies using a rich set of metrics that include: coverage, richness, detail-level, comprehensiveness, connectedness and computational efficiency.

• Coverage: coverage of terms consists of class coverage and relation coverage. Class coverage represents how many class names in the ontology match the searched keywords, while relation coverage represents how many relation names in the ontology match the searched keywords [19].

COV(T,O) = w1·CCov(T,O) + w2·RCov(T,O)

CCov(T,O) = Σt∈T Σc∈C(O) I(c,t)

RCov(T,O) = Σt∈T Σci,cj∈C(O) J(ci,cj,t)

w1 and w2 are sub-metric weights; I(c,t) = 1 if the name of class c matches term t, else 0; J(ci,cj,t) = 1 if a relation "t" between ci and cj exists, else 0; c, ci and cj are classes, O is an ontology, t and T are searched terms.
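The coverage metric above reduces to counting term matches against class names and relation names. A minimal sketch, assuming case-insensitive exact matching (the paper does not specify the matching rule) and default weights of 1:

```python
# Sketch of the coverage metric: class coverage + relation coverage.
# Exact lowercase matching and the weights are assumptions.

def coverage(terms, class_names, relation_names, w1=1.0, w2=1.0):
    ccov = sum(1 for t in terms for c in class_names
               if t.lower() == c.lower())          # I(c, t)
    rcov = sum(1 for t in terms for r in relation_names
               if t.lower() == r.lower())          # J(ci, cj, t)
    return w1 * ccov + w2 * rcov

classes = ["Student", "Course", "Person"]
relations = ["enrolledIn", "teaches"]
print(coverage(["student", "teaches"], classes, relations))  # 2.0
```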

• Richness: ontology richness can be measured on different levels:

Relation richness: is the metric that reflects the diversity of relations and the placement of relations in the ontology. An ontology that contains many relations other than hierarchical relations is richer than a taxonomy with only hierarchical relationships.

Attribute richness: is the average number of attributes defined for each class, which can indicate the amount of information pertaining to instance data; the more attributes are defined, the more knowledge the ontology conveys.

Formally, we define ontology richness (OR) as the sum of relationship richness (rR) and attribute richness (aR). The relationship richness (rR) is defined as the number of non-hierarchical relationships sP defined in the ontology, divided by the number of all relationships P. The attribute richness (aR) is defined as the number of attributes for all classes att divided by the number of classes C.

OR(O) = w1·rR(O) + w2·aR(O)

rR(O) = |sP| / |P|, aR(O) = |att| / |C|

w1 and w2 are sub-metric weights
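The richness definitions above translate directly into two ratios. A minimal sketch, assuming the counts are already extracted from the ontology and default weights of 1:

```python
# Sketch of ontology richness (OR) from the definitions above:
# relationship richness rR = |sP| / |P| (non-hierarchical / all relations),
# attribute richness aR = |att| / |C| (attributes / classes).

def ontology_richness(n_nonhier, n_relations, n_attributes, n_classes,
                      w1=1.0, w2=1.0):
    rR = n_nonhier / n_relations if n_relations else 0.0
    aR = n_attributes / n_classes if n_classes else 0.0
    return w1 * rR + w2 * aR

# 4 of 10 relations are non-hierarchical, 12 attributes over 6 classes:
print(ontology_richness(4, 10, 12, 6))  # 0.4 + 2.0 = 2.4
```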

• Detail-Level: this measure describes the level of detail of the ontology, both globally and with respect to the searched terms:

DL(T,O) = w1·Gdl(O) + w2·Sdl(T,O)

w1 and w2 are sub-metric weights

The global detail level (Gdl): is a good indication of how well knowledge is grouped into different categories and subcategories in the ontology. This measure can distinguish a horizontal (flat) ontology from a vertical one. Formally, we define the global detail level Gdl as the average number of subclasses per class:

Gdl(O) = Σc∈C(O) |sub(c,O)| / |C|

|sub(c,O)| is the number of subclasses of class c

The specific detail level (Sdl): is a good indication of the importance of the searched terms in the ontology. We consider that an ontology containing a searched term "student" as a class with many sub- and upper-classes is preferable to an ontology containing the class "student" without any subclass. Formally, we define the specific detail level Sdl as the sum of four parameters: first, the average number of subclasses and upper classes of the searched class; second, the number of relations of the searched class; third, the number of relations of the subclasses of the searched class; fourth, the number of relations of the upper classes of the searched class.

Sdl(T,O) = Σt∈T ( |sub(t,O)| + |upp(t,O)| ) / |T|

|sub(t,O)| is the number of subclasses of class t; |upp(t,O)| is the number of upper classes of class t; |T| is the number of searched terms
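The Sdl formula above averages, over the searched terms, the subclass and upper-class counts of the matching class. A minimal sketch, assuming a simple dict-based encoding of the class hierarchy (an illustrative data structure, not the authors'):

```python
# Sketch of the specific detail level (Sdl): average, over searched terms,
# of subclass + upper-class counts for the matching class.

def specific_detail_level(terms, subclasses, superclasses):
    """subclasses/superclasses: class name -> list of related class names."""
    total = sum(len(subclasses.get(t, [])) + len(superclasses.get(t, []))
                for t in terms)
    return total / len(terms)

subs = {"Student": ["PhDStudent", "Undergraduate"], "Course": []}
sups = {"Student": ["Person"], "Course": ["Event"]}
print(specific_detail_level(["Student", "Course"], subs, sups))  # (3 + 1) / 2 = 2.0
```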

• Comprehensiveness: is the metric that assesses the content comprehensiveness of ontologies. Formally, we define ontology comprehensiveness (OC) in terms of the average number of annotated classes (Ac), the average number of annotated relations (Ar), and the average number of instances per class (Ic).

OC(O) = ( w1·Ac + w2·Ic ) / |C| + w3·Ar / |R|

Ac = Σc∈C(O) Ann(c,O)

Ic = Σc∈C(O) I(c,O)

Ar = Σci,cj∈C(O) Ann(ci,cj,O)

w1, w2 and w3 are sub-metric weights; Ann(c,O) = 1 if the class c is annotated; I(c,O) is the number of instances of class c; Ann(ci,cj,O) = 1 if a relation between ci and cj exists and is annotated; |C| is the number of all classes in the ontology; |R| is the number of all relations in the ontology.

• Computational Efficiency: this principle favors an ontology that can be successfully and easily processed, in particular regarding the speed that reasoners need to fulfill the required tasks, be it query answering, classification, or consistency checking. The size of the ontology, the numbers of classes and relations, and other parameters affect the efficiency of the ontology [4].

Formally, we define Ontology Computational Efficiency (OCE) as the sum of: the average number of classes (AnC), the average number of subclasses per class (Ansc), the average number of relations (Anr), the average number of relations per class (Anrc), and the average ontology size (Aos).

OCE(O) = w1·|C(O)| / |mC(O)| + w2·( |sC(c,O)| + |R(O)| ) / |C(O)| + w3·|R(O)| / |mR(O)| + w4·Size(O) / mSize(O)

w1, w2, w3 and w4 are sub-metric weights; |C(O)| is the number of classes of the evaluated ontology; |mC(O)| is the biggest number of classes among candidate ontologies; |sC(c,O)| is the number of subclasses over all classes of the ontology; |R(O)| is the number of relations of the evaluated ontology; |mR(O)| is the biggest number of relations among candidate ontologies; Size(O) is the evaluated ontology's size in kilobytes; mSize(O) is the maximum candidate ontology size in kilobytes.
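The OCE terms above scale each quantity by the maximum over the candidate set, so each falls in [0, 1] before weighting. A minimal sketch, assuming the counts are stored in plain dicts (field names are illustrative):

```python
# Sketch of computational efficiency (OCE), following the reconstructed
# formula: class count, subclass+relation density, relation count, and
# size, each scaled by the maximum over the candidates.

def oce(onto, candidates, w=(1.0, 1.0, 1.0, 1.0)):
    """onto/candidates: dicts with n_classes, n_subclasses, n_relations, size_kb."""
    max_c = max(o["n_classes"] for o in candidates)
    max_r = max(o["n_relations"] for o in candidates)
    max_s = max(o["size_kb"] for o in candidates)
    return (w[0] * onto["n_classes"] / max_c
            + w[1] * (onto["n_subclasses"] + onto["n_relations"]) / onto["n_classes"]
            + w[2] * onto["n_relations"] / max_r
            + w[3] * onto["size_kb"] / max_s)

o1 = {"n_classes": 10, "n_subclasses": 8, "n_relations": 5, "size_kb": 17}
o2 = {"n_classes": 40, "n_subclasses": 30, "n_relations": 20, "size_kb": 32}
print(round(oce(o1, [o1, o2]), 3))
```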

D. FOEval Evaluation:

The first FOEval evaluation feature is its specific evaluation of candidate ontologies, where users can include in the evaluation process any or all metrics depending on their needs; for each one it calculates a numerical score.

The second FOEval evaluation feature is its global and specific metric weights, where each metric is globally weighted to give it more or less importance than another, based on the evaluation goals and on user needs. In addition, each sub-metric has a specific weight that helps users, for example, to evaluate only the relation richness of candidate ontologies rather than the global richness, which also includes the attribute richness. The default weight value is one (1); optionally, the user can change it to another value in [0, 10], where zero means that the metric or sub-metric is disabled, and ten means that it is very important in the evaluation process.


Finally, FOEval completes the evaluation by computing an overall score for each candidate ontology as a sum of its metric and sub-metric scores, calculated using normalized values to avoid any undesirable influence of one metric or sub-metric on another.

Formally, we define the FOEval ontology evaluation function as below:

FOEval(Ok) = k1·nCOV(Ok) + k2·nOR(Ok) + k3·nDL(Ok) + k4·nOC(Ok) + k5·nOCE(Ok)

k1, k2, k3, k4 and k5 are global per-metric weights; Ok are candidate ontologies; nMetric is the normalized value of the corresponding metric (min = 0 and max = 1): nMetric(Ok) = Metric(Ok) / maxj Metric(Oj)
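The overall score above is a weighted sum of metrics normalized over the candidate set. A minimal sketch, assuming per-metric raw scores are already computed and normalization divides by the maximum among candidates (the exact normalization rule is partly reconstructed from the formula):

```python
# Sketch of the overall FOEval score: normalize each metric by its maximum
# over the candidate set, then take a weighted sum per ontology.

def foeval_scores(raw_scores, weights):
    """raw_scores: {ontology_id: {metric_name: value}}; weights: {metric_name: k}."""
    metrics = weights.keys()
    maxima = {m: max(s[m] for s in raw_scores.values()) or 1.0 for m in metrics}
    return {oid: sum(weights[m] * s[m] / maxima[m] for m in metrics)
            for oid, s in raw_scores.items()}

raw = {"O1": {"COV": 2.0, "OR": 2.4}, "O2": {"COV": 4.0, "OR": 1.2}}
ranked = foeval_scores(raw, {"COV": 2.0, "OR": 1.0})
print(max(ranked, key=ranked.get))  # O2: 2*1.0 + 0.5 = 2.5 beats O1: 2*0.5 + 1.0 = 2.0
```

Weighting COV twice as heavily as OR flips the ranking toward the better-covering ontology, illustrating how the user-assigned weights steer the final order.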

E. FOEval Results:

The last step is to show the evaluation results. We consider this part an important task that needs further work and enhancement, because users will make decisions based on this output. For this, we propose to use a textual and a graphical representation of ranked ontologies, including some helpful information such as: class and relation counts, size, date, path, and global and per-metric results.

[Figure 2 shows a ranked list of five ontologies; for each, its size (e.g. 17K), date, path and a "More..." link to per-metric details are displayed.]

Figure 2. FOEval Textual Result

[Figure 3 shows a graphical view of the ranked ontologies.]

Figure 3. FOEval Graphical Result


The graphical representation displays ontology summaries based on [18] and [21]. We add two main ideas in this part:

(1) Global Summary: allows the user to show a greater or lesser degree of detail of an ontology.

(2) Partial Summary: allows the user to show a greater or lesser degree of detail on a specific part or on a specific class. These two ideas help users in full or partial ontology evaluation.

These two points can be very helpful for FOEval users because, before making decisions about the evaluation, they offer an advanced view and important information about what users need.

V. CONCLUSION AND PERSPECTIVES

In this paper, we have addressed a novel classification of ontology evaluation approaches according to four issues. This classification summarizes the main efforts performed in this area. We have presented the principal features of FOEval, which is tunable, requires minimal user involvement, and would be very useful in many ontology evaluation scenarios:

Evaluate locally stored and/or searched ontologies from different search engines.

Evaluate ontologies to ensure their correctness and to assess their quality during their construction process.

Evaluate ontologies or versions of an ontology to assess the effect of changes during their evolution process.

Evaluate ontologies to choose the most appropriate to user needs before their reuse.

FOEval offers several benefits:

First, it strengthens the theoretical base for ontology evaluation by proposing a new model and rich metrics.

Second, it can evaluate only locally available ontologies, only searched ontologies, or any combination of available and searched ontologies.

Third, it requests search engines using searched terms, important class names of available ontologies, hypernyms and synonyms selected from WordNet according to the sense chosen by the user.

Fourth, it avoids obtaining redundant results and equivalent ontologies, based on the Watson search engine mechanisms [11].

In addition, it enables ontology users to evaluate ontologies easily; to decide which metric will be used in this process; and to assign weights to each used metric and sub-metric depending on their needs.

We plan on making it a web-based tool, where users can evaluate ontology quality using their file paths. We also plan to offer the possibility of introducing a corpus or a gold-standard ontology to serve as a reference in the evaluation process. Finally, we plan to add other metrics, to enhance our model and tool, and to meet user requirements. In our opinion, future work in this area should focus particularly on quality evaluation, as the number of available ontologies continues to grow.


VI. REFERENCES

[1] Patel, C., Supekar, K., Lee, Y., and Park, E. K.: OntoKhoj: A Semantic Web Portal for Ontology Searching, Ranking and Classification. In Proceedings of the Workshop on Web Information and Data Management. ACM, 2003.

[2] Gomez-Perez, A.: Ontology evaluation. In Steffen Staab and Rudi Studer, editors, Handbook on Ontologies, First Edition, chapter 13, pages 251-274. Springer, 2004.

[3] Ding, L., Finin, T., Joshi, A., Pan, R., Scott Cost, R., Peng, Y., Reddivari, P., Doshi, V., and Sachs, J.: Swoogle: A Search and Metadata Engine for the Semantic Web. In Proceedings of the 13th CIKM, 2004.

[4] Gangemi, A., Catenacci, C., Ciaramita, M., and Lehmann, J.: Ontology Evaluation and Validation: An Integrated Formal Model for the Quality Diagnostic Task, 2005.

[5] Tarhuni Marwa, Rodolphe Meyer and Cheikh Omar Bagayoko. Master's thesis, Paris V University, 2005.

[6] Tartir, S., Arpinar, I.B., Moore, M., Sheth, A.P., and Aleman-Meza, B.: OntoQA: Metric-Based Ontology Quality Analysis. IEEE Workshop on Knowledge Acquisition from Distributed, Autonomous, Semantically Heterogeneous Data and Knowledge Sources, Houston, TX, USA, 2005.

[7] Cross, V. and Pal, A.: OntoCAT: An Ontology Consumer Analysis Tool and Its Use on Product Services Categorization Standards. In Proceedings of the First International Workshop on Applications and Business Aspects of the Semantic Web, 2006.

[8] Sabou, M., Lopez, V., Motta, E., and Uren, V.: Ontology Selection: Evaluation on the Real Semantic Web. Fourth International Workshop on Evaluation of Ontologies for the Web (EON2006), UK, 2006.

[9] Jones, M. and Alani, H.: Content-based ontology ranking. In Proceedings of the 9th Int. Protege Conf., CA, 2006.

[10] Fahad, M., Qadir, M.A., Noshairwan, W.: Semantic Inconsistency Errors in Ontologies. Proc. of GrC 07, Silicon Valley, USA. IEEE CS, 2007.

[11] d'Aquin, M., Baldassarre, C., Gridinoc, L., Sabou, M., Angeletou, S., and Motta, E.: Watson: Supporting next generation semantic web applications. In WWW/Internet conference, Spain, 2007.

[12] Obrst, L., Ceusters, W., Mani, I., Ray, S., and Smith, B.: The evaluation of ontologies. In Christopher J.O. Baker and Kei-Hoi Cheung, editors, Revolutionizing Knowledge Discovery in the Life Sciences, chapter 7, pages 139-158. Springer, 2007.

[13] Dellschaft, K. and Staab, S.: Strategies for the Evaluation of Ontology Learning. In Buitelaar, P. and Cimiano, P., editors, Ontology Learning and Population: Bridging the Gap Between Text and Knowledge, pages 253-272. IOS Press, 2008.

[14] Djedidi, R. and Aufaure, M.-A.: OWL change management patterns (Patrons de gestion de changements OWL). Thesis prepared at the Computer Science Department of Supelec, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette Cedex, France, 2009.

[15] Wei, Y. and Chen, J.: "Ranking Ontology based on Structure Analysis". Second International Symposium on Knowledge Acquisition and Modeling, IEEE, 2009.

[16] d'Aquin, M. and Lewen, H.: Cupboard - a place to expose your ontologies to applications and the community. In The Semantic Web: Research and Applications, 6th European Semantic Web Conference, ESWC 2009.

[17] Murdock, J., Buckner, C., and Allen, C.: "Two Methods for Evaluating Dynamic Ontologies". Indiana University, Bloomington, 2010.

[18] Li N., Motta E. and d'Aquin M.: "Ontology summarization: an analysis and an evaluation". The International Workshop on Evaluation of Semantic Technologies, Shanghai, China. IWEST 2010.

[19] Oh, S. and Yeom, H.Y.: "User-Centered Evaluation Model for Ontology Selection". IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 2010.

[20] Ohta M., Kozaki K. and Mizoguchi R.: "A Quality Assurance Framework for Ontology Construction and Refinement" Proc. of 7th Atlantic Web Intelligence Conference. (Switzerland) AWIC 2011.

[21] Cheng, G., Ge, W., and Qu, Y.: "Generating Summaries for Ontology Search". In Conference Companion on World Wide Web, (India) 2011.