

Improving Ontology Recommendation and Reuse in WebCORE by Collaborative Assessments

Iván Cantador, Miriam Fernández, Pablo Castells
Escuela Politécnica Superior, Universidad Autónoma de Madrid
Campus de Cantoblanco, 28049, Madrid, Spain
{ivan.cantador, miriam.fernandez, pablo.castells}@uam.es

ABSTRACT
In this work, we present an extension of CORE [8], a tool for Collaborative Ontology Reuse and Evaluation. The system receives an informal description of a specific semantic domain and determines which ontologies from a repository are the most appropriate to describe the given domain. For this task, the environment is divided into three modules. The first component receives the problem description as a set of terms, and allows the user to refine and enlarge it using WordNet. The second module applies multiple automatic criteria to evaluate the ontologies of the repository, and determines which ones fit the problem description best. A ranked list of ontologies is returned for each criterion, and the lists are combined by means of rank fusion techniques. Finally, the third component uses manual user evaluations in order to incorporate a human, collaborative assessment of the ontologies. The new version of the system incorporates several novelties, such as its implementation as a web application; the incorporation of an NLP module to manage the problem definitions; modifications on the automatic ontology retrieval strategies; and a collaborative framework to find potentially relevant terms according to previous user queries. Finally, we present some early experiments on ontology retrieval and evaluation, showing the benefits of our system.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval – information filtering, retrieval models, selection process.

General Terms
Algorithms, Measurement, Human Factors.

Keywords
Ontology evaluation, ontology reuse, rank fusion, collaborative filtering, WordNet.

1. INTRODUCTION
The Web can be considered as a live entity that grows and evolves fast over time. The amount of content stored and shared on the Web is increasing quickly and continuously. The global body of multimedia resources on the Internet is undergoing significant growth, reaching a presence comparable to that of traditional text content. The consequences of this enlargement are well-known difficulties and problems, such as finding and properly managing the sheer amount of sparse information.

To overcome these limitations, the so-called "Semantic Web" trend has emerged with the aim of helping machines process information, enabling browsers or other software agents to automatically find, share and combine information in consistent ways. As put by Tim Berners-Lee in 1999, "I have a dream for the Web in which computers become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A 'Semantic Web', which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The 'intelligent agents' people have touted for ages will finally materialize".

At the core of these new technologies, ontologies are envisioned as key elements to represent knowledge that can be understood, used and shared among distributed applications and machines. However, ontological knowledge mining and development are difficult and costly tasks that require major engineering efforts.
Developing an ontology from scratch requires the expertise of at least two different individuals: an ontology engineer, who ensures correctness during the ontology design and development, and a domain expert, responsible for capturing the semantics of a specific field into the ontology. In this context, ontology reuse becomes an essential need in order to exploit past and current efforts and achievements.

In this scenario, it is also important to emphasize that ontologies, as well as content, do not stop evolving and growing within the Web. They are part of its wave of growth and evolution, and they need to be managed and kept up to date in distributed environments. From this perspective, the initial efforts to collect ontologies in libraries [17] are not sufficient, and novel technologies are necessary to successfully retrieve this special kind of content.

Novel tools have recently been developed, such as ontology search engines [24], which represent an important first step towards automatically assessing and retrieving ontologies that satisfy user queries and requests. However, ontology reuse demands additional efforts to address the special needs and requirements of ontology engineers and practitioners. It is necessary to evaluate and measure specific ontology features, such as lexical vocabulary, relations [11], restrictions, consistency, correctness, etc., before making an adequate selection. Some of these features can be measured automatically, but others, like the correctness or the level of formality, require a human judgment to be assessed.

In this context, Web 2.0 is arising as a new trend where people collaborate and share their knowledge to successfully achieve their goals. New search engines like Technorati1 exploit blogs with the aim of finding not only the information that the user is looking for, but also the experts that might better answer the users' requirements. As put by David Sifry, one of the founders of Technorati, in an interview for a Spanish newspaper, "Internet has been transformed from the great library to the great conversation".

1 Technorati, blog search engine, http://technorati.com/

Technorati, in an interview for a Spanish newspaper, "Intern To obtain the most appropriate ontology and fulfil ontold been transformed from the great library to the engineers'requirements, search engines and libraries should be complemented with evaluation methodologies Following this aspiration, the work presented here aims to Ontology evaluation can be defined as assessing the quality and the adequacy of an ontology for being used in a specific context, automatic evaluation techniques with explicit users'opinions and for a specific goal. From our perspective, onto experiences. This work follows a previous approach for constitutes the cornerstone of ontology reuse because it faces the Collaborative Ontology Reuse and Evaluation over controlled complex task of evaluate, and consequently select the most epositories, named CORE [8]. For the work reported in this appropriate ontology on each situation. paper, the tool has been enhanced and adapted to the Web. Novel An overview of ontology evaluation approaches is presented in as AJAX, have been incorporated system for the design and implementation of the user interface. It evaluate an ontology by comparing it to a Golden Standard [11] limitations, such as handling large numbers of ontologies. The application and measuring the quality of the results that the different frameworks. Firstly, during the problem definition phase application returns [16 those that evaluate ontologies by showing other problem descriptions previously given by different documents)[5), and those based on human interaction to measure users. Secondly, during the ontology retrieval phase, the system the above approaches several evaluation levels are identified using other user evaluations and comments lexical, taxonomical, syntactic, semantic, contextual, and structural between others. Table I summarized these ideas Following Leonardo Da Vincis words, Wisdom is the daughter of experience", our tool aims to take a step forwards for helping Table 1. An overview of approaches to ontology evaluation users to be wise in exploiting other people's experience and expertise. Approach Golden Application Data Assessment The rest of the paper has been organized as follows. Section 2 standard summarizes some relevant work related to our system. Its Lexical entries, architecture is described in Section 3. Section 4 contains empirical esults obtained from early experiments done with a prototype of he system. Finally, several conclusions and future research lines Hierarc are given in Section 2 RELATED WORK 2.1 Ontology Evaluation application X Two well-known scenarios for ontology reuse have been identified in the Semantic Web area. The first one addresses the common problem of Structure specific domain. The second scenario envisions the not so architecture, design common but real situation in which Semantic Web applications Once the ontologies have been searched retrieved and evaluated need to automatically and dynamically find an ontology. In this the next step is to select the most appropriate one that fulfils user work. we focus our attention on the fist scenario. where users are or application goals. Some approaches for ontology selection have the ones who express their information needs. In this scenario, ntology reuse involves several areas such as ontology evaluation, complete study is presented to determine the connections betwee selection. 
search and ranking ontology selection and evaluation Several ontology libraries and search engines have been When the user and not the application is the one that demands an developed in the last few years to address the problem of ontology ontology, the selection task should be less categorical, returning search and retrieval. [6] presents a complete study of ontology ot only one but the set of the libraries (WebOnto, Ontolingua, SHOE, etc. ) where their esults according to the evaluation criteria, several ontology functionalities are evaluated attending to different criteria such ranking measures have been proposed in the literature. Some of them are presented in [2]and 3]. Both works aim to take a step ry beyond to the approaches based on the page- rank algorithm [24], olution for ontology retrieval, they suffer from the where ontologies are ranked considering the number of links limitation of not being opened to the web. In that sense, S between them, because this ranking methodology does not work [24] constitutes one of the biggest efforts carried out to for ontologies with poor connectivity and lack of referrals from index and search for ontologies distributed across the Web Garrett, J. J.(2005). AJAX. A New Approach to Web ApplicationsiNhttp://w

Following this aspiration, the work presented here aims to enhance ontology retrieval and recommendation by combining automatic evaluation techniques with explicit users' opinions and experiences. This work follows a previous approach for Collaborative Ontology Reuse and Evaluation over controlled repositories, named CORE [8]. For the work reported in this paper, the tool has been enhanced and adapted to the Web. Novel technologies, such as AJAX2, have been incorporated into the system for the design and implementation of the user interface. It has also been modified and improved to overcome previous limitations, such as handling large numbers of ontologies. The collaborative capabilities have also been extended within two different frameworks. Firstly, during the problem definition phase, the system helps users to express their needs and requirements by showing them problem descriptions previously given by other users. Secondly, during the ontology retrieval phase, the system helps users to enhance the automatic system recommendations by using other users' evaluations and comments.

Following Leonardo Da Vinci's words, "Wisdom is the daughter of experience", our tool aims to take a step forward in helping users to be wise in exploiting other people's experience and expertise.

The rest of the paper is organized as follows. Section 2 summarizes some relevant work related to our system. Its architecture is described in Section 3. Section 4 contains empirical results obtained from early experiments done with a prototype of the system. Finally, several conclusions and future research lines are given in Section 5.

2. RELATED WORK

2.1 Ontology Evaluation
Two well-known scenarios for ontology reuse have been identified in the Semantic Web area. The first one addresses the common problem of finding the most adequate ontologies for a specific domain. The second scenario envisions the not so common but real situation in which Semantic Web applications need to automatically and dynamically find an ontology. In this work, we focus our attention on the first scenario, where users are the ones who express their information needs. In this scenario, ontology reuse involves several areas such as ontology evaluation, selection, search and ranking.

Several ontology libraries and search engines have been developed in the last few years to address the problem of ontology search and retrieval. [6] presents a complete study of ontology libraries (WebOnto, Ontolingua, SHOE, etc.), where their functionalities are evaluated according to different criteria such as ontology management, ontology adaptation and ontology standardization. Although ontology libraries are a good temporary solution for ontology retrieval, they suffer from the current limitation of not being open to the Web. In that sense, Swoogle [24] constitutes one of the biggest efforts carried out to crawl, index and search for ontologies distributed across the Web.

To obtain the most appropriate ontology and fulfil ontology engineers' requirements, search engines and libraries should be complemented with evaluation methodologies. Ontology evaluation can be defined as assessing the quality and the adequacy of an ontology for being used in a specific context, for a specific goal.

2 Garrett, J. J. (2005). AJAX: A New Approach to Web Applications. http://www.adaptivepath.com/
From our perspective, ontology evaluation constitutes the cornerstone of ontology reuse, because it faces the complex task of evaluating, and consequently selecting, the most appropriate ontology in each situation.

An overview of ontology evaluation approaches is presented in [4], where four different categories are identified: those that evaluate an ontology by comparing it to a Golden Standard [11]; those that evaluate ontologies by plugging them into an application and measuring the quality of the results that the application returns [16]; those that evaluate ontologies by comparing them to unstructured or informal data (e.g. text documents) [5]; and those based on human interaction to measure ontology features not recognizable by machines [10]. In each of the above approaches several evaluation levels are identified: lexical, taxonomical, syntactic, semantic, contextual, and structural, among others. Table 1 summarizes these ideas.

Table 1. An overview of approaches to ontology evaluation

Level                                    | Golden standard | Application based | Data driven | Assessment by humans
Lexical entries, vocabulary, concept, data |       X       |         X         |      X      |          X
Hierarchy, taxonomy                        |       X       |         X         |      X      |          X
Other semantic relations                   |       X       |         X         |      X      |          X
Context, application                       |               |         X         |             |          X
Syntactic                                  |       X       |                   |             |          X
Structure, architecture, design            |               |                   |             |          X

Once the ontologies have been searched, retrieved and evaluated, the next step is to select the most appropriate one that fulfils user or application goals. Some approaches for ontology selection have been addressed in [20] and complemented in [19], where a complete study is presented to determine the connections between ontology selection and evaluation.

When the user, and not the application, is the one who demands an ontology, the selection task should be less categorical, returning not only one result but the set of the most suitable ones. To sort these results according to the evaluation criteria, several ontology ranking measures have been proposed in the literature. Some of them are presented in [2] and [3]. Both works aim to take a step beyond the approaches based on the PageRank algorithm [24], where ontologies are ranked considering the number of links between them, because this ranking methodology does not work for ontologies with poor connectivity and a lack of referrals from other ontologies.


As has been shown before, current ontology reuse approaches take advantage of ontology evaluation, search, retrieval, selection and ranking methodologies. All these areas provide different advantages to the process of ontology evaluation and reuse, but they do not exploit others related to the well-known Recommender Systems [1]: is it helpful to know other users' opinions to evaluate and select the most suitable ontology?

The collaboration between users has been addressed in the area of ontology design and construction [23]. In [14], the necessity of mechanisms for ontology maintenance is presented under scenarios like "ontology development in collaborative environments". Moreover, works such as [7] present tools and services to support the process of achieving consensus on common shared ontologies by geographically distributed groups. However, despite all these common scenarios where the users' collaboration is required for ontology design and construction, the use of collaborative tools for ontology evaluation is still a novel and incipient approach in the literature [8].

2.2 Recommender Systems
Collaborative filtering strategies make automatic predictions (filter) about the interests of a user by collecting taste information from many users (collaborating). This approach usually consists of two steps: a) look for users that have a similar rating pattern to that of the active user (the user for whom the prediction is done), and b) use the ratings of the users found in the previous step to compute the predictions for the active user. These predictions are specific to the user, unlike those given by simpler approaches that provide average scores for each item of interest, for example based on its number of votes.

Collaborative filtering is a widely explored field. Three main aspects typically distinguish the different techniques reported in the literature [13]: user profile representation and management, filtering method, and matching method.

User profile representation and management can be divided into five different tasks:
• Profile representation. Accurate profiles are vital for the content-based component (to ensure recommendations are appropriate) and the collaborative component (to ensure that users with similar profiles are in fact similar). The type of profile chosen in this work is the user-item ratings matrix (ontology evaluations based on specific criteria); a minimal sketch follows this list.
• Initial profile generation. The user is not usually willing to spend too much time defining her/his interests to create a personal profile. Moreover, user interests may change dynamically over time. The type of initial profile generation chosen in this work is a manual selection of values for only five specific evaluation criteria.
• Profile learning. User profiles can be learned or updated using different sources of information that are potentially representative of user interests. In our work, profile learning techniques are not used.
• The source of user input and feedback used to infer user interests and update user profiles. It can be obtained in two different ways: using information explicitly provided by the user, and using information implicitly observed in the user's interaction. Our system uses no feedback to update the user profiles.
• Profile adaptation. Techniques are needed to adapt the user profiles to new interests and to forget old ones as user interests evolve with time. Again, in our approach profile adaptation is done manually (manual update of ontology evaluations).
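The user-item ratings matrix named above can be pictured with a short sketch. This is an illustration only, assuming a plain in-memory store and hypothetical names rather than WebCORE's actual data model; the paper uses five criteria, of which four names appear in the text:

```python
# Hypothetical sketch of the user-item ratings matrix used as profile
# representation: each cell of the (user, ontology) matrix holds one
# evaluation, itself a set of per-criterion values.
CRITERIA = ("correctness", "readability", "flexibility", "level_of_formality")

# profiles[user][ontology] -> {criterion: value}
profiles: dict[str, dict[str, dict[str, int]]] = {}

def evaluate(user: str, ontology: str, values: dict[str, int]) -> None:
    """Manually record or update one user's evaluation of one ontology."""
    unknown = set(values) - set(CRITERIA)
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    profiles.setdefault(user, {})[ontology] = values

evaluate("alice", "gene_ontology", {"correctness": 4, "readability": 5})
```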
Filtering method. Items or actions are recommended to a user taking into account the available information (item content descriptions and user profiles). There are three main information filtering approaches for making recommendations:
• Demographic filtering: descriptions of people (e.g. age, gender) are used to learn the relationship between a single item and the type of people who like it.
• Content-based filtering: the user is recommended items based on the descriptions of items previously evaluated by other users. Content-based filtering is the chosen approach in our work (the system recommends ontologies using previous evaluations of those ontologies).
• Collaborative filtering: people with similar interests are matched, and then recommendations are made.

Matching method. It defines how user interests and item characteristics are compared. Two main approaches can be identified:
• User profile matching: people with similar interests are matched before making recommendations.
• User profile-item matching: a direct comparison is made between the user profile and the items. The degree of appropriateness of the ontologies is computed by taking into account previous evaluations of those ontologies.

In WebCORE, a new ontology evaluation measure based on collaborative filtering is proposed, considering users' interests and previous assessments of the ontologies.

3. SYSTEM ARCHITECTURE
As mentioned before, WebCORE is a web application for Collaborative Ontology Reuse and Evaluation. A user logs into the system via a web browser and, thanks to AJAX technology and the Google Web Toolkit3, dynamically describes a problem domain, searches for ontologies related to this domain, obtains relevant ontologies ranked by several lexical, taxonomic and collaborative criteria, and optionally evaluates by himself those ontologies that he likes or dislikes most.

In this section, we describe the server-side architecture of WebCORE. Figure 1 shows an overview of the system. We distinguish three different modules. The first one, the left module, receives the problem description (Golden Standard) as a full text or as a set of initial terms. In the first case, the system uses an NLP module to obtain the most relevant terms of the given text. The initial set of terms can also be modified and extended by the user using WordNet [12]. The second one, represented in the centre of the figure, allows the user to select a set of ontology evaluation techniques provided by the system to recover the ontologies closest to the given Golden Standard. Finally, the third one, on the right of the figure, is a collaborative module that re-ranks the list of recovered ontologies, taking into consideration previous feedback and evaluations of the users.

3 Google Web Toolkit, http://code.google.com/webtoolkit/

Examples TI=(genetics", NOUN, ROOT, 0). Ti is one of the root 几 ↓ expanded from any other term so its lexical parent is the empty T2=( biology”,NOUN,“ genetIcs”, HYPERNYM,1).T2isa entry of its parent is"genetics", it has been expanded by the hypernym"relation, and the number of relations that separates it rom the root term TI is I Figure 2 shows the interface of the Golden Standard Definition hase. In the left side of the screen, the current list of root terms is Figure 1. WebCORE architecture shown. The user can manually insert new root terms to this list giving their lexical entries and selecting their parts of speech. The 3.1 Golden standard definition correctness of these new insertions is controlled by verifying that all The first phase of our ontology recommender system is the the considered lexical entries belong to the WordNet repository. Golden Standard definition. as done in the first version of cOre Adding new terms, the final Golden Standard definition is [8], the user describes a domain of interest specifying a set of immediately updated the final list of (root and expanded) terms that relevant terms that will be searched through the concepts(classe represent the domain of the problem is shown in the bottom of the or instances)of the ontologies stored in the system. figure. The user can also make term expansion using WordNet. He selects one of the terms from the golden standard definition and the an improvement, WebCORE include NLP system shows him all its meanings contained in WordNet( top of the omponent that automatically retrieves the most terms figure). After he has chosen one of them, the system presents hir from a given text. moreover. we have added a three different lists with the synonyms, hyponyms and hypernyms component that continuously offers to the user of the term. The user can then selects one or more elements of thes the terms that have been used in those previous problem lists and add them to the expanded term list For each expansion, the descriptions in which a given term appears. depth of the new term is increased by one unit. This will be used 3.1.1 Term-based Problem Description later to measure the importance of the term within the Golden In our system, the Golden Standard is described by a set of initial Standard: the greater the depth of the derived term with respect to its root term the less its relevance will be terms. These terms can automatically be obtained by the Natural Language Processing(NLP)module, which uses 3.1.2 Collaborative Problem Description sitory of documents related to the specific domain in whicl In the problem definition phase a collaborate er is interested in. This NLP module accesses to the been added to the system(right side of Figure 2). This component epository of documents, and returns a list of pairs(lexical entry eads the term currently selected by the user and searches for all part of speech that roughly represents the domain of the problem the stored problem definitions that contain it. For each of these On the other hand, the list of initial (root) terms can be manually problem definitions, the rest of their terms and the number of problems in which they appear are retrieved and shown in the web The module also allows the user to WordNet [12] and some of the rel provides: hypernym, With this simple strategy the user is suggested the most popular hyponym and synonym. 
The new dded to the Golden terms, fact that could help him to better describe the domain in Standard using these relations might also be extended again, and which he is interested in. It is very often the case that a person has new terms car n iteratively be added to the very specific goals or interests, but does not know how to The final representation of the Golden Standard is defined as a correctly explain/describe them, and how to effectively find set of terms T(L, POS, L, R, Z)where solutions for them. With the retrieved terms. the user might iscover new ways to describe the problem domain and obtain LG is the set of lexical entries defined for the Golden better solutions in the ontology recommendation phase This follows somehow the ideas of the well known folksonomies POS corresponds to the different Parts Of Speech considered The term“ folksonomy” is a combination of“folk"and by WordNet: noun, adjective, verb and adverb ""taxonomy", and was firstly used by Thomas Vander Wal [22]in . Lu is the set of lexical entries of the golden Standard that R is the set of relations between terms of the Golden 4 Mathes, a Standard: synonmym, hypernym, hyponym and root(if a term omies: Cooperative Classification has not been obtained by expansion, but is one of the initial nd commt Shared metadata http://www.adamma om/academic/computer-mediated-

Figure 1. WebCORE architecture

3.1 Golden Standard Definition
The first phase of our ontology recommender system is the Golden Standard definition. As done in the first version of CORE [8], the user describes a domain of interest by specifying a set of relevant terms that will be searched through the concepts (classes or instances) of the ontologies stored in the system.

As an improvement, WebCORE includes an internal NLP component that automatically retrieves the most informative terms from a given text. Moreover, we have added a new collaborative component that continuously offers the user a ranked list of the terms that have been used in those previous problem descriptions in which a given term appears.

3.1.1 Term-based Problem Description
In our system, the Golden Standard is described by an initial set of terms. These terms can be obtained automatically by the internal Natural Language Processing (NLP) module, which uses a repository of documents related to the specific domain in which the user is interested. This NLP module accesses the repository of documents and returns a list of pairs (lexical entry, part of speech) that roughly represents the domain of the problem. On the other hand, the list of initial (root) terms can be manually specified.

The module also allows the user to expand the root terms using WordNet [12] and some of the relations it provides: hypernym, hyponym and synonym. The new terms added to the Golden Standard using these relations might also be extended again, and new terms can iteratively be added to the problem definition.

The final representation of the Golden Standard is defined as a set of terms T(LG, POS, LGP, R, Z) where:
• LG is the set of lexical entries defined for the Golden Standard.
• POS corresponds to the different Parts Of Speech considered by WordNet: noun, adjective, verb and adverb.
• LGP is the set of lexical entries of the Golden Standard that have been extended.
• R is the set of relations between terms of the Golden Standard: synonym, hypernym, hyponym and root (if a term has not been obtained by expansion, but is one of the initial terms).
• Z is an integer number that represents the depth or distance of a term to the root term from which it has been derived.

Examples:
T1 = ("genetics", NOUN, "", ROOT, 0). T1 is one of the root terms of the Golden Standard. The lexical entry that it represents is "genetics", its part of speech is "noun", it has not been expanded from any other term so its lexical parent is the empty string, its relation is "root", and its depth is 0.
T2 = ("biology", NOUN, "genetics", HYPERNYM, 1). T2 is a term expanded from "genetics" (T1). The lexical entry it represents is "biology", its part of speech is "noun", the lexical entry of its parent is "genetics", it has been expanded by the "hypernym" relation, and the number of relations that separate it from the root term T1 is 1.

Figure 2 shows the interface of the Golden Standard Definition phase. On the left side of the screen, the current list of root terms is shown. The user can manually insert new root terms into this list by giving their lexical entries and selecting their parts of speech. The correctness of these new insertions is controlled by verifying that all the considered lexical entries belong to the WordNet repository. As new terms are added, the final Golden Standard definition is immediately updated: the final list of (root and expanded) terms that represent the domain of the problem is shown at the bottom of the figure.
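Returning to the term representation above, the structure T(LG, POS, LGP, R, Z) and the WordNet check on new root terms can be sketched as follows. NLTK's WordNet interface stands in for the paper's WordNet repository, and the class and function names are illustrative, not WebCORE's actual code:

```python
# Illustrative encoding of a Golden Standard term T(L_G, POS, L_GP, R, Z),
# using NLTK's WordNet as the lexical repository (assumed stand-in).
from dataclasses import dataclass
from nltk.corpus import wordnet as wn  # needs nltk.download('wordnet') once

@dataclass(frozen=True)
class Term:
    lexical_entry: str  # the lexical entry itself (L_G)
    pos: str            # part of speech: wn.NOUN, wn.VERB, wn.ADJ or wn.ADV
    parent: str         # lexical entry it was expanded from; "" for roots (L_GP)
    relation: str       # "root", "synonym", "hypernym" or "hyponym" (R)
    depth: int          # expansions separating it from its root term (Z)

def add_root_term(entry: str, pos: str = wn.NOUN) -> Term:
    """Accept a new root term only if it belongs to the WordNet repository."""
    if not wn.synsets(entry, pos=pos):
        raise ValueError(f"'{entry}' is not in the WordNet repository")
    return Term(entry, pos, "", "root", 0)

t1 = add_root_term("genetics")                            # the T1 example
t2 = Term("biology", wn.NOUN, "genetics", "hypernym", 1)  # the T2 example
```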
The user can also perform term expansion using WordNet. He selects one of the terms from the Golden Standard definition and the system shows him all its meanings contained in WordNet (top of the figure). After he has chosen one of them, the system presents him three different lists with the synonyms, hyponyms and hypernyms of the term. The user can then select one or more elements of these lists and add them to the expanded term list. For each expansion, the depth of the new term is increased by one unit. This will be used later to measure the importance of the term within the Golden Standard: the greater the depth of the derived term with respect to its root term, the less its relevance will be.

3.1.2 Collaborative Problem Description
In the problem definition phase a collaborative component has been added to the system (right side of Figure 2). This component reads the term currently selected by the user and searches for all the stored problem definitions that contain it. For each of these problem definitions, the rest of their terms and the number of problems in which they appear are retrieved and shown in the web browser.

With this simple strategy the user is suggested the most popular terms, a fact that could help him better describe the domain in which he is interested. It is very often the case that a person has very specific goals or interests, but does not know how to correctly explain or describe them, nor how to effectively find solutions for them. With the retrieved terms, the user might discover new ways to describe the problem domain and obtain better solutions in the ontology recommendation phase.
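A minimal sketch of this co-occurrence strategy, with an in-memory list of past problem definitions standing in for WebCORE's storage and invented example data:

```python
# Sketch of the collaborative term suggestion: for the currently selected
# term, count how many stored problem definitions each co-occurring term
# appears in, and suggest the most popular ones first.
from collections import Counter

stored_definitions = [
    {"genetics", "gene", "chromosome", "biology"},
    {"genetics", "gene", "dna"},
    {"restaurant", "dish", "menu"},
]

def suggest_terms(selected: str) -> list[tuple[str, int]]:
    counts: Counter[str] = Counter()
    for terms in stored_definitions:
        if selected in terms:
            counts.update(terms - {selected})
    return counts.most_common()

print(suggest_terms("genetics"))
# e.g. [('gene', 2), ('chromosome', 1), ('biology', 1), ('dna', 1)]
```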


This somehow follows the ideas of the well-known folksonomies4. The term "folksonomy" is a combination of "folk" and "taxonomy", and was first used by Thomas Vander Wal [22] in a discussion on a mailing list about the system of organization developed in Delicious5 and Flickr6. It is associated with those information retrieval methodologies consisting of collaboratively generated, open-ended labels that categorize content. Although they suffer from problems of imprecision and ambiguity, techniques employing free-form tagging encourage users to organize information in their own ways and to actively interact with the system.

3.2 Automatic Ontology Recommendation
Once the user has selected the most appropriate set of terms to describe the problem domain, the tool performs the processes of ontology retrieval and ranking. These processes play a key role within the system, since they provide the first level of information to the user. To enhance the previous approaches of CORE, an adaptation of traditional Information Retrieval techniques has been integrated into the system. Our novel strategy for ontology retrieval can be seen as an evolution of classic keyword-based retrieval techniques [21], where textual documents are replaced by ontologies.

3.2.1 Query encoding and ontology retrieval
The queries supported by our model are expressed using the terms selected during the Golden Standard definition phase.

In classic keyword-based vector-space models for information retrieval [21], each of the query keywords is assigned a weight that represents the importance of the keyword in the information need expressed by the query, or its discriminating power for discerning relevant from irrelevant documents. Analogously, in our model, the terms included in the Golden Standard can be weighted to indicate the relative interest of the user in each of the terms being explicitly mentioned in the ontologies. In our system, these weights are automatically assigned considering the depth measure of each of the terms included in the Golden Standard.

Let T be the set of all terms defined in the Golden Standard definition phase. Let d_i be the depth measure associated with each term t_i ∈ T. Let q be the query vector extracted from the Golden Standard definition, and let w_i be the weight associated to each of these terms, where for each t_i ∈ T, w_i ∈ [0, 1]. Then, the weight w_i is calculated as:

$$w_i = \frac{1}{1 + d_i}$$

This measure gives more relevance to the terms explicitly expressed by the user, and less importance to those extended or derived from previously selected terms. An interesting future work could be to enhance and refine the query, e.g. based on term popularity, or on other more complex strategies such as term frequency analysis.

To carry out the process of ontology retrieval, the approach is focused on the lexical level, retrieving those ontologies that contain a subset of the terms expressed by the user during the Golden Standard definition. To compute the matching, two different options are available within the tool: search for exact matches, and search for matches based on the Levenshtein distance between two terms.

In both cases, the query execution returns a set of ontologies that satisfy user requirements. Considering that not all the retrieved ontologies fulfil the same level of satisfaction, it is the system's task to sort them and present the ranked list to the user.

Figure 2. WebCORE problem definition phase

4 Mathes, A. (2004). Folksonomies: Cooperative Classification and Communication through Shared Metadata. http://www.adammathes.com/academic/computer-mediated-communication/folksonomies.html
5 del.icio.us – social bookmarking, http://del.icio.us/
6 Flickr – photo sharing, http://www.flickr.com/
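The query weighting and the two matching modes of Section 3.2.1 admit a compact sketch. The fuzzy-match threshold below is an assumption, since the paper does not state one:

```python
# Sketch of query weighting and term matching. term_weight implements
# w_i = 1 / (1 + d_i); matches() supports the tool's two modes, exact
# matching and Levenshtein-based matching (distance threshold assumed).

def term_weight(depth: int) -> float:
    """Weight 1.0 for root terms (depth 0), decreasing with expansion depth."""
    return 1.0 / (1.0 + depth)

def levenshtein(a: str, b: str) -> int:
    """Edit distance by the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def matches(term: str, entity_label: str, fuzzy: bool, max_dist: int = 2) -> bool:
    if fuzzy:
        return levenshtein(term.lower(), entity_label.lower()) <= max_dist
    return term == entity_label

assert term_weight(0) == 1.0 and term_weight(1) == 0.5
assert matches("genetics", "Genetics", fuzzy=True)
```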


3.2.2 Ontology ranking
Once the list of ontologies is formed, the ontology-search engine computes a semantic similarity value between the query and each ontology as follows. We represent each ontology in the search space as an ontology vector o_j ∈ O, where o_ji is the mean of the term t_i similarities with all the matched entities in the ontology if any matching exists, and zero otherwise. The components o_ji are calculated as:

$$o_{ji} = \frac{\sum_{M_{ji}} w(m_{ji})}{\left|M_{ji}\right| \sum_{M_i} w(m_i)}$$

where M_ji is the set of matches of the term t_i in the ontology o_j, w(m_ji) represents the similarities between the term t_i and the entities of the ontology o_j that match it, M_i is the set of matches of the term t_i within all the ontologies, and w(m_i) represents the weights of each of these matches.

For example, if we define in the Golden Standard a term "acid", this term may return several matches in the same ontology with different entities, such as "acid", "amino acid", etc. In order to establish the appropriate weight in the ontology vector, o_ji, the goal is to compute the number of matches of one term in the whole repository of ontologies and give more relevance to those ontologies that have matched that specific term more times.

Due to the way in which the vector o_j is constructed, each component o_ji contains specific information about the similarity between the ontology and the corresponding term t_i. To compute the final similarity between the query vector q and the ontology vector o_j, the vectorial model calculates the cosine measure between both vectors. However, if we follow the traditional vectorial model, we will only be considering the difference between the query and the ontology vectors according to the angle they form, without taking their magnitudes into account. Thus, to overcome this limitation, the cosine measure used in the vectorial model has been replaced by the simple dot product. Hence, the similarity measure between an ontology o_j and the query q is simply computed as follows:

$$sim(q, o_j) = q \cdot o_j$$

3.2.3 Combination with Knowledge Base Retrieval
If the knowledge in the ontology is incomplete, the ontology ranking algorithm performs very poorly. Queries will return fewer results than expected, and the relevant ontologies will not be retrieved, or will get a much lower similarity value than they should. For instance, if there are ontologies about "restaurants", and "dishes" are expressed as instances in the corresponding Knowledge Base (KB), a user searching for ontologies in this domain may also be interested in the instances and literals contained in the KB. To cope with this issue, our ranking model combines the similarity obtained from the terms that belong to the ontology with the similarity obtained from the terms that belong to the KB, using the adaptation of the vector space model explained before.

On the other hand, the combination of the outputs of several search engines has been a widely addressed research topic in the Information Retrieval field [9]. After testing several approaches, we have selected the so-called CombMNZ strategy. This technique has been shown in prior work to be one of the simplest and most effective rank aggregation techniques, and consists of computing a combined ranking score by a linear combination of the input scores with additional factors that measure the relevance of each score in the final ranking. In our case, the relevancies of the scores, i.e., the relevancies of the similarity computation within the ontology and within the knowledge base, are given by the user.
The user can select a value v_i ∈ [1, 5] for each kind of search, and this value is then mapped to a corresponding weight s_i using the following normalization:

    s_i = v_i / 5

Following this idea, the final score is computed as:

    s_O · sim(q, o) + s_kb · sim(q, kb)

Figure 3. WebCORE system recommendation phase
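As a small illustration of this user-weighted combination (a minimal sketch; the function and parameter names are ours, and the default values of v_onto and v_kb merely stand for the two relevance values chosen in the interface):

    def combined_score(sim_onto, sim_kb, v_onto=5, v_kb=3):
        """Linear combination of ontology and KB similarities, with the
        user-chosen relevance values v in [1, 5] normalized to s = v / 5."""
        s_onto, s_kb = v_onto / 5.0, v_kb / 5.0
        return s_onto * sim_onto + s_kb * sim_kb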

For future work, we are considering setting s_i using statistical information about the knowledge contained in the ontologies, the knowledge contained in the KBs, and the information requested by the user during the Golden Standard definition phase.

Figure 3 shows the system recommendation interface. At the left side the user can select the matching methodology (fuzzy or exact), the search spaces (ontology entities and knowledge base entities), and the weight or importance given to each of the previously selected search spaces. In the right part the user can visualize the ontology and navigate across it. Finally, the middle of the interface presents the list of ontologies selected for the user to be evaluated during the collaborative evaluation phase.

3.3 Collaborative Ontology Evaluation
The third and last phase of the system consists of a novel ontology recommendation algorithm that exploits the advantages of Collaborative Filtering [1], exploring the manual evaluations stored in the system to rank the set of ontologies that best fulfils the user's interests.

In WebCORE, user evaluations are represented as a set of five different criteria [15] and their respective values, manually determined by the users who made the evaluations:

• Correctness: specifies whether the information stored in the ontology is true, independently of the domain of interest.
• Readability: indicates the non-ambiguous interpretation of the meaning of the concept names.
• Flexibility: points out the adaptability or capability of the ontology to change.
• Level of formality: highly informal, semi-informal, semi-formal, rigorously formal.
• Type of model: upper-level (for ontologies describing general, domain-independent concepts), core-ontologies (for ontologies that contain the most important concepts of a specific domain), domain-ontologies (for ontologies that broadly describe a domain), task-ontologies (for ontologies focused on generic types of tasks or activities) and application-ontologies (for ontologies describing a domain in an application-dependent manner).

The above criteria can have discrete numeric or non-numeric values. The user's interests are expressed as a subset of these criteria and their respective values, representing thresholds or restrictions to be satisfied by user evaluations. Thus, a numeric criterion will be satisfied if an evaluation value is equal to or greater than its interest threshold, while a non-numeric criterion will be satisfied only when the evaluation is exactly the given threshold (i.e., in a Boolean or yes/no manner).

According to both types of user evaluation and interest criteria, numeric and Boolean, the recommendation algorithm will measure the degree to which each user restriction is satisfied by the evaluations, and will recommend a ranked ontology list according to similarity measures between the thresholds and the collaborative evaluations. To create the final ranked ontology list the recommender module follows two phases. In the first one it calculates the similarity degrees between all the user evaluations and the specified user interest criteria thresholds. In the second one it combines the similarity measures of the evaluations, generating the overall rankings of the ontologies.

Figure 4 shows all the previous definitions and ideas, locating them in the graphical interface of the system. On the left side of the screen, the user introduces the thresholds for the recommendations and obtains the final collaborative ontology ranking.
On the right side, the user adds new evaluations for the ontologies and checks the evaluations given by the rest of the users.

3.3.1 Collaborative Evaluation Measures
As mentioned before, a user evaluates an ontology considering five different criteria that can be divided into two groups: a) numeric criteria ('correctness', 'readability' and 'flexibility'), which take discrete numeric values 1 to 5, where 1 means the ontology does not fulfil the criterion and 5 means the ontology completely satisfies it; and b) Boolean criteria ('level of formality' and 'type of model'), which are represented by specific non-numeric values that may or may not be satisfied by the ontology.

Figure 4. WebCORE user evaluation phase
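To make the two criterion groups concrete, the fragment below sketches how interests and their satisfaction test could be represented (the names and data layout are hypothetical, not WebCORE's actual data model):

    NUMERIC = {"correctness", "readability", "flexibility"}   # values in 1..5
    BOOLEAN = {"level of formality", "type of model"}         # categorical values

    def satisfies(criterion, threshold, evaluation_value):
        """A numeric threshold is met by any equal-or-greater evaluation;
        a Boolean threshold only by an exact match."""
        if criterion in NUMERIC:
            return evaluation_value >= threshold
        return evaluation_value == threshold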


Taking into account the previous definitions, user interests are a subset of the above criteria and their respective values, representing the set of thresholds that should be reached by the ontologies. Given a set of user interests, the system examines all the stored evaluations and calculates their similarity measures.

To explain these similarities we shall use a simple example with six different evaluations (E1, E2, E3, E4, E5 and E6) of a given ontology. In the explanation we distinguish between the numeric and the Boolean criteria. We start with the Boolean ones, assuming two different criteria, C1 and C2, with three possible values: "A", "B" and "C". In Table 2 we show the threshold values established by a user for these two criteria, "A" for C1 and "B" for C2, and the six evaluations stored in the system.

Table 2. Thresholds and evaluations for Boolean criteria C1 and C2
Criteria   Threshold   E1    E2    E3    E4    E5    E6
C1         "A"         "A"   "B"   "A"   "C"   "A"   "B"
C2         "B"         "A"   "A"   "B"   "C"   "A"   "A"

In this case, the threshold of a criterion n is either satisfied or not by a given evaluation m, so the corresponding similarity measure is simply 2 if the evaluation and the threshold have the same value, and 0 otherwise:

    similarity_bool(criterion_mn) = { 2  if evaluation_mn = threshold_n
                                      0  if evaluation_mn ≠ threshold_n }

The similarity results for the Boolean criteria of the example are shown in Table 3.

Table 3. Similarity values for Boolean criteria C1 and C2
Criteria   Threshold   E1   E2   E3   E4   E5   E6
C1         "A"         2    0    2    0    2    0
C2         "B"         0    0    2    0    0    0

For the numeric criteria, the evaluations can exceed the thresholds to different degrees. Table 4 shows the thresholds established for criteria C3, C4 and C5, and their six available evaluations. Note that E1, E2, E3 and E4 satisfy all the criteria, while E5 and E6 do not reach some of the corresponding thresholds.

Table 4. Thresholds and evaluations for numeric criteria C3, C4 and C5
Criteria   Threshold   E1   E2   E3   E4   E5   E6
C3         ≥ 3         3    4    5    5    2    0
C4         ≥ 0         0    1    4    5    0    0
C5         ≥ 5         5    5    5    5    4    0

In this case, the similarity measure has to take into account two different issues: the degree of satisfaction of the threshold, and the difficulty of achieving its value. Thus, the similarity between the value of criterion n in evaluation m and the threshold of interest is divided into two factors: 1) a similarity factor that considers whether the threshold is surpassed or not, and 2) a penalty factor that penalizes those thresholds that are easier to satisfy:

    similarity_num(criterion_mn) = 1 + similarity*_num(criterion_mn) · penalty_num(threshold_n) ∈ [0, 2]

This measure also returns values between 0 and 2. The idea of returning a similarity value between 0 and 2 is inspired by other collaborative matching measures [18]: it avoids negative numbers and facilitates, as we shall show in the next subsection, a coherent calculation of the final ontology rankings.

The similarity assessment is based on the distance between the value of the criterion n in the evaluation m and the threshold indicated in the user's interests for that criterion. The more the value of the criterion n in evaluation m exceeds the threshold, the greater the similarity value.
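A direct transcription of the Boolean measure (a sketch with illustrative names; the printed values reproduce the C1 row of Table 3):

    def similarity_bool(evaluation, threshold):
        """Boolean-criterion similarity: 2 on an exact match, 0 otherwise."""
        return 2 if evaluation == threshold else 0

    # Threshold "A" for C1 against the evaluations E1..E6 of Table 2:
    print([similarity_bool(e, "A") for e in ["A", "B", "A", "C", "A", "B"]])
    # -> [2, 0, 2, 0, 2, 0]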
Specifically, following the expression below, if the difference dif = (evaluation – threshold) is equal to or greater than 0, we assign a positive similarity in (0, 1] that depends on the maximum difference maxDif = (maxValue – threshold) achievable with the given threshold; otherwise, if the difference dif is lower than 0, we give a negative similarity in [-1, 0), punishing the distance of the value from the threshold:

    similarity*_num(criterion_mn) = { (1 + dif) / (1 + maxDif) ∈ (0, 1]   if dif ≥ 0
                                      dif / threshold ∈ [-1, 0)           if dif < 0 }

Table 5 summarizes the similarity* values for the three numeric criteria and the six evaluations of the example.

Table 5. Similarity* values for numeric criteria C3, C4 and C5
Criteria   Threshold   E1     E2     E3     E4     E5     E6
C3         ≥ 3         1/4    2/4    3/4    3/4    -1/3   -1
C4         ≥ 0         1/6    2/6    5/6    1      1/6    1/6
C5         ≥ 5         1      1      1      1      -1/5   -1

Comparing the evaluation values of Table 4 with the similarity values of Table 5, the reader may notice several important facts:

1. Evaluation E4 satisfies criteria C4 and C5 with evaluations of 5. Applying the above expression, these criteria receive the same similarity of 1. However, criterion C4 has a threshold of 0, and C5 has a threshold equal to 5. As it is more difficult to satisfy the restriction imposed on C5, it should have a greater influence on the final ranking.

2. Evaluation E6 gives a value of 0 to criteria C3 and C5, satisfying neither of them and generating the same similarity value of -1. Again, because of their different thresholds, we should distinguish their corresponding degrees of relevance in the rankings.

For these reasons, a threshold penalty is applied, reflecting how difficult it is to overcome the given thresholds. The easier a threshold is to satisfy, the lower its penalty factor, and hence the smaller its weight in the final similarity:

    penalty_num(threshold_n) = (1 + threshold_n) / (1 + maxValue) ∈ (0, 1]

Table 6 shows the threshold penalty values for the three numeric criteria and the six evaluations of the example.
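The numeric measure can be transcribed just as directly. The sketch below assumes, as in the example, that numeric criteria range over 0..5 (maxValue = 5), and uses exact fractions so the output can be checked against Table 5; the names are ours:

    from fractions import Fraction

    MAX_VALUE = 5  # maximum value of a numeric criterion in the example

    def similarity_star(evaluation, threshold):
        """Raw satisfaction factor in [-1, 1] (Table 5)."""
        dif = evaluation - threshold
        if dif >= 0:
            return Fraction(1 + dif, 1 + (MAX_VALUE - threshold))  # in (0, 1]
        # dif < 0 implies threshold > 0, so there is no division by zero here.
        return Fraction(dif, threshold)                            # in [-1, 0)

    def penalty(threshold):
        """Factor that down-weights thresholds that are easy to satisfy (Table 6)."""
        return Fraction(1 + threshold, 1 + MAX_VALUE)              # in (0, 1]

    def similarity_num(evaluation, threshold):
        """Final numeric-criterion similarity in [0, 2] (Table 7)."""
        return 1 + similarity_star(evaluation, threshold) * penalty(threshold)

    # The C4 (threshold >= 0) row of Table 5, for the evaluations E1..E6 of Table 4
    # (Fraction(1, 3) is the 2/6 printed in the table):
    print([similarity_star(e, 0) for e in [0, 1, 4, 5, 0, 0]])
    # -> [Fraction(1, 6), Fraction(1, 3), Fraction(5, 6), Fraction(1, 1),
    #     Fraction(1, 6), Fraction(1, 6)]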


Table 6. Threshold penalty values for numeric criteria C3, C4 and C5
Criteria   Threshold   E1    E2    E3    E4    E5    E6
C3         ≥ 3         4/6   4/6   4/6   4/6   4/6   4/6
C4         ≥ 0         1/6   1/6   1/6   1/6   1/6   1/6
C5         ≥ 5         1     1     1     1     1     1

Finally, the similarity results for the numeric criteria of the example are shown in Table 7.

Table 7. Similarity values for numeric criteria C3, C4 and C5
Criteria   Threshold   E1     E2     E3     E4     E5     E6
C3         ≥ 3         1.17   1.33   1.5    1.5    0.78   0.33
C4         ≥ 0         1.03   1.05   1.14   1.17   1.03   1.03
C5         ≥ 5         2      2      2      2      0.5    0

As a preliminary approach, we calculate the similarity between an ontology evaluation and the user's requirements as the average of its N criteria similarities:

    similarity(evaluation_m) = (1/N) Σ_{n=1}^{N} similarity(criterion_mn)

A weighted average could be even more appropriate, and might make the collaborative recommender module more sophisticated and adjustable to user needs. This will be considered as a possible enhancement of the system in the continuation of our research.

3.3.2 Collaborative Ontology Ranking
Once the similarities are calculated taking into account the user's interests and the evaluations stored in the system, a ranking is assigned to the ontologies. The ranking of a specific ontology is measured as the average of its M evaluation similarities. Again, we do not consider different priorities for the evaluations of different users. We have planned to include in the system personalized user appreciations of the opinions of the rest of the users; thus, for a certain user, some evaluations will have more relevance than others, according to the users who made them.

    ranking(ontology) = (1/M) Σ_{m=1}^{M} similarity(evaluation_m)
                      = (1/(M·N)) Σ_{m=1}^{M} Σ_{n=1}^{N} similarity(criterion_mn)

Finally, in case of ties, the collaborative ranking mechanism sorts the ontologies taking into account not only the average similarity between the ontologies and the evaluations stored in the system, but also the number of evaluations of each ontology relative to the total number of evaluations, M_total, thus giving more relevance to those ontologies that have been rated more times:

    ranking(ontology) · (M / M_total)

4. EXPERIMENTS
In this section, we present some early experiments that attempt to measure: a) the gain in efficiency and effectiveness, and b) the increase in users' satisfaction obtained with the use of our system when searching ontologies within a specific domain.

The scenario of the experiments was the following. A repository of thirty ontologies was considered and eighteen subjects participated in the evaluations. They were Computer Science Ph.D. students of our department, all of them with some expertise in modeling and exploitation of ontologies. They were asked to search and evaluate ontologies with WebCORE in three different tasks. For each task and each student, one of the following problem domains was selected:

• Family. Search for ontologies including family members: mother, father, daughter, son, etc.
• Genetics. Search for ontologies containing specific vocabulary of Genetics: genes, proteins, amino acids, etc.
• Restaurant. Search for ontologies with vocabulary related to restaurants: food, drinks, waiters, etc.

In the repository, there were six different ontologies related to each of the above domains, and twelve ontologies describing other, unrelated knowledge areas. No information about the domains and the existing ontologies was given to the students.
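Putting the two aggregation steps together, a sketch of the ranking computation (illustrative names; the tie-breaking factor follows the M / M_total reading of the formula above):

    def evaluation_similarity(criterion_similarities):
        """Average of an evaluation's N per-criterion similarities."""
        return sum(criterion_similarities) / len(criterion_similarities)

    def ontology_ranking(evaluations, total_evaluations):
        """Mean similarity over the ontology's M stored evaluations, plus the
        tie-breaking score that favours ontologies rated more often."""
        m = len(evaluations)
        base = sum(evaluation_similarity(e) for e in evaluations) / m
        return base, base * m / total_evaluations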
Tasks 1 and 2 were performed first without the help of the collaborative modules of the system, i.e., the term recommender of the problem definition phase and the collaborative ranking of the user evaluation phase. After all users had finished these ontology searches and evaluations, task 3 was done with the collaborative components activated. For each task and each student, we measured the time spent and the number of ontologies retrieved and selected ('reused'). We also asked the users about their satisfaction with each of the selected ontologies and with the collaborative modules, on a 1-5 rating scale.

Tables 8 and 9 contain a summary of the obtained results. Note that the measures of task 1 are not shown: we decided not to consider them for evaluation purposes because we regard the first task as a learning stage in the use of the tool, so its execution times and numbers of selected ontologies would be skewed, non-objective measures.

To evaluate the enhancements in terms of efficiency and effectiveness, we present in Table 8 the average number of reused ontologies and the average execution times for tasks 2 and 3. The results show a significant improvement when the collaborative modules of the system were activated. In all the cases, the students made use of the terms and evaluations suggested by others, accelerating the processes of problem definition and relevant ontology retrieval.

Table 8. Average number of reused ontologies and execution times (in minutes) for tasks 2 and 3
                      Task 2 (without          Task 3 (with             % improvement
                      collaborative modules)   collaborative modules)
# reused ontologies   3.45                     4.35                     26.08
execution time        9.3                      7.1                      23.8


On the other hand, Table 9 shows the average degrees of satisfaction revealed by the users about the retrieved ontologies and the collaborative modules. Again, the results evidence the positive impact of our approach.

Table 9. Average satisfaction values (1-5 rating scale) for ontologies reused in tasks 2 and 3, collaborative recommendations and rankings
Task 2   Task 3   % improvement   Initial term recommendation   Final ontology ranking
3.34     3.56     6.58            4.7                           4.4

5. CONCLUSIONS AND FUTURE WORK
In this paper, a web application for ontology evaluation and reuse has been presented. The novel aspects of our proposal include the use of WordNet to help users define the Golden Standard; a new ontology retrieval technique based on traditional Information Retrieval models; rank fusion techniques to combine different ontology evaluation measures; and two collaborative modules: one that suggests the most popular terms for a given domain, and one that recommends lists of ontologies with a multi-criteria strategy that takes into account user opinions about ontology features that can only be assessed by humans.

6. ACKNOWLEDGMENTS
This research was supported by the Spanish Ministry of Science and Education (TIN2005-06885 and FPU program).

7. REFERENCES
[1] Adomavicius, G., and Tuzhilin, A.: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. IEEE Transactions on Knowledge and Data Engineering 17(6): 734-749, 2005.
[2] Alani, H., and Brewster, C.: Metrics for Ranking Ontologies. Proceedings of the 4th Int. Workshop on Evaluation of Ontologies for the Web (EON'06), at the 15th Int. World Wide Web Conference (WWW'06). Edinburgh, UK, 2006.
[3] Alani, H., Brewster, C., and Shadbolt, N.: Ranking Ontologies with AKTiveRank. Proc. of the 5th Int. Semantic Web Conference (ISWC'06). Athens, Georgia, USA, 2006.
[4] Brank, J., Grobelnik, M., and Mladenic, D.: A Survey of Ontology Evaluation Techniques. Proceedings of the 4th Conference on Data Mining and Data Warehouses (SiKDD'05), at the 7th Int. Multi-conference on Information Society (IS'05). Ljubljana, Slovenia, 2005.
[5] Brewster, C., Alani, H., Dasmahapatra, S., and Wilks, Y.: Data driven ontology evaluation. Proc. of the 4th Int. Conf. on Language Resources and Evaluation (LREC'04). Lisbon, Portugal, 2004.
[6] Ding, Y., and Fensel, D.: Ontology Library Systems: The key to successful Ontology Reuse. Proc. of the 1st Semantic Web Working Symposium (SWWS'01). Stanford, CA, USA, 2001.
[7] Farquhar, A., Fikes, R., and Rice, J.: The Ontolingua server: A tool for collaborative ontology construction. Technical report, Stanford KSL 96-26, 1996.
[8] Fernández, M., Cantador, I., and Castells, P.: CORE: A Tool for Collaborative Ontology Reuse and Evaluation. Proceedings of the 4th Int. Workshop on Evaluation of Ontologies for the Web (EON'06), at the 15th Int. World Wide Web Conference (WWW'06). Edinburgh, UK, 2006.
[9] Lee, J. H.: Analysis of multiple evidence combination. Proceedings of the 20th ACM Int. Conference on Research and Development in IR (SIGIR'97). New York, 1997.
[10] Lozano-Tello, A., and Gómez-Pérez, A.: Ontometric: A method to choose the appropriate ontology. Journal of Database Management, 15(2): 1-18, 2004.
[11] Maedche, A., and Staab, S.: Measuring similarity between ontologies. Proceedings of the 13th European Conference on Knowledge Acquisition and Management (EKAW 2002). Madrid, Spain, 2002.
[12] Miller, G. A.: WordNet: A lexical database for English.
New horizons in commercial and industrial Artificial Intelligence. Communications of the Association for Computing Machinery, 38(11): 39-41, 1995.
[13] Montaner, M., López, B., and De la Rosa, J. L.: A Taxonomy of Recommender Agents on the Internet. Artificial Intelligence Review 19: 285-330, 2003.
[14] Noy, N. F., Chugh, A., Liu, W., and Musen, M. A.: A Framework for Ontology Evolution in Collaborative Environments. Proceedings of the 5th Int. Semantic Web Conference (ISWC'06). Athens, Georgia, USA, 2006.
[15] Paslaru, E.: Using Context Information to Improve Ontology Reuse. Doctoral Workshop at the 17th Conference on Advanced Information Systems Engineering (CAiSE'05). Porto, Portugal, 2005.
[16] Porzel, R., and Malaka, R.: A task-based approach for ontology evaluation. Proc. of the 16th European Conference on Artificial Intelligence (ECAI'04). Valencia, Spain, 2004.
[17] Protégé OWL ontology Repository. http://protege.stanford.edu/download/ontologies.html
[18] Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., and Riedl, J.: GroupLens: An Open Architecture for Collaborative Filtering of Netnews. Internal Research Report, MIT Center for Coordination Science, 1994.
[19] Sabou, M., López, V., Motta, E., and Uren, V.: Ontology Evaluation on the Real Semantic Web. Proceedings of the 4th Int. Workshop on Evaluation of Ontologies for the Web (EON'06), at the 15th Int. World Wide Web Conference (WWW'06). Edinburgh, UK, 2006.
[20] Sabou, M., López, V., Motta, E., and Uren, V.: Ontology Selection for the Real Semantic Web: How to cover the Queen's Birthday Dinner? Proc. of the 15th International Conference on Knowledge Engineering and Knowledge Management (EKAW'06). Podebrady, Czech Republic, 2006.
[21] Salton, G., and McGill, M.: Introduction to Modern Information Retrieval. McGraw-Hill, New York, 1983.
[22] Smith, G.: Atomiq: Folksonomy: Social Classification. 2004. http://atomiq.org/archives/2004/08/folksonomy_social_classification.html
[23] Sure, Y., Erdmann, M., Angele, J., Staab, S., Studer, R., and Wenke, D.: OntoEdit: Collaborative Ontology Development for the Semantic Web. Proceedings of the 1st International Semantic Web Conference (ISWC'02). Sardinia, Italy, 2002.
[24] Swoogle - Semantic Web Search Engine. http://swoogle.umbc.edu
