Table 6. Threshold penalty values for numeric criteria C3, C4 and C5

                             Evaluations
Criteria   Thresholds   E1    E2    E3    E4    E5    E6
C3         ≥ 3          4/6   4/6   4/6   4/6   4/6   4/6
C4         ≥ 0          1/6   1/6   1/6   1/6   1/6   1/6
C5         ≥ 5          1     1     1     1     1     1

Finally, the similarity results for the numeric criteria of the example are shown in Table 7.

Table 7. Similarity values for numeric criteria C3, C4 and C5

                             Evaluations
Criteria   Thresholds   E1     E2     E3     E4     E5     E6
C3         ≥ 3          1.17   1.33   1.5    1.5    0.78   0.33
C4         ≥ 0          1.03   1.05   1.14   1.17   1.03   1.03
C5         ≥ 5          2      2      2      2      0.5    0

As a preliminary approach, we calculate the similarity between an ontology evaluation and the user's requirements as the average of its N criteria similarities:

$similarity(evaluation_m) = \frac{1}{N} \sum_{n=1}^{N} similarity(criterion_{m,n})$

A weighted average could be even more appropriate, and might make the collaborative recommender module more sophisticated and adjustable to user needs. This will be considered for a possible enhancement of the system in the continuation of our research.
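As a sketch of this unweighted aggregation (the function name `evaluation_similarity` and the worked numbers are illustrative, not part of WebCORE's code), the snippet below averages the per-criterion similarities of one stored evaluation, here restricted to the three numeric-criterion values of evaluation E1 from Table 7:

```python
from typing import List

def evaluation_similarity(criterion_similarities: List[float]) -> float:
    """Similarity of one stored evaluation to the user's requirements:
    the plain (unweighted) average of its N per-criterion similarities."""
    if not criterion_similarities:
        raise ValueError("an evaluation needs at least one criterion similarity")
    return sum(criterion_similarities) / len(criterion_similarities)

# Evaluation E1, restricted to the numeric criteria C3, C4 and C5 of Table 7
print(round(evaluation_similarity([1.17, 1.03, 2]), 2))  # -> 1.4
```

Replacing the plain mean with a weighted one (per-criterion weights supplied by the user) would realize the weighted variant mentioned above.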
3.3.2 Collaborative Ontology Ranking
Once the similarities are calculated taking into account the user's interests and the evaluations stored in the system, a ranking is assigned to the ontologies. The ranking of a specific ontology is measured as the average of its M evaluation similarities. Again, we do not consider different priorities in the evaluations of several users. We plan to include in the system personalized user appreciations about the opinions of the rest of the users. Thus, for a certain user, some evaluations will have more relevance than others, according to the users that made them.

$ranking(ontology) = \frac{1}{M} \sum_{m=1}^{M} similarity(evaluation_m) = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} similarity(criterion_{m,n})$

Finally, in case of ties, the collaborative ranking mechanism sorts the ontologies taking into account not only the average similarity between the ontologies and the evaluations stored in the system, but also the total number of evaluations of each ontology, thus giving more relevance to those ontologies that have been rated more times:

$ranking(ontology) \cdot \frac{M}{M_{total}}$
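To illustrate the ranking and the tie-breaking rule, here is a minimal sketch; the dictionary layout and the names `ranking_score` and `rank_ontologies` are assumptions made for this example, not WebCORE's actual interfaces:

```python
from typing import Dict, List

def ranking_score(evaluation_similarities: List[float]) -> float:
    """Ranking of one ontology: the average of its M evaluation similarities."""
    return sum(evaluation_similarities) / len(evaluation_similarities)

def rank_ontologies(evals: Dict[str, List[float]]) -> List[str]:
    """Sort ontologies by ranking score, breaking ties in favour of
    ontologies that have been evaluated more times (score scaled by M / M_total)."""
    m_total = sum(len(sims) for sims in evals.values())  # total evaluations in the system
    return sorted(
        evals,
        key=lambda o: (ranking_score(evals[o]),                             # primary criterion
                       ranking_score(evals[o]) * len(evals[o]) / m_total),  # tie-break
        reverse=True,
    )

# B and C tie on average similarity (0.75), but C has more evaluations and wins the tie
scores = {"A": [0.9], "B": [0.5, 1.0], "C": [0.75, 0.75, 0.75]}
print(rank_ontologies(scores))  # -> ['A', 'C', 'B']
```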
4. EXPERIMENTS
In this section, we present some early experiments that attempt to measure: a) the gain in efficiency and effectiveness, and b) the increase in users' satisfaction obtained with the use of our system when searching ontologies within a specific domain.

The scenario of the experiments was the following. A repository of thirty ontologies was considered and eighteen subjects participated in the evaluations. They were Computer Science Ph.D. students of our department, all of them with some expertise in modeling and exploitation of ontologies. They were asked to search and evaluate ontologies with WebCORE in three different tasks. For each task and each student, one of the following problem domains was selected:

• Family. Search for ontologies including family members: mother, father, daughter, son, etc.
• Genetics. Search for ontologies containing specific vocabulary of Genetics: genes, proteins, amino acids, etc.
• Restaurant. Search for ontologies with vocabulary related to restaurants: food, drinks, waiters, etc.

In the repository, there were six different ontologies related to each of the above domains, and twelve ontologies describing other, unrelated knowledge areas. No information about the domains and the existing ontologies was given to the students.

Tasks 1 and 2 were performed first without the help of the collaborative modules of the system, i.e., the term recommender of the problem definition phase and the collaborative ranking of the user evaluation phase. After all users had finished those ontology searches and evaluations, task 3 was done with the collaborative components activated. For each task and each student, we measured the time spent, and the number of ontologies retrieved and selected ('reused'). We also asked the users about their satisfaction (on a 1-5 rating scale) with each of the selected ontologies and with the collaborative modules.

Tables 8 and 9 contain a summary of the obtained results. Note that the measures of task 1 are not shown. We decided not to consider them for evaluation purposes because we regard the first task as a learning process in the use of the tool, and its execution times and numbers of selected ontologies as skewed, non-objective measures.

To evaluate the enhancements in terms of efficiency and effectiveness, we present in Table 8 the average number of reused ontologies and the average execution times for tasks 2 and 3. The results show a significant improvement when the collaborative modules of the system were activated. In all the cases, the students made use of the terms and evaluations suggested by others, accelerating the processes of problem definition and relevant ontology retrieval.

Table 8. Average number of reused ontologies and execution times (in minutes) for tasks 2 and 3

                       Task 2 (without          Task 3 (with             % improvement
                       collaborative modules)   collaborative modules)
# reused ontologies    3.45                     4.35                     26.08
execution time         9.3                      7.1                      23.8