

Where are the Semantics in the Semantic Web?✤

Michael Uschold
The Boeing Company
PO Box 3707 MS 7L-40, Seattle, WA 98124, USA
+1 425 865-3605
michael.f.uschold@boeing.com

ABSTRACT

The most widely accepted defining feature of the Semantic Web is machine-usable content. By this definition, the Semantic Web is already manifest in shopping agents that automatically access and use Web content to find the lowest air fares or book prices. But where are the semantics? Most people regard the Semantic Web as a vision, not a reality—so shopping agents should not “count”. To use Web content, machines need to know what to do when they encounter it. This, in turn, requires the machine to “know” what the content means (i.e., its semantics). The challenge of developing the Semantic Web is how to put this knowledge into the machine. The manner in which this is done is at the heart of the confusion about the Semantic Web. The goal of this paper is to clear up some of this confusion. We proceed by describing a variety of meanings of the term “semantics”, noting various things that can be said to have semantics of various kinds. We introduce a semantic continuum ranging from implicit semantics, which are only in the heads of the people who use the terms, to formal semantics for machine processing. We list some core requirements for enabling machines to use Web content, and we consider various issues such as hardwiring, agreements, clarity of semantics specifications, and public declarations of semantics. In light of these requirements and issues, in conjunction with our semantic continuum, it is useful to collectively regard shopping agents as a degenerate case of the Semantic Web. Shopping agents work in the complete absence of any explicit account of the semantics of Web content because the meaning of the Web content that the agents are expected to encounter can be determined by the human programmers who hardwire it into the Web application software.
We note various shortcomings of this approach, which give rise to some ideas about how the Semantic Web should evolve. We argue that this evolution will take place by (1) moving along the semantic continuum from implicit semantics to formal semantics for machine processing, (2) reducing the amount of Web content semantics that is hardwired, (3) increasing the amount of agreements and standards, and (4) developing semantic mapping and translation capabilities where differences remain.

Keywords: Semantic Web, Software Agents, Semantic Heterogeneity, Ontologies

✤ The content of this paper was first presented as an invited talk at the Ontologies in Agent Systems workshop held at the Autonomous Agents Conference in Montreal, June 2001. This paper is a significantly revised and extended version of a short paper that appeared in a special issue of the Knowledge Engineering Review for papers from that workshop.


Final Draft Submitted to AI Magazine

1 Introduction

The current evolution of the Web can be characterized from various perspectives [Jasper & Uschold 2001]:

• Locating Resources: The way people find things on the Web is evolving from simple free text and keyword search to more sophisticated semantic techniques, both for search and navigation.

• Users: Web resources are evolving from being primarily intended for human consumption to being intended for use both by humans and machines.

• Web Tasks and Services: The Web is evolving from being primarily a place to find things to being a place to do things as well [Smith 2001].

All of these new capabilities for the Web depend in a fundamental way on the idea of semantics. This gives rise to a fourth perspective along which the Web evolution may be viewed:

• Semantics: The Web is evolving from containing information resources that have little or no explicit semantics to having a rich semantic infrastructure.

Despite the widespread use of the term “Semantic Web,” it does not yet exist except in isolated environments, mainly in research labs. In the W3C Semantic Web Activity Statement we are told that:

“The Semantic Web is a vision: the idea of having data on the Web defined and linked in a way that it can be used by machines not just for display purposes, but for automation, integration and reuse of data across various applications.” [W3C 2001] [emphasis mine]

As envisioned by Tim Berners-Lee:

“The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.” [Berners-Lee et al 2001] [emphasis mine]

“[S]omething has semantics when it can be ‘processed and understood by a computer,’ such as how a bill can be processed by a package such as Quicken.” [Trippe 2001]

There is no widespread agreement on exactly what the Semantic Web is, nor exactly what it is for.
From the above descriptions, there is clear emphasis on the information content of the Web being:

• machine usable, and
• associated with more meaning.

Note that “machine” refers to computers (or computer programs) that perform tasks on the Web. These programs are commonly referred to as software agents, or softbots, and are found in Web applications. Machine-usable content presumes that the machine knows what to do with information on the Web. One way for this to happen is for the machine to read and process a machine-sensible specification of the semantics of the information. This is a robust and very challenging approach, and largely beyond the current state of the art. A much simpler alternative is for the human Web application developers to hardwire the knowledge into the software so that when the machine runs the software, it does the correct thing with the information. In this second situation, machines already use information on the Web. There are electronic broker agents in routine use that make use of the meaning associated with Web content words such as “price,” “weight,” “destination,” and “airport,” to name a few. Armed with a built-in “understanding” of these terms, these so-called shopping agents automatically peruse the Web to find sites with the lowest price for a book or the lowest air fare between two given cities. So, we still lack an adequate characterization of what distinguishes the future Semantic Web from what exists today.

Because RDF (Resource Description Framework) [W3C 1999] is hailed by the W3C as a Semantic Web language, some people seem to have the view that if an application uses RDF, then it is a Semantic Web application. This is reminiscent of the “If it is programmed in Lisp or Prolog, then it must be AI” sentiment


that was sometimes evident in the early days of Artificial Intelligence. There is also confusion about what constitutes a legitimate Semantic Web application. Some seem to have the view that an RDF tool such as CWM [1] is one. This is true only in the same sense that KEE and ART were AI applications. They were certainly generating income for the vendors, but that is different from the companies using the tools to develop applications that help their bottom line. The lack of an adequate definition of the Semantic Web, however, is no reason to stop pursuing its development, any more than an inadequate definition of AI was a reason to cease AI research. Quite the opposite: new ideas always need an incubation period.

The research community, industrial participants, and software vendors are working with the W3C to make the Semantic Web vision a reality ([Berners-Lee et al 2001], [DAML 2001], [W3C 2001]). It will be layered, extensible, and composable. A major part of this will entail representing and reasoning with semantic metadata, and/or providing semantic markup in the information resources. Fundamental to the semantic infrastructure are ontologies, knowledge bases, and agents, along with inference, proof, and sophisticated semantic querying capability. The main intent of the Semantic Web is to give machines much better access to information resources so they can be information intermediaries in support of humans. According to the vision described in [Berners-Lee et al 2001], agents will be pervasive on the Web, carrying out a multitude of everyday tasks. Hendler describes many of the important technical issues that this entails, emphasizing the interdependence of agent technology and ontologies [Hendler 2001]. In order to carry out their required tasks, intelligent agents must communicate and understand meaning.
They must advertise their capabilities, and recognize the capabilities of other agents. They must locate meaningful information resources on the Web and combine them in meaningful ways to perform tasks. They need to recognize, interpret, and respond to communication acts from other agents. In other words, when agents communicate with each other, there needs to be some way to ensure that the meaning of what one agent “says” is accurately conveyed to the other agent. There are two extremes, in principle, for handling this problem. The simplest (and perhaps the most common) approach is to ignore the problem altogether. That is, just assume that all agents are using the same terms to mean the same things. In practice, this will usually be an assumption built into the application. The assumption could be implicit and informal, or it could be an explicit agreement among all parties to commit to using the same terms in a pre-defined manner. This only works, however, when one has full control over what agents exist and what they might communicate. In reality, agents need to interact in a much wider world, where it cannot be assumed that other agents will use the same terms, or if they do, it cannot be assumed that the terms will mean the same thing. The moment we accept the problem and grant that agents may not use the same terms to mean the same things, we need a way for an agent to discover what another agent means when it communicates. In order for this to happen, every agent will need to publicly declare exactly what terms it is using and what they mean. This specification is commonly referred to as the agent’s ontology [Gruber 1993]. If it were written only for people to understand, this specification could be just a glossary. However, meaning must be accessible to other software agents. This requires that the meaning be encoded in some kind of formal language.
This will enable a given agent to use automated reasoning to accurately determine the meaning of other agents’ terms. For example, suppose Agent 1 sends a message to Agent 2 and in this message is a pointer to Agent 1’s ontology. Agent 2 can then look in Agent 1’s ontology to see what the terms mean, the message is successfully communicated, and the agent’s task is successfully performed. At least this is the theory. In practice there is a plethora of difficulties. The holy grail is for this to happen consistently, reliably, and fully automatically. Most of these difficulties arise from various sources of heterogeneity. For example, there are many different ontology representation languages, different modeling styles and inconsistent use of terminology, to name a few. This is explored further in section 3.

[1] Closed World Machine: http://infomesh.net/2001/cwm/
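The Agent 1/Agent 2 exchange above can be sketched as follows. Everything here is invented for illustration (the URIs, the message shape, the term definitions), and the "definitions" are glossary-style strings rather than the formal encoding the text calls for, so this sketch sits at the informal end of the spectrum:

```python
# Sketch of the ontology-pointer idea: Agent 1's message carries a pointer
# to its ontology; Agent 2 dereferences the pointer to learn what Agent 1's
# terms mean before acting. All names and URIs below are hypothetical.

# Stand-in for ontologies published on the Web, keyed by URI.
ONTOLOGIES = {
    "http://agent1.example/ontology": {
        "fare": "total price in USD, taxes included",
        "hop": "one flight segment between two airports",
    },
}

def send(ontology_uri, term, value):
    """Agent 1: a message is content plus a pointer to the sender's ontology."""
    return {"ontology": ontology_uri, "term": term, "value": value}

def interpret(message):
    """Agent 2: look the term up in the sender's ontology before acting."""
    ontology = ONTOLOGIES[message["ontology"]]  # dereference the pointer
    meaning = ontology.get(message["term"])
    if meaning is None:
        raise ValueError(f"unknown term: {message['term']}")
    return meaning, message["value"]

msg = send("http://agent1.example/ontology", "fare", 412.0)
meaning, value = interpret(msg)
print(meaning)  # total price in USD, taxes included
```

The point of the sketch is the dereference step: Agent 2 needs no built-in knowledge of Agent 1's vocabulary. As the text notes, glossary strings like these only help humans; machine processing would require the definitions to be stated in a formal language, which is where the difficulties of heterogeneous representation languages and modeling styles arise.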


2 Semantics: A Many-Splendored Thing

The core meaning of the word “semantics” is: meaning itself. Yet there is no agreement as to how this applies to the term “Semantic Web.” In what follows, we characterize the many things that one might mean when talking about semantics as it pertains to the Semantic Web. It is not our intention to define the term, but rather to make some important distinctions that people can use to communicate more clearly when talking about the Semantic Web. In the context of achieving successful communication among agents on the Web, we are talking about the need for agents to understand the meaning of the information being exchanged between agents, and the meaning of the content of various information sources that agents require in order to perform their tasks. We focus attention on the questions of what kinds of semantics there are, what kinds of things have semantics, where the semantics are and how they are used. We identify a kind of semantic continuum ranging from the kind of semantics that exist on the Web today to a rich semantic infrastructure on the Semantic Web of the future.

Real World Semantics—Real world semantics [2] are concerned with the “mapping of objects in the model or computational world onto the real world … [and] issues that involve human interpretation, or meaning and use of data or information.” [Ouksel & Sheth 1999] In this context, we talk about the semantics of an “item”, which might be a tag or a term, or possibly a complex expression in some language. We may also speak of the semantics of a possibly large set of expressions, which collectively are intended to represent some real world domain. The real world semantics correspond to the concepts in the real world that the items or sets of items refer to.
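The "mapping onto the real world" can be pictured as an interpretation function, and (as the axiomatic and model-theoretic notions defined below make precise) axioms then narrow down which interpretations are admissible. The symbols and the axiom in the following sketch are invented for illustration, not taken from any published axiomatization:

```latex
% Illustrative sketch only. An interpretation I maps items of the
% computational world onto the real world:
\begin{align*}
  I(\mathit{price}) &= \text{the amount of money a buyer must pay}\\
  I(\mathit{Book1}) &= \text{a particular physical book}
\end{align*}
% Many interpretations fit the bare terms (which currency? are taxes
% included?). Axioms in a logical theory exclude some of them, e.g.
% ``every book has exactly one price'':
\[
  \forall x \, \bigl( \mathit{Book}(x) \rightarrow
    \exists! \, y \; \mathit{hasPrice}(x, y) \bigr)
\]
% Any interpretation in which Book1 carries two distinct prices is thereby
% ruled out: the remaining (actual) models move closer to the intended one.
```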
Agent Communication Language Performatives—In the context of the Semantic Web, there are special items that require semantics to ensure that agents communicate effectively. These are performatives such as request or inform in agent communication languages [Smith et al. 98]. Axiomatic Semantics—An axiomatic semantics for a language specifies “a mapping of a set of descriptions in [that] language into a logical theory expressed in first-order predicate calculus.” The basic idea is that “the logical theory produced by the mapping … of a set of such descriptions is logically equivalent to the intended meaning of that set of descriptions” [Fikes & McGuinness 2001]. Axiomatic semantics have been given for the Resource Description Framework (RDF), RDF Schema (RDF-S), and DAML+OIL. The axiomatic semantics for a language helps to ascribe a real world semantics to expressions in that language, in that it limits the possible models or interpretations that the set of axioms may have. Model-Theoretic Semantics—“A model-theoretic semantics for a language assumes that the language refers to a ‘world’, and describes the minimal conditions that a world must satisfy in order to assign an appropriate meaning for every expression in the language.” [W3C 2002a] It is used as a technical tool for determining when proposed operations on the language preserve meaning. In particular, it characterizes what conclusions can validly be drawn from a given set of expressions, independently of what the symbols mean. Intended vs. Actual Meaning—A key to the successful operation of the Semantic Web is that the intended meaning of Web content be accurately conveyed to potential users of that content. In the case of shopping agents, the meaning of terms like “price” is conveyed based on human consensus. However, mistakes are always possible, due to inconsistency of natural language usage.
When formal languages are used, an author attempts to communicate meaning by specifying axioms in a logical theory. In this case we can talk about intended versus actual models of the theory. There is normally just one intended model. It corresponds to what the author wanted the axioms to represent. The actual models correspond to what the author actually has represented. They consist of all the objects and relationships, etc., in the real world that

[2] This term is commonly used in the literature on semantic integration of databases.


are consistent with the axioms. The goal is to create a set of axioms such that the actual models only include the intended model(s).

We believe that the idea of real world semantics, as described above, captures the essence of the main use of the term "semantics" in a Semantic Web context. However, it is only loosely defined. The ideas of axiomatic and model-theoretic semantics are being used to make the idea of real world semantics for the Semantic Web more concrete.

From this discussion, it is clear that several things have semantics:

1. Terms or expressions referring to the real world subject matter of Web content (e.g., semantic markup);
2. Terms or expressions in an agent communication language (e.g., inform);
3. A language for representing the above information (e.g., the semantics of DAML+OIL or RDF).

2.1 A semantic continuum

We ask three questions about how semantics may be specified:

1. Are the semantics explicit or implicit?
2. Are the semantics expressed informally or formally?
3. Are the semantics intended for human processing, or machine processing?

These give rise to four kinds of semantics:

1. Implicit;
2. Explicit and informal;
3. Explicit and formal, for human processing;
4. Explicit and formal, for machine processing.

We define these to be four somewhat arbitrary points along a semantic continuum (see Figure 1). At one extreme, there are no semantics at all, except what is in the minds of the people who use the terms. At the other extreme, we have formal and explicit semantics that are fully automated. The further we move along the continuum, the less ambiguity there is and the more likely we are to have robust, correctly functioning and easy to maintain Web applications. We consider these four points on our semantic continuum in turn. Note that there are likely to be many cases that are not clear cut and thus arguably may fall somewhere in between.
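The distinction between intended and actual models, described at the start of this section, can be made concrete with a toy sketch. The domain, predicates, and axioms below are invented for illustration: an interpretation is a truth assignment over a one-object domain, and the actual models are the interpretations that satisfy the axioms.

```python
from itertools import product

# Toy domain: a single object p. An interpretation assigns truth values
# to the predicates pump(p) and mechanical_device(p).
interpretations = [
    {"pump": a, "mechanical_device": b} for a, b in product([True, False], repeat=2)
]

def models(axioms):
    """Return the interpretations (actual models) satisfying every axiom."""
    return [i for i in interpretations if all(ax(i) for ax in axioms)]

# The author intends p to be a pump, and pumps to be mechanical devices,
# but writes down only the first fact.
axioms = [lambda i: i["pump"]]
print(len(models(axioms)))  # 2: includes an unintended model in which p
                            # is a pump but not a mechanical device

# Adding the missing axiom (pump implies mechanical device) rules the
# unintended model out, leaving only the intended one.
axioms.append(lambda i: i["mechanical_device"] if i["pump"] else True)
print(len(models(axioms)))  # 1
```

In the same spirit, the goal stated above is to keep adding axioms until the actual models shrink to exactly the intended model(s).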
2.1.1 Implicit Semantics

In the simplest case, the semantics are implicit only. Meaning is conveyed based on a shared understanding derived from human consensus. A common example of this case is the typical use of XML tags, such as price, address, or delivery date. Nowhere in an XML document, or DTD, or Schema, does it say what these tags mean [Cover 98]. However, if there is an implicit shared consensus about what the tags mean, then people can hardwire this implicit semantics into Web application programs, using screen-scrapers and wrappers. This is how one implements shopping agents that search Web sites for the best deals. From the perspective of mature commercial applications that automatically use Web content as conceived by Semantic Web visionaries, this is at or near the current state of the art.

The disadvantage of implicit semantics is that they are rife with ambiguity. People often do disagree about the meaning of a term. For example, prices come in different currencies, and they may or may not include various taxes or shipping costs. The removal of ambiguity is the major motivation for the specialized language used in legal contracts. The costs of identifying and removing ambiguity are very high.

2.1.2 Informal Semantics

At a further point along the continuum, the semantics are explicit and are expressed in an informal manner, e.g., a glossary or a text specification document. Given the complexities of natural language, machines have an extremely limited ability to make direct use of informally expressed semantics. This is mainly for humans. There are many examples of informal semantics, usually found in text specification documents:

• The meaning of tags in HTML, such as <h2>, which means second-level header;


• The meaning of expressions in modeling languages such as UML (Unified Modeling Language) [OMG 2000], and the original specification of RDF Schema [W3C 1999];
• The meaning of terms in the Dublin Core [Weible & Miller 2000].

Typically, the semantics expressed in informal documents are hardwired by humans in working software. Compiler writers use language definition specifications to write compilers. The specifications for RDF and UML are used to develop modeling tools such as CWM and Rational Rose.

The main disadvantage of informal semantics is that there is still much room for ambiguity. This decreases one's confidence that two different implementations (say, of RDF Schema) will be consistent and compatible. Implementations may differ in subtle ways. Users may notice "features" and start depending on them. This can result in problems if interoperability is required or if implementations change. For these and other reasons, informal specifications are sometimes inadequate. This motivates efforts to create formal semantics, e.g., for UML [Evans et al. 1998], RDF [W3C 2002a] and DAML+OIL [van Harmelen et al. 2001].

2.1.3 Formal Semantics for Human Processing

Yet further along the continuum, we have explicit semantics expressed in a formal language. However, they are intended for human processing only. We can think of this as formal documentation, or as formal specifications of meaning. Some examples of this are:

1. Modal logic is used to define the semantics of ontological categories such as rigidity and identity [Guarino et al. 1994]. These are for the benefit of humans, to reduce or eliminate ambiguity in what is meant by these ideas.
2. Modal logic is used to define the semantics of performatives such as inform and request in agent communication languages (ACLs) [Smith et al. 98]. Humans use the formal definitions to understand, evaluate, and compare alternative ACLs.
They are also used to implement agent software systems that support these notions.

[Figure 1 depicts the continuum with the concept pump: Implicit (shared human consensus); Informal, explicit (a text description: Pump, "a device for moving a gas or liquid from one place or container to another"); Formal, for humans (semantics hardwired; used at runtime); Formal, for machines (semantics processed and used at runtime); the formal points are illustrated by the axiom (pump has (superclasses (...))).]

Figure 1: Semantic Continuum. Semantics may be implicit, existing only in the minds of the humans who communicate and build Web applications. They may also be explicit and informal, or they may be formal. The further we move along the continuum, the less ambiguity there is and the more likely it is to have robust, correctly functioning Web applications. For implicit and informal semantics, there is no alternative to hardwiring the semantics into Web application software. In the case of formal semantics, hardwiring remains an option, in which case the formal semantics serve the important role of reducing ambiguity in specifying Web application behavior, compared to implicit or informal semantics. There is also the new possibility of using automated inference to process the semantics at runtime. This would allow for much more robust Web applications, in which agents automatically learn something about the meaning of terms at runtime.
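The hardwiring that Figure 1 refers to, for implicit semantics, is essentially what a shopping agent's wrapper does. A minimal sketch follows; the tag names and document are invented, and a real wrapper would be far more defensive.

```python
import re

# A fragment of a vendor page. Nothing in the document says what <price>
# means; the wrapper author simply "knows" it is a price in US dollars.
page = """
<offer><title>Widget</title><price>19.95</price></offer>
<offer><title>Gadget</title><price>7.50</price></offer>
"""

def cheapest_offer(html: str):
    """Hardwired wrapper: assumes <title>/<price> tags and USD amounts."""
    offers = re.findall(r"<title>(.*?)</title><price>(.*?)</price>", html)
    return min(((title, float(price)) for title, price in offers),
               key=lambda pair: pair[1])

print(cheapest_offer(page))  # ('Gadget', 7.5)
```

If the vendor's implicit convention changes, say prices move to euros or to tax-inclusive amounts, the wrapper silently returns wrong answers: exactly the ambiguity problem described in section 2.1.1.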


3. Many axioms and definitions in the Enterprise Ontology [Uschold et al. 1998] were created without the expectation that they would be used for automated inferencing (although that remained a possibility). The primary purpose was to help communicate the intended meaning to people.

Formal semantics for human processing can go a long way toward eliminating ambiguity, but because there is still a human in the loop, there is ample scope for errors.

2.1.4 Formal Semantics for Machine Processing

Finally, there is the possibility of explicit, formally specified semantics that are intended for machines to directly process using automated inference. The idea is that when new terms are encountered, it is possible to automatically infer something about their meaning and thus how to use them. Inference engines can be used to derive new information for a wide variety of purposes. We will explore this topic in depth in the next section.

3 Machine Processible Semantics

The defining feature of the Semantic Web is machine-usable content. This implies that the machine knows what to do with the Web content it encounters. This does not imply that there is any explicit account of the semantics. Instead, the semantics (whether implicit, informal, or formal) can be hardwired into the Web applications. A more robust approach is to formally represent the semantics and allow the machine to process them to dynamically discover what the content means and how to use it; we call this machine processible semantics. This may be an impossible goal to achieve in its full generality, so we will restrict this discussion to the following specific question: how can a machine (i.e., a software agent) learn something about the meaning of a term that it has never before encountered?

One way to look at this is from a procedural perspective.
For example, how does a compiler know how to interpret a symbol like "+" in a computer language? Or, how does an agent system know what to do when it encounters the performative "inform"? The possibly informal semantics of these symbols are hardwired into a procedure by a human beforehand, and it is intended for machine processing. When the compiler encounters the symbol, it places a call to the appropriate procedure. The meaning of the symbol is: what happens when the procedure is executed. The "agent" determines the meaning of the symbol by calling the appropriate procedure. So, in some sense, this may be viewed as machine processible semantics.

We are instead focusing on a declarative view. From this perspective, we ask how an agent can learn the meaning of a new term from a formal, declarative specification of its semantics. Ideally, we would like to do this without making any assumptions at all. In this case, all symbols might as well be in a never-before-seen script from a long-extinct intelligent species on Mars. We have no knowledge of the meaning of the symbols, the rules of syntax for the language, nor do we have any information on the semantics of the language. This general case is the most challenging kind of cryptography. It is extremely difficult for humans, never mind machines. So, we have to start making some assumptions.

3.1 Issues and Assumptions

3.1.1 Language Heterogeneity

Different ontology languages are often based on different underlying paradigms (e.g., description logic, first-order logic, frame-based representation, taxonomy, semantic net, and thesaurus). Some ontology languages are very expressive and some are not. Some have a formally defined semantics and some do not. Some have inference support and some do not. If we are to allow all these different languages, then we are faced with the very challenging problem of translating between them. For simplicity, then, we will assume that the expressions encountered by our agent are from a single language whose syntax and semantics are already known to the agent, e.g., RDF Schema or DAML+OIL.
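This single-language assumption buys the agent a known syntax. For instance, if the shared language is RDF Schema, a subclass assertion can be read off mechanically from the RDF/XML serialization. The sketch below uses only the standard RDF and RDFS namespaces; the class names are invented for illustration.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"

# An RDF Schema fragment in the assumed shared language. The vocabulary
# (fuel-pump, pump) and the #-relative names are invented.
doc = f"""
<rdf:RDF xmlns:rdf="{RDF}" xmlns:rdfs="{RDFS}">
  <rdfs:Class rdf:about="#fuel-pump">
    <rdfs:subClassOf rdf:resource="#pump"/>
  </rdfs:Class>
</rdf:RDF>
"""

def subclass_links(rdf_xml: str):
    """Extract (subclass, superclass) pairs. This is possible only because
    the agent already knows the language's syntax (RDF/XML) and the fixed
    meaning of rdfs:subClassOf."""
    root = ET.fromstring(rdf_xml)
    links = []
    for cls in root.findall(f"{{{RDFS}}}Class"):
        sub = cls.get(f"{{{RDF}}}about")
        for sup in cls.findall(f"{{{RDFS}}}subClassOf"):
            links.append((sub, sup.get(f"{{{RDF}}}resource")))
    return links

print(subclass_links(doc))  # [('#fuel-pump', '#pump')]
```

Note that the fixed language gives the agent the meaning of rdfs:subClassOf; what the domain terms themselves mean is still open, which is the subject of the next two subsections.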


3.1.2 Incompatible Conceptualizations

Even with a uniform language, there may still be incompatible assumptions in the conceptualization. For example, in [Hayes 96] it is shown that two representations for time, one based on time intervals and another based on time points, are fundamentally incompatible. That is, an agent whose time ontology is based on time points can never incorporate the axioms of another agent whose ontology for time is based on time intervals. From a logic perspective, the two representations are like oil and water. So, we shall further assume that the conceptualizations are compatible.

3.1.3 Term Heterogeneity and Different Modeling Styles

Even if we assume a shared language and compatible conceptualizations, it is still possible, indeed likely, that different people will build different ontologies for the same domain. Two different terms may have the same meaning, and the same term may have two different meanings. The same concept may be modeled at different levels of detail. A given idea may be modeled using different primitives in the language. For example, is the idea of being red modeled by having the attribute color with value red, or is it modeled as a class called something like RedThings? Or is it both, where either (1) they are independent or (2) RedThings is a derived class defined in terms of the attribute color and the value red?

Even if the exact same language is used, and if there is substantial similarity in the underlying conceptualizations and assumptions, the inference required to determine whether two terms actually mean the same thing is intractable at best, and may be impossible. In section 2, we spoke of the intended vs. actual models of a logical theory. Respectively, these correspond to what the author of the theory wanted to represent vs. what they actually did represent.
The actual models consist of all the objects and relationships, etc., in the real world that are consistent with the axioms. Because the machine has access to the axioms, it may in principle be possible for a computer to determine whether two logical theories are equivalent, and thus whether the semantics of two terms are identical. That would be true, for example, if the two theories had the same actual models. However, computing this is, in general, intractable.

For a computer to automatically determine the intended meaning of a given term in an ontology is an impossible task, in principle. This would require seeing into the mind of the author. Therefore, a computer cannot determine whether the intended meaning of two terms is the same. This is analogous to formal specifications for software. The specification is what the author actually said he or she wanted the program to do. It may be possible to verify that a computer program conforms to this specification, but it will never be possible to verify that a program does what the author actually wanted it to do. A much more detailed discussion of these formal issues may be found in [Gruninger and Uschold 2002].

To reduce the problems of term heterogeneity and different modeling styles, we further assume that the agent encounters a term that explicitly corresponds to a publicly declared concept that it already knows about (e.g., via markup).

3.2 An Example

We now consider a simple example of how we can use machine processing of formal semantics to do something practical using today's technology. As we have seen, automatic machine processing of formal semantics is fraught with difficulties. We have made the following simplifying assumptions:

1. All parties agree to use the same representation language;
2. The conceptualizations are logically compatible;
3. There are publicly declared concepts that different agents can use to agree on meaning.
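Under these assumptions, the core inference in the example that follows, walking up a subclass hierarchy to classify a never-before-seen term, can be sketched in a few lines. The ontology fragment below is a stand-in for the paper's Shared Hydraulics Ontology; the names are illustrative only.

```python
# Publicly declared ontology fragment (illustrative): term -> superclass.
ontology = {
    "fuel-pump": "pump",                 # learned from semantic markup
    "pump": "mechanical-device",         # declared in the shared ontology
    "typewriter": "mechanical-device",
}

def is_subclass_of(term: str, target: str) -> bool:
    """Walk the superclass chain; the agent needs no prior knowledge of
    `term` beyond the declared subclass links."""
    while term is not None:
        if term == target:
            return True
        term = ontology.get(term)
    return False

# The agent has never seen "fuel-pump", yet can decide the document is
# relevant to mechanical devices.
print(is_subclass_of("fuel-pump", "mechanical-device"))  # True
print(is_subclass_of("fuel-pump", "typewriter"))         # False
```

This is deliberately weak inference: the agent learns only that fuel-pump is some kind of pump, which, as the example below explains, is enough to judge relevance.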
Suppose that an agent is tasked with discovering information about a variety of mechanical devices. It encounters a Web page with the text: “FUEL PUMP” (see Figure 2). Lacking natural language understanding capability, the term is completely ambiguous. We can reduce the ambiguity by associating the text “FUEL PUMP” with a formally defined term fuel-pump (this is called semantic markup). The agent may never have encountered this concept before. In this case, the definition for the new term is


defined in terms of the term pump, which in turn is defined in an external Shared Hydraulics Ontology. The agent can learn that fuel-pump is a subclass of pump, which in turn is a subclass of mechanical-device.
Figure 2: Formal Semantics for Machine Processing — An agent is searching for information about mechanical devices, as defined in a public ontology (SHO). A document contains the term "FUEL PUMP," which the agent has never encountered. Semantic markup reveals that it refers to the concept fuel-pump, which is a kind of "pump," which is in turn defined in SHO as a kind of mechanical device. The agent infers that the document is relevant.

The agent now knows that fuel-pump is not a typewriter or a space ship, because they are not kinds of pumps. The agent has no knowledge of what kind of pump it is, only that it is some kind of pump. However, this is sufficient to allow the agent to return this document as being relevant to mechanical devices, even though it has never before heard of the term fuel-pump. It is possible to do this with today's technology using research tools that have been developed [Decker et al. 1999; Jasper & Uschold 2001]. There are also attempts to commercialize this technology, e.g., [Ontoprise 2001]. Scale remains a huge barrier to commercial success.

This example illustrates the importance of semantic markup and the sharing of ontologies. It also demonstrates the importance of formal ontologies and automated inference. Inference engines can be used to derive new information for a wide variety of purposes; in particular, a formally specified ontology allows agents to use theorem proving and consistency checking techniques to determine whether or not they have agreement on the semantics of their terminology. The ability of the agent to infer something about the meaning of fuel-pump depends on the existence of a formal semantics for an ontology language such as DAML+OIL. The language semantics also allow the agent to infer the meaning of complex expressions built up using language primitives. The semantics of the language are not machine processible; they are written for humans only.
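The subsumption inference described above can be sketched in a few lines. This is an illustrative sketch only, not actual DAML+OIL or any tool from the paper: the superclass table stands in for what an agent would harvest from semantic markup and a shared ontology, and the class names mirror the fuel-pump example.

```python
# Superclass links, as an agent might collect them: one from the
# document's semantic markup, one from the shared ontology (SHO).
SUPERCLASS = {
    "fuel-pump": "SHO:pump",                  # from the document markup
    "SHO:pump": "SHO:mechanical-device",      # from the shared ontology
}

def is_subclass_of(term, target):
    """Walk the superclass chain; True if `term` is transitively a `target`."""
    while term is not None:
        if term == target:
            return True
        term = SUPERCLASS.get(term)
    return False

def relevant(document_terms, query_class):
    """A document is relevant if any marked-up term falls under the query class."""
    return any(is_subclass_of(t, query_class) for t in document_terms)

# The agent has never seen "fuel-pump", yet can conclude the document is
# about mechanical devices, and that a typewriter is not a kind of pump.
print(relevant(["fuel-pump"], "SHO:mechanical-device"))  # True
print(is_subclass_of("typewriter", "SHO:pump"))          # False
```

Note that the agent learns only that fuel-pump is some kind of pump; as the text says, that weak conclusion is already enough to return the document as relevant.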
People use these language semantics to write inference engines or other software to correctly interpret and manipulate expressions in the language. Note that today's spectacularly impressive search engines by and large do not use formal semantics approaches. Overall, it remains an unproven conjecture that such approaches will enhance search capabilities, or have significant impact anywhere else on the Web. For example, there appear to be insufficient business drivers to motivate venture capitalists to invest heavily in Semantic Web companies. Fortunately, the W3C is moving forward on this issue by identifying a wide variety of use cases to drive
the requirements for a standard web ontology language [W3C 2002b]. For further discussion of inference on the Semantic Web, see [Horrocks 2002] and [Jasper & Tyler 2001].

4 Why do Web Shopping Agents Work?

We have taken some time to consider what people might mean when they talk about the Semantic Web. There appears to be consensus that the key defining feature is machine-usable Web content. However, we argue that by this definition there is an important sense in which the Semantic Web already exists. The best examples of this are travel and bookseller shopping agents that automatically access Web pages looking for good deals. We shall not quibble about whether or not this should "count," nor how the definition of Semantic Web might need to be modified accordingly. It is more useful to regard these examples collectively as a degenerate case of the Semantic Web. In this section, we examine why Web shopping agents work, what their limitations are, and what we can expect in the future.

4.1 Requirements for Machine Usable Content

The following requirements are fundamental for enabling machines to make use of Web content.

Requirement 1: The machine needs to know what to do with the content that it encounters. For example, it needs to recognize that it has found the content it is looking for and to execute the appropriate procedures when it has been found. Ultimately it is humans that write the programs that enable the machines to do the right thing. So:

Requirement 2: Humans must know what to do with the content that the program is expected to encounter. This further requires that:

Requirement 3: Humans know the meaning of the expected content, or are able to encode a procedure that can learn that meaning.

In determining what makes the Web shopping agent examples work, we consider the following questions:

Question 1: What is hardwired and what isn't?
Question 2: How much agreement is there among different Web sites in their use of terminology and in the similarity of the concepts being referred to?

Question 3: To what extent are the semantics of the content clearly specified? Are they implicit, explicit and informal, or formal?

Question 4: Are agreements and/or semantics publicly declared?

4.1.1 Hardwiring

The general case of automatically determining the meaning of Web content is somewhere between intractable and impossible. Thus, a human will always be hardwiring some of the semantics into Web applications. The question is what is hardwired and what is not. The shopping agent applications essentially hardwire the meaning of all the terms and procedures. The hardwiring enables the machine to "know" how to use the content. The hardwiring approach is not robust to changes in Web content.

The alternative to hardwiring is allowing the machine to process the semantics specifications directly. In our simple fuel pump example, we have an additional degree of flexibility because we need not hardwire the meaning of every term. Instead, we hardwire the semantics of the representation language into the inference procedures. To make this work, we made many assumptions. For example, by assuming (1) that only one representation language is used, (2) that the conceptualizations are logically compatible, and (3) that there
