Where are the Semantics in the Semantic Web
Final Draft Submitted to AI Magazine

that was sometimes evident in the early days of Artificial Intelligence. There is also confusion about what constitutes a legitimate Semantic Web application. Some seem to have the view that an RDF tool such as CWM1 is one. This is true only in the same sense that KEE and ART were AI applications. They were certainly generating income for the vendors, but that is different from the companies using the tools to develop applications that help their bottom line. The lack of an adequate definition of the Semantic Web, however, is no reason to stop pursuing its development any more than an inadequate definition of AI was a reason to cease AI research. Quite the opposite: new ideas always need an incubation period. The research community, industrial participants, and software vendors are working with the W3C to make the Semantic Web vision a reality ([Berners-Lee et al 2001], [DAML 2001], [W3C 2001]). It will be layered, extensible, and composable. A major part of this will entail representing and reasoning with semantic metadata, and/or providing semantic markup in the information resources.
Fundamental to the semantic infrastructure are ontologies, knowledge bases, and agents, along with inference, proof, and sophisticated semantic querying capability. The main intent of the Semantic Web is to give machines much better access to information resources so they can be information intermediaries in support of humans. According to the vision described in [Berners-Lee et al 2001], agents will be pervasive on the Web, carrying out a multitude of everyday tasks. Hendler describes many of the important technical issues that this entails, emphasizing the interdependence of agent technology and ontologies [Hendler 2001]. In order to carry out their required tasks, intelligent agents must communicate and understand meaning. They must advertise their capabilities, and recognize the capabilities of other agents. They must locate meaningful information resources on the Web and combine them in meaningful ways to perform tasks. They need to recognize, interpret, and respond to communication acts from other agents. In other words, when agents communicate with each other, there needs to be some way to ensure that the meaning of what one agent “says” is accurately conveyed to the other agent.

There are two extremes, in principle, for handling this problem. The simplest (and perhaps the most common) approach is to ignore the problem altogether. That is, just assume that all agents are using the same terms to mean the same things. In practice, this will usually be an assumption built into the application. The assumption could be implicit and informal, or it could be an explicit agreement among all parties to commit to using the same terms in a pre-defined manner. This only works, however, when one has full control over what agents exist and what they might communicate. In reality, agents need to interact in a much wider world, where it cannot be assumed that other agents will use the same terms, or if they do, it cannot be assumed that the terms will mean the same thing.
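The failure mode of the first extreme can be illustrated with a small sketch. The agents, the term "weight", and the unit mismatch below are hypothetical, chosen only to show how a message can be syntactically well-formed yet semantically misunderstood:

```python
# Hypothetical sketch: two agents assume they share a vocabulary.
# Agent A uses "weight" to mean kilograms; Agent B reads it as pounds.

def agent_a_build_message():
    """Agent A reports a shipment weight, implicitly in kilograms."""
    return {"item": "widget", "weight": 50}  # A means 50 kg

def agent_b_interpret(message):
    """Agent B assumes 'weight' is in pounds -- a silent mismatch."""
    assumed_pounds = message["weight"]
    return assumed_pounds / 2.20462  # B converts "pounds" to kilograms

msg = agent_a_build_message()
understood_kg = agent_b_interpret(msg)
# Agent A meant 50 kg, but Agent B concludes roughly 22.7 kg. No error is
# raised anywhere: the message parses fine, only the meaning was lost.
```

Nothing in the exchange flags the disagreement, which is precisely why the implicit-agreement approach only works when one party controls every agent in the system.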
The moment we accept the problem and grant that agents may not use the same terms to mean the same things, we need a way for an agent to discover what another agent means when it communicates. In order for this to happen, every agent will need to publicly declare exactly what terms it is using and what they mean. This specification is commonly referred to as the agent’s ontology [Gruber 1993]. If it were written only for people to understand, this specification could be just a glossary. However, meaning must be accessible to other software agents. This requires that the meaning be encoded in some kind of formal language. This will enable a given agent to use automated reasoning to accurately determine the meaning of other agents’ terms. For example, suppose Agent 1 sends a message to Agent 2 and in this message is a pointer to Agent 1’s ontology. Agent 2 can then look in Agent 1’s ontology to see what the terms mean, the message is successfully communicated, and the agent’s task is successfully performed. At least this is the theory. In practice there is a plethora of difficulties. The holy grail is for this to happen consistently, reliably, and fully automatically. Most of these difficulties arise from various sources of heterogeneity. For example, there are many different ontology representation languages, different modeling styles, and inconsistent use of terminology, to name a few. This is explored further in Section 3.

1. Closed World Machine, http://infomesh.net/2001/cwm/
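The message-plus-ontology-pointer exchange described above can be sketched as follows. This is a minimal illustration under strong assumptions: the agents, terms, concept identifiers, and example.org URI are all invented for the example, the ontologies are plain dictionaries rather than documents in a formal ontology language, and "dereferencing" the pointer is simulated by a local lookup:

```python
# Hypothetical sketch of the ontology-pointer scheme: each agent publishes
# a mapping from its local terms to shared concept identifiers. A real
# system would fetch and reason over a formal ontology instead.

AGENT1_ONTOLOGY = {"cost": "ex:PriceInclTax", "ship_by": "ex:DispatchDate"}
AGENT2_ONTOLOGY = {"price": "ex:PriceInclTax", "dispatch": "ex:DispatchDate"}

def interpret(message, sender_ontology, own_ontology):
    """Translate the sender's terms into the receiver's vocabulary by
    routing each term through the sender's published ontology."""
    concept_to_own = {concept: term for term, concept in own_ontology.items()}
    understood = {}
    for term, value in message["body"].items():
        concept = sender_ontology[term]       # what the sender meant
        own_term = concept_to_own[concept]    # the receiver's word for it
        understood[own_term] = value
    return understood

# Agent 1's message carries a pointer to its ontology alongside the content.
message = {"ontology": "http://example.org/agent1-ontology",
           "body": {"cost": 42.0, "ship_by": "2001-06-01"}}

# Agent 2 "dereferences" the pointer (here, a local lookup) and interprets.
understood = interpret(message, AGENT1_ONTOLOGY, AGENT2_ONTOLOGY)
# understood == {"price": 42.0, "dispatch": "2001-06-01"}
```

The sketch also makes the difficulties concrete: it works only because both dictionaries happen to bottom out in the same concept identifiers. Different representation languages, modeling styles, or terminology would break the lookup, which is exactly the heterogeneity problem taken up in Section 3.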