7 LOGICAL AGENTS

In which we design agents that can form representations of the world, use a process of inference to derive new representations about the world, and use these new representations to deduce what to do.

This chapter introduces knowledge-based agents. The concepts that we discuss, the representation of knowledge and the reasoning processes that bring knowledge to life, are central to the entire field of artificial intelligence.

Humans, it seems, know things and do reasoning. Knowledge and reasoning are also important for artificial agents because they enable successful behaviors that would be very hard to achieve otherwise. We have seen that knowledge of action outcomes enables problem-solving agents to perform well in complex environments. A reflex agent could only find its way from Arad to Bucharest by dumb luck. The knowledge of problem-solving agents is, however, very specific and inflexible. A chess program can calculate the legal moves of its king, but does not know in any useful sense that no piece can be on two different squares at the same time. Knowledge-based agents can benefit from knowledge expressed in very general forms, combining and recombining information to suit myriad purposes. Often, this process can be quite far removed from the needs of the moment, as when a mathematician proves a theorem or an astronomer calculates the earth's life expectancy.

Knowledge and reasoning also play a crucial role in dealing with partially observable environments. A knowledge-based agent can combine general knowledge with current percepts to infer hidden aspects of the current state prior to selecting actions. For example, a physician diagnoses a patient, that is, infers a disease state that is not directly observable, prior to choosing a treatment. Some of the knowledge that the physician uses is in the form of rules learned from textbooks and teachers, and some is in the form of patterns of association that the physician may not be able to consciously describe. If it is inside the physician's head, it counts as knowledge.

Understanding natural language also requires inferring hidden state, namely, the intention of the speaker. When we hear, "John saw the diamond through the window and coveted it," we know "it" refers to the diamond and not the window; we reason, perhaps unconsciously, with our knowledge of relative value. Similarly, when we hear, "John threw the brick through the window and broke it," we know "it" refers to the window. Reasoning allows
us to cope with the virtually infinite variety of utterances using a finite store of commonsense knowledge. Problem-solving agents have difficulty with this kind of ambiguity because their representation of contingency problems is inherently exponential.

Our final reason for studying knowledge-based agents is their flexibility. They are able to accept new tasks in the form of explicitly described goals, they can achieve competence quickly by being told or learning new knowledge about the environment, and they can adapt to changes in the environment by updating the relevant knowledge.

We begin in Section 7.1 with the overall agent design. Section 7.2 introduces a simple new environment, the wumpus world, and illustrates the operation of a knowledge-based agent without going into any technical detail. Then, in Section 7.3, we explain the general principles of logic. Logic will be the primary vehicle for representing knowledge throughout Part III of the book. The knowledge of logical agents is always definite: each proposition is either true or false in the world, although the agent may be agnostic about some propositions. Logic has the pedagogical advantage of being a simple example of a representation for knowledge-based agents, but logic has some severe limitations. Clearly, a large portion of the reasoning carried out by humans and other agents in partially observable environments depends on handling knowledge that is uncertain. Logic cannot represent this uncertainty well, so in Part V we cover probability, which can. In Part VI and Part VII we cover many representations, including some based on continuous mathematics such as mixtures of Gaussians, neural networks, and other representations.

Section 7.4 of this chapter defines a simple logic called propositional logic. While much less expressive than first-order logic (Chapter 8), propositional logic serves to illustrate all the basic concepts of logic. There is also a well-developed technology for reasoning in propositional logic, which we describe in Sections 7.5 and 7.6. Finally, Section 7.7 combines the concept of logical agents with the technology of propositional logic to build some simple agents for the wumpus world. Certain shortcomings in propositional logic are identified, motivating the development of more powerful logics in subsequent chapters.

7.1 KNOWLEDGE-BASED AGENTS

The central component of a knowledge-based agent is its knowledge base, or KB. Informally, a knowledge base is a set of sentences. (Here "sentence" is used as a technical term. It is related but is not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.

There must be a way to add new sentences to the knowledge base and a way to query what is known. The standard names for these tasks are TELL and ASK, respectively. Both tasks may involve inference, that is, deriving new sentences from old. In logical agents, which are the main subject of study in this chapter, inference must obey the fundamental requirement that when one ASKs a question of the knowledge base, the answer should follow from what has been told (or rather, TELLed) to the knowledge base previously. Later in the
chapter, we will be more precise about the crucial word "follow." For now, take it to mean that the inference process should not just make things up as it goes along.

function KB-AGENT(percept) returns an action
    static: KB, a knowledge base
            t, a counter, initially 0, indicating time

    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action ← ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t ← t + 1
    return action

Figure 7.1 A generic knowledge-based agent.

Figure 7.1 shows the outline of a knowledge-based agent program. Like all our agents, it takes a percept as input and returns an action. The agent maintains a knowledge base, KB, which may initially contain some background knowledge. Each time the agent program is called, it does two things. First, it TELLs the knowledge base what it perceives. Second, it ASKs the knowledge base what action it should perform. In the process of answering this query, extensive reasoning may be done about the current state of the world, about the outcomes of possible action sequences, and so on. Once the action is chosen, the agent records its choice with TELL and executes the action. The second TELL is necessary to let the knowledge base know that the hypothetical action has actually been executed.

The details of the representation language are hidden inside two functions that implement the interface between the sensors and actuators and the core representation and reasoning system. MAKE-PERCEPT-SENTENCE takes a percept and a time and returns a sentence asserting that the agent perceived the percept at the given time. MAKE-ACTION-QUERY takes a time as input and returns a sentence that asks what action should be performed at that time. The details of the inference mechanisms are hidden inside TELL and ASK. Later sections will reveal these details.

The agent in Figure 7.1 appears quite similar to the agents with internal state described in Chapter 2. Because of the definitions of TELL and ASK, however, the knowledge-based agent is not an arbitrary program for calculating actions. It is amenable to a description at the knowledge level, where we need specify only what the agent knows and what its goals are, in order to fix its behavior. For example, an automated taxi might have the goal of delivering a passenger to Marin County and might know that it is in San Francisco and that the Golden Gate Bridge is the only link between the two locations. Then we can expect it to cross the Golden Gate Bridge because it knows that that will achieve its goal. Notice that this analysis is independent of how the taxi works at the implementation level. It doesn't matter whether its geographical knowledge is implemented as linked lists or pixel maps, or whether it reasons by manipulating strings of symbols stored in registers or by propagating noisy signals in a network of neurons.
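To make the control loop concrete, the following Python sketch shows one way the skeleton of Figure 7.1 might be realized. The class and function names (KnowledgeBase, tell, ask, and the sentence constructors), as well as the tuple encoding of sentences, are our own illustrative assumptions rather than anything fixed by the figure; any of the inference methods developed later in the chapter could sit behind the ask call.

class KnowledgeBase:
    """A minimal knowledge base: a growing list of sentences plus a
    pluggable inference procedure supplied by the caller."""

    def __init__(self, background=(), infer=None):
        self.sentences = list(background)
        # If no inference procedure is supplied, ASK simply answers None.
        self.infer = infer if infer is not None else (lambda sentences, query: None)

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        return self.infer(self.sentences, query)


def make_percept_sentence(percept, t):
    return ("Percept", percept, t)


def make_action_query(t):
    return ("ActionQuery", t)


def make_action_sentence(action, t):
    return ("Action", action, t)


def kb_agent_program(kb):
    """Return an agent function mirroring the loop of Figure 7.1."""
    t = 0

    def program(percept):
        nonlocal t
        kb.tell(make_percept_sentence(percept, t))
        action = kb.ask(make_action_query(t))
        kb.tell(make_action_sentence(action, t))
        t += 1
        return action

    return program

The point of the sketch is only that the agent program itself is independent of how TELL and ASK are implemented, which is exactly the separation the knowledge level provides.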
As we mentioned in the introduction to the chapter, one can build a knowledge-based agent simply by TELLing it what it needs to know. The agent's initial program, before it starts to receive percepts, is built by adding one by one the sentences that represent the designer's knowledge of the environment. Designing the representation language to make it easy to express this knowledge in the form of sentences simplifies the construction problem enormously. This is called the declarative approach to system building. In contrast, the procedural approach encodes desired behaviors directly as program code; minimizing the role of explicit representation and reasoning can result in a much more efficient system. We will see agents of both kinds in Section 7.7. In the 1970s and 1980s, advocates of the two approaches engaged in heated debates. We now understand that a successful agent must combine both declarative and procedural elements in its design.

In addition to TELLing it what it needs to know, we can provide a knowledge-based agent with mechanisms that allow it to learn for itself. These mechanisms, which are discussed in Chapter 18, create general knowledge about the environment out of a series of percepts. This knowledge can be incorporated into the agent's knowledge base and used for decision making. In this way, the agent can be fully autonomous.

All these capabilities (representation, reasoning, and learning) rest on the centuries-long development of the theory and technology of logic. Before explaining that theory and technology, however, we will create a simple world with which to illustrate them.

7.2 THE WUMPUS WORLD

The wumpus world is a cave consisting of rooms connected by passageways. Lurking somewhere in the cave is the wumpus, a beast that eats anyone who enters its room. The wumpus can be shot by an agent, but the agent has only one arrow. Some rooms contain bottomless pits that will trap anyone who wanders into these rooms (except for the wumpus, which is too big to fall in). The only mitigating feature of living in this environment is the possibility of finding a heap of gold. Although the wumpus world is rather tame by modern computer game standards, it makes an excellent testbed environment for intelligent agents. Michael Genesereth was the first to suggest this.

A sample wumpus world is shown in Figure 7.2. The precise definition of the task environment is given, as suggested in Chapter 2, by the PEAS description:

♦ Performance measure: +1000 for picking up the gold, -1000 for falling into a pit or being eaten by the wumpus, -1 for each action taken and -10 for using up the arrow.

♦ Environment: A 4 × 4 grid of rooms. The agent always starts in the square labeled [1,1], facing to the right. The locations of the gold and the wumpus are chosen randomly, with a uniform distribution, from the squares other than the start square. In addition, each square other than the start can be a pit, with probability 0.2.

♦ Actuators: The agent can move forward, turn left by 90°, or turn right by 90°. The agent dies a miserable death if it enters a square containing a pit or a live wumpus. (It is safe, albeit smelly, to enter a square with a dead wumpus.) Moving forward has no
effect if there is a wall in front of the agent. The action Grab can be used to pick up an object that is in the same square as the agent. The action Shoot can be used to fire an arrow in a straight line in the direction the agent is facing. The arrow continues until it either hits (and hence kills) the wumpus or hits a wall. The agent only has one arrow, so only the first Shoot action has any effect.

♦ Sensors: The agent has five sensors, each of which gives a single bit of information:
  – In the square containing the wumpus and in the directly (not diagonally) adjacent squares, the agent will perceive a stench.
  – In the squares directly adjacent to a pit, the agent will perceive a breeze.
  – In the square where the gold is, the agent will perceive a glitter.
  – When an agent walks into a wall, it will perceive a bump.
  – When the wumpus is killed, it emits a woeful scream that can be perceived anywhere in the cave.

The percepts will be given to the agent in the form of a list of five symbols; for example, if there is a stench and a breeze, but no glitter, bump, or scream, the agent will receive the percept [Stench, Breeze, None, None, None].

Exercise 7.1 asks you to define the wumpus environment along the various dimensions given in Chapter 2. The principal difficulty for the agent is its initial ignorance of the configuration of the environment; overcoming this ignorance seems to require logical reasoning. In most instances of the wumpus world, it is possible for the agent to retrieve the gold safely. Occasionally, the agent must choose between going home empty-handed and risking death to find the gold. About 21% of the environments are utterly unfair, because the gold is in a pit or surrounded by pits.

Figure 7.2 A typical wumpus world. The agent is in the bottom left corner.
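The sensor rules above are simple enough to capture in a few lines of code. The Python sketch below is a minimal illustration of the percept model, not part of the text: the coordinate convention, the class and function names, and the sample layout (chosen to match Figure 7.2 as described in the exploration that follows) are all our own assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class WumpusWorld:
    """A fixed 4x4 wumpus-world layout; squares are (column, row) pairs starting at (1, 1)."""
    pits: frozenset
    wumpus: tuple
    gold: tuple

def adjacent(square):
    """Directly (not diagonally) adjacent squares that lie inside the 4x4 grid."""
    x, y = square
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in candidates if 1 <= cx <= 4 and 1 <= cy <= 4]

def percept(world, square, bumped=False, scream=False):
    """Return the five-element percept [Stench, Breeze, Glitter, Bump, Scream]."""
    stench = "Stench" if square == world.wumpus or world.wumpus in adjacent(square) else None
    breeze = "Breeze" if any(p in world.pits for p in adjacent(square)) else None
    glitter = "Glitter" if square == world.gold else None
    bump = "Bump" if bumped else None
    heard = "Scream" if scream else None
    return [stench, breeze, glitter, bump, heard]

# A layout matching Figure 7.2: pits in [3,1], [3,3], [4,4]; wumpus in [1,3]; gold in [2,3].
world = WumpusWorld(pits=frozenset({(3, 1), (3, 3), (4, 4)}), wumpus=(1, 3), gold=(2, 3))
print(percept(world, (1, 1)))   # [None, None, None, None, None]
print(percept(world, (2, 1)))   # [None, 'Breeze', None, None, None]

The two printed percepts are exactly the ones the agent receives in the first two steps of the walkthrough below.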
Let us watch a knowledge-based wumpus agent exploring the environment shown in Figure 7.2. The agent's initial knowledge base contains the rules of the environment, as listed previously; in particular, it knows that it is in [1,1] and that [1,1] is a safe square. We will see how its knowledge evolves as new percepts arrive and actions are taken.

The first percept is [None, None, None, None, None], from which the agent can conclude that its neighboring squares are safe. Figure 7.3(a) shows the agent's state of knowledge at this point. We list (some of) the sentences in the knowledge base using letters such as B (breezy) and OK (safe, neither pit nor wumpus) marked in the appropriate squares. Figure 7.2, on the other hand, depicts the world itself.

Figure 7.3 The first step taken by the agent in the wumpus world. (a) The initial situation, after percept [None, None, None, None, None]. (b) After one move, with percept [None, Breeze, None, None, None].

From the fact that there was no stench or breeze in [1,1], the agent can infer that [1,2] and [2,1] are free of dangers. They are marked with an OK to indicate this. A cautious agent will move only into a square that it knows is OK. Let us suppose the agent decides to move forward to [2,1], giving the scene in Figure 7.3(b).

The agent detects a breeze in [2,1], so there must be a pit in a neighboring square. The pit cannot be in [1,1], by the rules of the game, so there must be a pit in [2,2] or [3,1] or both. The notation P? in Figure 7.3(b) indicates a possible pit in those squares. At this point, there is only one known square that is OK and has not been visited yet. So the prudent agent will turn around, go back to [1,1], and then proceed to [1,2].

The new percept in [1,2] is [Stench, None, None, None, None], resulting in the state of knowledge shown in Figure 7.4(a). The stench in [1,2] means that there must be a wumpus nearby. But the wumpus cannot be in [1,1], by the rules of the game, and it cannot be in [2,2] (or the agent would have detected a stench when it was in [2,1]). Therefore, the agent can infer that the wumpus is in [1,3]. The notation W! indicates this. Moreover, the lack of a Breeze in [1,2] implies that there is no pit in [2,2]. Yet we already inferred that there must be a pit in either [2,2] or [3,1], so this means it must be in [3,1].
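The key step, locating the pit, can be written out explicitly. This is our own summary of the argument just given, anticipating the propositional notation of Section 7.4 (with Px,y as shorthand for "there is a pit in [x,y]"):

    From the breeze in [2,1]:                 P2,2 ∨ P3,1
    From the absence of a breeze in [1,2]:    ¬P2,2
    Therefore:                                P3,1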
This is a fairly difficult inference, because it combines knowledge gained at different times in different places and relies on the lack of a percept to make one crucial step. The inference is beyond the abilities of most animals, but it is typical of the kind of reasoning that a logical agent does.

Figure 7.4 Two later stages in the progress of the agent. (a) After the third move, with percept [Stench, None, None, None, None]. (b) After the fifth move, with percept [Stench, Breeze, Glitter, None, None].

The agent has now proved to itself that there is neither a pit nor a wumpus in [2,2], so it is OK to move there. We will not show the agent's state of knowledge at [2,2]; we just assume that the agent turns and moves to [2,3], giving us Figure 7.4(b). In [2,3], the agent detects a glitter, so it should grab the gold and thereby end the game.

In each case where the agent draws a conclusion from the available information, that conclusion is guaranteed to be correct if the available information is correct. This is a fundamental property of logical reasoning. In the rest of this chapter, we describe how to build logical agents that can represent the necessary information and draw the conclusions that were described in the preceding paragraphs.

7.3 LOGIC

This section provides an overview of all the fundamental concepts of logical representation and reasoning. We postpone the technical details of any particular form of logic until the next section. We will instead use informal examples from the wumpus world and from the familiar realm of arithmetic. We adopt this rather unusual approach because the ideas of logic are far more general and beautiful than is commonly supposed.

In Section 7.1, we said that knowledge bases consist of sentences. These sentences are expressed according to the syntax of the representation language, which specifies all the sentences that are well formed. The notion of syntax is clear enough in ordinary arithmetic: "x + y = 4" is a well-formed sentence, whereas "x2y+ =" is not. The syntax of logical
languages (and of arithmetic, for that matter) is usually designed for writing papers and books. There are literally dozens of different syntaxes, some with lots of Greek letters and exotic mathematical symbols, and some with rather visually appealing diagrams with arrows and bubbles. In all cases, however, sentences in an agent's knowledge base are real physical configurations of (parts of) the agent. Reasoning will involve generating and manipulating those configurations.

A logic must also define the semantics of the language. Loosely speaking, semantics has to do with the "meaning" of sentences. In logic, the definition is more precise. The semantics of the language defines the truth of each sentence with respect to each possible world. For example, the usual semantics adopted for arithmetic specifies that the sentence "x + y = 4" is true in a world where x is 2 and y is 2, but false in a world where x is 1 and y is 1.[1] In standard logics, every sentence must be either true or false in each possible world; there is no "in between."[2]

When we need to be precise, we will use the term model in place of "possible world." (We will also use the phrase "m is a model of α" to mean that sentence α is true in model m.) Whereas possible worlds might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence. Informally, we may think of x and y as the number of men and women sitting at a table playing bridge, for example, and the sentence x + y = 4 is true when there are four in total; formally, the possible models are just all possible assignments of numbers to the variables x and y. Each such assignment fixes the truth of any sentence of arithmetic whose variables are x and y.

Now that we have a notion of truth, we are ready to talk about logical reasoning. This involves the relation of logical entailment between sentences: the idea that a sentence follows logically from another sentence. In mathematical notation, we write

    α |= β

to mean that the sentence α entails the sentence β. The formal definition of entailment is this: α |= β if and only if, in every model in which α is true, β is also true. Another way to say this is that if α is true, then β must also be true. Informally, the truth of β is "contained" in the truth of α. The relation of entailment is familiar from arithmetic; we are happy with the idea that the sentence x + y = 4 entails the sentence 4 = x + y. Obviously, in any model where x + y = 4 (such as the model in which x is 2 and y is 2), it is the case that 4 = x + y. We will see shortly that a knowledge base can be considered a statement, and we often talk of a knowledge base entailing a sentence.

[1] The reader will no doubt have noticed the similarity between the notion of truth of sentences and the notion of satisfaction of constraints in Chapter 5. This is no accident: constraint languages are indeed logics, and constraint solving is a form of logical reasoning.
[2] Fuzzy logic, discussed in Chapter 14, allows for degrees of truth.
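A compact way to restate the definition of entailment (our paraphrase, not the text's wording) uses the set of models of a sentence: writing M(s) for the set of all models in which s is true,

    α |= β   if and only if   M(α) ⊆ M(β).

That is, β is entailed exactly when every model of α is also a model of β.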
We can apply the same kind of analysis to the wumpus-world reasoning example given in the preceding section. Consider the situation in Figure 7.3(b): the agent has detected nothing in [1,1] and a breeze in [2,1]. These percepts, combined with the agent's knowledge of the rules of the wumpus world (the PEAS description on page 197), constitute the KB. The agent is interested (among other things) in whether the adjacent squares [1,2], [2,2], and [3,1] contain pits. Each of the three squares might or might not contain a pit, so (for the purposes of this example) there are 2³ = 8 possible models. These are shown in Figure 7.5.[3]

Figure 7.5 Possible models for the presence of pits in squares [1,2], [2,2], and [3,1], given observations of nothing in [1,1] and a breeze in [2,1]. (a) Models of the knowledge base and α1 (no pit in [1,2]). (b) Models of the knowledge base and α2 (no pit in [2,2]).

The KB is false in models that contradict what the agent knows; for example, the KB is false in any model in which [1,2] contains a pit, because there is no breeze in [1,1]. There are in fact just three models in which the KB is true, and these are shown as a subset of the models in Figure 7.5. Now let us consider two possible conclusions:

    α1 = "There is no pit in [1,2]."
    α2 = "There is no pit in [2,2]."

We have marked the models of α1 and α2 in Figures 7.5(a) and 7.5(b) respectively. By inspection, we see the following:

    in every model in which KB is true, α1 is also true.

Hence, KB |= α1: there is no pit in [1,2]. We can also see that

    in some models in which KB is true, α2 is false.

Hence, KB ⊭ α2: the agent cannot conclude that there is no pit in [2,2]. (Nor can it conclude that there is a pit in [2,2].)[4]

The preceding example not only illustrates entailment, but also shows how the definition of entailment can be applied to derive conclusions, that is, to carry out logical inference. The inference algorithm illustrated in Figure 7.5 is called model checking, because it enumerates all possible models to check that α is true in all models in which KB is true.

[3] Although the figure shows the models as partial wumpus worlds, they are really nothing more than assignments of true and false to the sentences "there is a pit in [1,2]" etc. Models, in the mathematical sense, do not need to have 'orrible 'airy wumpuses in them.
[4] The agent can calculate the probability that there is a pit in [2,2]; Chapter 13 shows how.
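The model-checking procedure just described is easy to reproduce in a few lines of code. The Python sketch below is our own illustration (the variable names and the encoding of the KB are assumptions, not from the text): it enumerates the eight assignments of pits to [1,2], [2,2], and [3,1], keeps the models consistent with the KB (a breeze in [2,1], no breeze in [1,1]), and then tests whether α1 and α2 hold in all of them.

from itertools import product

SQUARES = [(1, 2), (2, 2), (3, 1)]   # the three squares whose pit status is unknown

def breeze_at(square, pits):
    """A square is breezy iff some directly adjacent square holds a pit."""
    x, y = square
    neighbors = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    return any(p in neighbors for p in pits)

def kb_holds(pits):
    """The KB of the example: no breeze perceived in [1,1], a breeze perceived in [2,1]."""
    return (not breeze_at((1, 1), pits)) and breeze_at((2, 1), pits)

def alpha1(pits):
    return (1, 2) not in pits   # "There is no pit in [1,2]."

def alpha2(pits):
    return (2, 2) not in pits   # "There is no pit in [2,2]."

# Enumerate all 2**3 = 8 models: each model is the set of squares that contain a pit.
models = [frozenset(sq for sq, has_pit in zip(SQUARES, bits) if has_pit)
          for bits in product([False, True], repeat=3)]
kb_models = [m for m in models if kb_holds(m)]

print(len(kb_models))                        # 3 models satisfy the KB
print(all(alpha1(m) for m in kb_models))     # True:  KB entails alpha1
print(all(alpha2(m) for m in kb_models))     # False: KB does not entail alpha2

Restricting attention to these three squares mirrors the figure; a full model checker would enumerate assignments for every relevant symbol in exactly the same way.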
In understanding entailment and inference, it might help to think of the set of all consequences of KB as a haystack and of α as a needle. Entailment is like the needle being in the haystack; inference is like finding it. This distinction is embodied in some formal notation: if an inference algorithm i can derive α from KB, we write

    KB ⊢i α,

which is pronounced "α is derived from KB by i" or "i derives α from KB."

An inference algorithm that derives only entailed sentences is called sound or truth-preserving. Soundness is a highly desirable property. An unsound inference procedure essentially makes things up as it goes along; it announces the discovery of nonexistent needles. It is easy to see that model checking, when it is applicable,[5] is a sound procedure.

The property of completeness is also desirable: an inference algorithm is complete if it can derive any sentence that is entailed. For real haystacks, which are finite in extent, it seems obvious that a systematic examination can always decide whether the needle is in the haystack. For many knowledge bases, however, the haystack of consequences is infinite, and completeness becomes an important issue.[6] Fortunately, there are complete inference procedures for logics that are sufficiently expressive to handle many knowledge bases.

We have described a reasoning process whose conclusions are guaranteed to be true in any world in which the premises are true; in particular, if KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world. So, while an inference process operates on "syntax" (internal physical configurations such as bits in registers or patterns of electrical blips in brains), the process corresponds to the real-world relationship whereby some aspect of the real world is the case[7] by virtue of other aspects of the real world being the case. This correspondence between world and representation is illustrated in Figure 7.6.

Figure 7.6 Sentences are physical configurations of the agent, and reasoning is a process of constructing new physical configurations from old ones. Logical reasoning should ensure that the new configurations represent aspects of the world that actually follow from the aspects that the old configurations represent.

[5] Model checking works if the space of models is finite; for example, in wumpus worlds of fixed size. For arithmetic, on the other hand, the space of models is infinite: even if we restrict ourselves to the integers, there are infinitely many pairs of values for x and y in the sentence x + y = 4.
[6] Compare with the case of infinite search spaces in Chapter 3, where depth-first search is not complete.
[7] As Wittgenstein (1922) put it in his famous Tractatus: "The world is everything that is the case."