
Eco514—Game Theory

The Trembling Hand: Normal-Form Analysis and Extensive-Form Implications

Marciano Siniscalchi

January 10, 2000

Introduction: Invariance

In their seminal contribution, Von Neumann and Morgenstern argue that the normal form of a game contains all “strategically relevant” information. This view, note well, does not invalidate or trivialize extensive-form analysis; rather, it leads those who embrace it to be suspicious of extensive-form solution concepts which yield different predictions in distinct extensive games sharing the same normal (or reduced normal) form. Solution concepts which do not yield different predictions for such games are called invariant.

The supposed “strategic sufficiency of the normal form” also motivated the search for normal-form solution concepts which exhibit “nice” properties in every extensive form associated with a given normal-form game. The main proponent of this line of research is J.F. Mertens.

In my opinion, whether or not the normal form contains all “strategically relevant” information depends crucially on the solution concept one wishes to apply. This is actually a rather trivial point, but I am afraid it was overlooked in the debate on the sufficiency of the normal form. For instance, in order to compute the minmax value of a game, one only needs to look at strategies and the payoffs associated with strategy profiles; the information conveyed by the extensive form of the game (if such information is at all provided) is irrelevant as far as the minmax value calculation is concerned. Since Von Neumann and Morgenstern were mostly concerned with minmax values, the normal form was indeed sufficient for their purposes. The argument readily extends to Nash equilibrium analysis.

However, as soon as one wishes to restrict attention to sequential equilibria, it is clear that the normal form is not sufficient to carry out the analysis. Quite simply, the formal notion of “normal-form game” does not include a specification of information sets!

This point is more subtle than it appears. You will remember that, given an extensive game Γ and its normal form G, we defined, for each information set I, the collection of strategy profiles which reach I, S(I).


Now, the latter is a normal-form object: it is simply a set of strategies. The key point is that, in order to define it, we used the information contained in the definition of the extensive game Γ: the set S(I) is not part of the formal description of the normal-form game G!

[Note for the interested reader: Mailath, Samuelson and Swinkels (1993) characterize the sets of strategy profiles which can correspond to some information set in some extensive game with a given normal form. Their characterization only relies on the properties of the normal-form payoff functions, and is thus purely “normal-form” in nature and inspiration. However, a more sophisticated version of the argument given above applies: even granting that a given normal form contains enough information about all potential information sets, extensive games derived from that game differ in the actual information sets that the players have to take into account in their strategic reasoning. Clearly, this is “strategically relevant” information!]

As usual, I am not going to ask you to subscribe to my point of view. And, in any case, even if one does not “believe” in the sufficiency of the normal form, it may still be interesting to investigate the extensive-form implications of normal-form solution concepts. This is what we shall do in these notes.

Before that, we review Thompson's and Dalkey's result concerning “inessential transformations” of extensive games; I refer you to OR for a formal treatment.

Inessential Transformations

As you will remember, Thompson and Dalkey propose four transformations of extensive games which, in their opinion, do not change the strategic problem faced by the players. As may be expected, these transformations are prima facie harmless, but, on closer inspection, at least some of them should not be accepted easily.

The result Thompson and Dalkey prove is striking: if game Γ1 can be mutated into game Γ2 by means of a sequence of inessential transformations, then Γ1 and Γ2 have the same (reduced) normal form. “Corollary”: the normal form contains all strategically relevant information!

Of course, the result is correct, but the “Corollary” is not a formal statement: we can only accept it if we accept the transformations proposed by Thompson and Dalkey as irrelevant.

Let me emphasize a few key points. First, you will recall that I introduced a fifth transformation, which entails replacing a non-terminal history where Chance moves, followed by terminal histories only, with a single terminal history; the corresponding payoffs are lotteries over the payoffs attached to the original terminal nodes, with probabilities given by the relevant Chance move probabilities. I mention this only for completeness: in my opinion, once we accept that players are Bayesian, this transformation is harmless.
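To see why, it helps to work through a tiny example, with numbers chosen purely for illustration (they appear nowhere in the original figures). Suppose that after some history h only Chance moves, choosing Left with probability 1/3, which leads to the terminal payoff vector (6, 0), and Right with probability 2/3, which leads to (0, 3). The fifth transformation deletes the Chance move and makes h itself a terminal history whose payoff is the lottery that yields (6, 0) with probability 1/3 and (0, 3) with probability 2/3. A Bayesian (expected-utility) player evaluates this lottery at its expectation, (1/3·6 + 2/3·0, 1/3·0 + 2/3·3) = (2, 2), so no player's ranking of strategy profiles can change: the transformation is indeed innocuous once expected-utility maximization is granted.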


These remarks are written under the assumption that you already know the formalities. If you don't, please review OR before proceeding.

Splitting-Coalescing. This transformation is rather harmless, in my opinion. It only becomes questionable if we assume that the “agents” of a given player assigned to distinct information sets really have a mind of their own. But since agents are a fiction to begin with, I feel comfortable with splitting/coalescing.

However, it must be noted that sequential equilibrium is not invariant to this transformation.

[Figure 1: Splitting matters. Player 1 chooses X, T or B; X ends the game with payoffs (2, 3). After T or B, Player 2, without observing which of the two was chosen, picks u or d. Payoffs (Player 1, Player 2): (T, u) = (1, 0), (T, d) = (4, 1), (B, u) = (0, 1), (B, d) = (3, 0).]

In the game of Figure 1, (X, u), together with the out-of-equilibrium belief µ({T, B})(B) = 1, is a sequential equilibrium: given the threat of u after T or B, 1 does well to choose X, and given the belief that 1 actually played B, 2 does well to play u. Note that µ is really unconstrained here.

However, suppose that 1's choice is split into two histories: first, at φ, 1 chooses between X and, say, Y; then, after (Y), 1 chooses between T and B. The history (Y, T) corresponds to (T) in the old game, (Y, B) corresponds to (B), and so on.

Now the unique equilibrium of the game is (YT, d): after Y, Player 1 faces a simultaneous-moves subgame in which T strictly dominates B. Hence, in any sequential equilibrium, β1(Y)(T) = 1. By consistency, this implies that µ({(Y, T), (Y, B)})((Y, T)) = 1, so that β2({(Y, T), (Y, B)})(d) = 1. But then β1(φ)(Y) = 1.

In my opinion, this example highlights a shortcoming of sequential equilibrium. However, note that, even in the original game, the equilibrium (X, u) fails a “reasonableness” test based on forward induction. Here's the idea: upon being reached, Player 2 must conclude that it is not the case that (1) Player 1 is rational, and (2) Player 1 expects Player 2 to choose u (for otherwise Player 1 would have chosen X); however, at least T would be a rational choice for 1, if 1 expected 2 to play d and not u (i.e., (1) may still be true, although (2) is false); on the other hand, B would never be a rational choice. Thus, if we assume (note well: “assume”) that Player 2 believes that Player 1 is rational as long as this is possible, we can conclude that Player 2 will interpret a deviation from X as a “signal” that (2) is false, but that (1) is still true; in particular, Player 2 will believe that 1 has played T. But then he will best-respond with d, which breaks the original equilibrium.
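Since every step of this argument is a finite best-reply check against the payoffs of Figure 1, it can be verified mechanically. Below is a minimal Python sketch of such a check; the dictionary encoding and the helper u2 are just one convenient representation, not anything from the original notes.

# Payoffs (Player 1, Player 2) read off Figure 1. Player 1 chooses X, T or B;
# after T or B, Player 2 chooses u or d without knowing which of the two occurred.
payoffs = {
    ("X", None): (2, 3),
    ("T", "u"): (1, 0), ("T", "d"): (4, 1),
    ("B", "u"): (0, 1), ("B", "d"): (3, 0),
}

def u2(belief_T, action):
    # Player 2's expected payoff at her information set, given Pr(1 played T).
    return belief_T * payoffs[("T", action)][1] + (1 - belief_T) * payoffs[("B", action)][1]

# With the out-of-equilibrium belief mu(B) = 1, u is sequentially rational for 2 ...
assert u2(0.0, "u") >= u2(0.0, "d")
# ... and against u, X is Player 1's best choice, so (X, u) is a sequential equilibrium.
assert payoffs[("X", None)][0] >= max(payoffs[("T", "u")][0], payoffs[("B", "u")][0])

# In the split game, T strictly dominates B for Player 1 in the subgame after Y ...
assert all(payoffs[("T", a)][0] > payoffs[("B", a)][0] for a in ("u", "d"))
# ... so consistency forces mu(T) = 1 and d becomes 2's unique best reply ...
assert u2(1.0, "d") > u2(1.0, "u")
# ... and Player 1 then prefers Y followed by T (payoff 4) to X (payoff 2).
assert payoffs[("T", "d")][0] > payoffs[("X", None)][0]
print("All best-reply checks for the splitting example pass.")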


Since we can break the (X, u) equilibrium without using a “transformation” of the game, I suggest that we accept the latter, and think about the possibility of refining away “unreasonable” sequential equilibria. We shall return to this point in the notes on forward induction.

Addition/Deletion of a Superfluous Move. In my opinion, this is the crucial transformation, one that I would certainly not call “inessential.” Recall that a move of Player i is superfluous if, roughly speaking, (i) it does not influence payoffs; (ii) Player i does not know that the move she is making is superfluous; (iii) no opponent of Player i observes the superfluous move. Thus, adding such a move sounds pretty harmless.

However, the mere existence of a superfluous move may change the strategic problem faced by Player i. Here is an extreme example: consider the Entry Deterrence game from Lecture 11 (Fig. 1 in the notes); now add two superfluous moves, labelled f and a, after the Entrant's choice N; both moves lead to the terminal payoff corresponding to N in the original game; finally, let (E) and (N) belong to the same information set of the Incumbent. You can check the definition in OR to verify that the moves f and a added after N are indeed superfluous.

The game one obtains is ostensibly the normal form of the Entry Deterrence game. Clearly, we do not consider the two games equivalent!

To put it differently, the transformation applied to the Entry Deterrence game has changed the nature of the strategic problem faced by the Incumbent to a dramatic extent. In the original formulation, if the Incumbent was called upon to move, he knew that the Entrant had entered; hence, he was certain that a was the better course of action. After the modification, however, the Incumbent does not observe the choice of the Entrant; thus, we can construct an equilibrium in which the Incumbent threatens to play f, and the Entrant stays out. [If we want to push the “entry” story a bit harder, we can even say that this threat is credible. If the Entrant chooses E, the Incumbent does not observe this, and continues to believe that, as specified by the equilibrium, the Entrant has actually chosen N. Since the Incumbent is indifferent between f and a after N, f is a best reply.]

This argument at least suggests that addition/deletion of a superfluous move may not be an “inessential” transformation.
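The Entry Deterrence payoffs are in the Lecture 11 notes and are not reproduced here, so the short Python sketch below uses stand-in numbers that respect the usual ordering (fighting hurts both parties, accommodated entry hurts the Incumbent only); any payoffs with that ordering tell the same story.

# Illustrative payoffs (Entrant, Incumbent); the actual numbers are in the Lecture 11
# notes, so these are stand-ins that respect the usual Entry Deterrence ordering.
stay_out    = (0, 2)     # Entrant plays N
enter_fight = (-1, -1)   # Entrant plays E, Incumbent fights (f)
enter_accom = (1, 1)     # Entrant plays E, Incumbent accommodates (a)

# Original game: the Incumbent observes entry, so after E he simply compares f and a.
assert enter_accom[1] > enter_fight[1]   # a is strictly better: the threat of f is empty

# Modified game: after N the Incumbent also moves (superfluously), and his information
# set pools (E) and (N). If, as the equilibrium prescribes, he believes N was played,
# both f and a yield the N payoff, so he is indifferent and f is a best reply.
belief_N = 1.0
def incumbent_payoff(action):
    after_E = enter_fight if action == "f" else enter_accom
    return belief_N * stay_out[1] + (1 - belief_N) * after_E[1]
assert incumbent_payoff("f") == incumbent_payoff("a")

# Given the threat f, the Entrant strictly prefers to stay out, so (N, f) is an equilibrium.
assert stay_out[0] > enter_fight[0]
print("(N, f) is sustained once the Incumbent no longer observes the Entrant's move.")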


Perfection and Properness

Perfection

The basic idea behind perfect equilibrium is that equilibria should be robust to small “trembles” of the opponents away from the predicted play. This idea applies equally well to the normal and the extensive form.

OR covers perfection and proves a key result: equilibria that are perfect in the extensive form can be extended to sequential equilibria. I will add alternative characterizations of perfection.

We begin by defining perturbations:

Definition 1 Fix a game G = (N, (A_i, u_i)_{i∈N}). A perturbation vector is a point η = (η_i(a_i))_{i∈N, a_i∈A_i} ∈ R_{++}^{Σ_{i∈N} |A_i|} such that, for all i ∈ N, Σ_{a_i∈A_i} η_i(a_i) < 1.

The perturbed game G(η) is then the game obtained from G by restricting each player i to mixed actions α_i with α_i(a_i) ≥ η_i(a_i) for every a_i ∈ A_i.

Definition 2 A profile α is a perfect equilibrium of G if there exist a sequence of perturbation vectors η^n → 0 and a sequence of profiles α^n → α such that, for each n, α^n is a Nash equilibrium of the perturbed game G(η^n).

Perfection can also be characterized via “almost best replies”; the relevant notion is the following.

Definition 3 Fix a game G = (N, (A_i, u_i)_{i∈N}) and ε > 0. An ε-perfect equilibrium of G is a profile α such that, for all i ∈ N and a_i ∈ A_i: (i) α_i(a_i) > 0 and (ii) a_i ∉ r_i(α_{-i}) ⇒ α_i(a_i) ≤ ε.

That is, in an ε-perfect equilibrium, actions that are not best replies receive “vanishingly small” probability. Note that an ε-perfect equilibrium need not be a Nash equilibrium.

I conclude with the main characterization result of this subsection.

Proposition 0.1 Fix a game G = (N, (A_i, u_i)_{i∈N}). Then the following statements are equivalent.

(i) α is a perfect equilibrium of G.


(ii) There exist sequences α^n → α and ε^n → 0 such that, for each n, α^n is an ε^n-perfect equilibrium of G.

(iii) There exists a sequence α^n → α such that: (a) for every n, i ∈ N and a_i ∈ A_i, α^n_i(a_i) > 0; (b) for every n, i ∈ N and a_i ∈ A_i such that α_i(a_i) > 0, a_i ∈ r_i(α^n_{-i}).

You will recognize that (iii) is the familiar characterization of perfect equilibria. Condition (b) states formally that α_i is a best reply to each α^n_{-i}.

I advise you to try and reconstruct the proof of this result from your notes.

Properness

In an ε-perfect equilibrium, informally speaking, “right choices” are infinitely more likely than mistakes. However, mistakes can be more or less costly: some mistakes entail a larger loss of utility compared with a best reply. Hence Myerson's idea: let us assume that more costly mistakes are infinitely less likely. We are led to:

Definition 4 Fix a game G = (N, (A_i, u_i)_{i∈N}) and ε > 0. An ε-proper equilibrium of G is a profile α such that, for all i ∈ N, (i) for every a_i ∈ A_i, α_i(a_i) > 0, and (ii) for every pair a_i, a'_i ∈ A_i, u_i(a_i, α_{-i}) < u_i(a'_i, α_{-i}) ⇒ α_i(a_i) ≤ ε·α_i(a'_i). A profile α is a proper equilibrium of G iff there exist sequences α^n → α and ε^n → 0 such that, for each n, α^n is an ε^n-proper equilibrium of G.

Clearly, every proper equilibrium is perfect, but not vice-versa.

The key result about proper equilibria is stated below:

Proposition 0.2 Let Γ be an extensive-form game and let G be its normal form. Then every proper equilibrium α of G can be extended to a sequential equilibrium of Γ.

Again, you should try to reconstruct the proof of this result from your class notes.

Observe that, by construction, proper equilibria are invariant to the addition or deletion of actions which yield payoff vectors that can be duplicated by existing actions. Thus, proper equilibria of a normal-form game are also proper equilibria of its reduced normal form.

Here, then, is the tie-in with our preceding discussion of invariance: every proper equilibrium of a reduced normal-form game G induces payoff-equivalent sequential equilibria in every extensive game having G as its (reduced) normal form. We have identified a normal-form solution concept which exhibits “nice” properties in every “extensive-form presentation” of a given game. Not bad!
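The ε-perfect and ε-proper conditions above are finite systems of inequalities, so they are easy to test numerically for a given profile. The Python sketch below does this for a two-player game described by a pair of payoff matrices; the function names, the 2×2 game and the tolerance are illustrative choices of mine, not material from the notes.

import numpy as np

def is_eps_perfect(U1, U2, p, q, eps, tol=1e-12):
    # p, q: totally mixed actions of the row and column player; U1, U2: payoff matrices.
    v1, v2 = U1 @ q, U2.T @ p              # payoff to each pure action against the opponent's mix
    ok1 = np.all(p > 0) and np.all(p[v1 < v1.max() - tol] <= eps)
    ok2 = np.all(q > 0) and np.all(q[v2 < v2.max() - tol] <= eps)
    return bool(ok1 and ok2)               # non-best replies get probability at most eps

def is_eps_proper(U1, U2, p, q, eps, tol=1e-12):
    # Myerson's condition: a costlier mistake is at most eps times as likely as a better action.
    v1, v2 = U1 @ q, U2.T @ p
    ok1 = np.all(p > 0) and all(p[a] <= eps * p[b]
                                for a in range(len(p)) for b in range(len(p))
                                if v1[a] < v1[b] - tol)
    ok2 = np.all(q > 0) and all(q[a] <= eps * q[b]
                                for a in range(len(q)) for b in range(len(q))
                                if v2[a] < v2[b] - tol)
    return bool(ok1 and ok2)

# Illustrative 2x2 game: both (T, L) and (B, R) are Nash equilibria, but only (T, L)
# survives trembles. Rows are T, B; columns are L, R.
U1 = np.array([[1.0, 0.0], [0.0, 0.0]])
U2 = np.array([[1.0, 0.0], [0.0, 0.0]])
eps = 0.01
near_TL = (np.array([1 - eps, eps]), np.array([1 - eps, eps]))
near_BR = (np.array([eps, 1 - eps]), np.array([eps, 1 - eps]))
print(is_eps_perfect(U1, U2, *near_TL, eps))   # True
print(is_eps_perfect(U1, U2, *near_BR, eps))   # False: (B, R) relies on never trembling
# A profile with weights proportional to (1, eps) also satisfies the proper condition:
prop = np.array([1.0, eps]) / (1.0 + eps)
print(is_eps_proper(U1, U2, prop, prop, eps))  # True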


For those of you who are (still!) interested, let me point out that, once we start eliminating duplicate actions from a game, it is relatively natural to think about eliminating actions that can be duplicated by a mixture of other actions. Unfortunately, proper equilibrium does not survive the addition/deletion of such “mixed-duplicated” actions. Correspondingly, it is not always possible to find a sequential equilibrium of an extensive game which survives the addition/deletion of a mixed-duplicated action: Kohlberg and Mertens have a beautiful example of this in their 1986 paper on strategic stability.
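To make the notion concrete: call an action “mixed-duplicated” if some mixture of the player's other actions gives every player exactly the same payoff against every profile of the opponents. Whether a given action is of this kind is a linear feasibility problem; the sketch below is my own illustration, using scipy's linear-programming routine purely as a feasibility check and a made-up three-action game.

import numpy as np
from scipy.optimize import linprog

def is_mixed_duplicate(payoffs, player, action):
    # payoffs: list with one array per player, each indexed by all players' actions.
    # Returns True if `action` of `player` gives every player the same payoff as some
    # mixture of `player`'s remaining actions, against every opponent profile.
    n_actions = payoffs[0].shape[player]
    others = [a for a in range(n_actions) if a != action]
    rows, rhs = [], []
    for U in payoffs:
        flat = np.moveaxis(U, player, 0).reshape(n_actions, -1)   # rows: own actions
        rows.append(flat[others].T)          # one equation per player and opponent profile
        rhs.append(flat[action])
    A_eq = np.vstack(rows + [np.ones((1, len(others)))])          # mixture weights sum to one
    b_eq = np.concatenate(rhs + [np.array([1.0])])
    res = linprog(np.zeros(len(others)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(others), method="highs")
    return res.success

# Player 1's third action is the 50-50 mixture of her first two (for both players' payoffs):
U1 = np.array([[4.0, 0.0], [0.0, 4.0], [2.0, 2.0]])
U2 = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(is_mixed_duplicate([U1, U2], player=0, action=2))   # True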
