Eco514—Game Theory
Lecture 6: Interactive Epistemology (1)
Marciano Siniscalchi
October 5, 1999

Introduction

This lecture focuses on the interpretation of solution concepts for normal-form games. You will recall that, when we introduced Nash equilibrium and Rationalizability, we mentioned numerous reasons why these solution concepts could be regarded as yielding plausible restrictions on rational play, or perhaps providing a consistency check for our predictions about it.

However, in doing so, we had to appeal to intuition, by and large. Even a simple assumption such as "Player 1 believes that Player 2 is rational" involves objects that are not part of the standard description of a game with complete information. In particular, recall that Bayesian rationality is a condition which relates behavior and beliefs: a player is "rational" if and only if she chooses an action which is a best reply given her beliefs. But then, to say that Player 1 believes that Player 2 is rational implies that Player 1 holds a conjecture on both Player 2's actions and her beliefs.

The standard model for games with complete information does not contain enough structure for us to formalize this sort of assumption. Players' beliefs are probability distributions on their opponents' action profiles. But, of course, the model we have developed (following Harsanyi) for games with payoff uncertainty does allow us to generate beliefs about beliefs, and indeed infinite hierarchies of mutual beliefs.

The objective of this lecture is to present a model of interactive beliefs based on Harsanyi's ideas, with minimal modifications to our setting for games with payoff uncertainty. We shall then begin our investigation of "interactive epistemology" in normal-form games by looking at correlated equilibrium.
The basic idea

Recall that, in order to represent payoff uncertainty, we introduced a set Ω of states of the world, and made the players' payoff functions depend on the realization ω ∈ Ω, as well as on the profile (a_i)_{i∈N} ∈ ∏_{i∈N} A_i of actions chosen by the players.

This allowed us to represent hierarchical beliefs about the state of the world; however, we are still unable to describe hierarchical beliefs about actions (at least not without introducing additional information, such as a specification of equilibrium actions for each type of each player).

Thus, a natural extension suggests itself. For simplicity, I will consider games without payoff uncertainty, but the extension should be obvious.

Definition 1 Consider a simultaneous game G = (N, (A_i, u_i)_{i∈N}) (without payoff uncertainty). A frame for G is a tuple F = (Ω, (T_i, a_i)_{i∈N}) such that, for every player i ∈ N, T_i is a partition of Ω, and a_i is a map a_i : Ω → A_i such that

a_i^{-1}(a_i) ≠ ∅ ⇒ a_i^{-1}(a_i) ∈ T_i.

Continue to denote by t_i(ω) the cell of the partition T_i containing ω. Finally, a model for G is a tuple M = (F, (p_i)_{i∈N}), where F is a frame for G and each p_i is a probability distribution on Ω, i.e. an element of ∆(Ω).

I distinguish between frames and models to emphasize that probabilistic beliefs convey additional information, which we wish to relate to solution concepts. The distinction is also often made in the literature.

The main innovation is the introduction of the functions a_i(·). This is not so far-fetched: after all, uncertainty about opponents' actions is clearly a form of payoff uncertainty, one that arises in any strategic situation. However, by making players' choices part of the state of the world, it is possible to discuss the players' hierarchical beliefs about them. Ultimately, we wish to relate solution concepts to precisely such assumptions.

There is one technical issue which deserves to be noted.
We are assuming that actions are "measurable with respect to types," to use a conventional expression; that is, whenever ω, ω′ ∈ t_i ∈ T_i, the action chosen by Player i at ω has to be the same as the action she chooses at ω′. This is natural: after all, in any given state, a player only knows her type, so it would be impossible for her to implement a contingent action plan which specifies different choices at different states consistent with her type. Our definition of a frame captures this.

Putting the model to work

Let us consider one concrete example to fix ideas. Figure 1 exhibits a game and a model for it.
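In code, a frame is just a state space, a type partition per player, and an action function whose nonempty preimages are partition cells. The following minimal Python sketch (names are illustrative, and the data anticipates the Figure 1 model below) checks the measurability requirement of Definition 1.

```python
from fractions import Fraction

# States of the world (the model of Figure 1)
OMEGA = ["w1", "w2", "w3"]

# Type partitions: each T_i is a list of cells (frozensets of states)
T1 = [frozenset({"w1", "w2"}), frozenset({"w3"})]
T2 = [frozenset({"w1"}), frozenset({"w2", "w3"})]

# Action functions a_i : Omega -> A_i
a1 = {"w1": "T", "w2": "T", "w3": "B"}
a2 = {"w1": "R", "w2": "L", "w3": "L"}

def satisfies_def1(action_fn, partition):
    """Definition 1: each nonempty preimage a_i^{-1}(a_i) is a cell of T_i."""
    for act in set(action_fn.values()):
        preimage = frozenset(w for w, v in action_fn.items() if v == act)
        if preimage not in partition:
            return False
    return True

def cell_of(partition, w):
    """t_i(w): the cell of the partition containing w."""
    return next(cell for cell in partition if w in cell)

# A model adds, for each player, a prior p_i in Delta(Omega)
p1 = {"w1": Fraction(0), "w2": Fraction(1, 2), "w3": Fraction(1, 2)}
p2 = {"w1": Fraction(2, 5), "w2": Fraction(1, 2), "w3": Fraction(1, 10)}

assert satisfies_def1(a1, T1) and satisfies_def1(a2, T2)
```

Note that the check enforces exactly the condition in Definition 1; a function constant on cells but taking the same value on two distinct cells would fail it, since the preimage would then be a union of cells rather than a single cell.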
        L     R
  T    1,1   0,0
  B    0,0   1,1

  ω     t_1(ω)  a_1(ω)  p_1(ω)   t_2(ω)  a_2(ω)  p_2(ω)
  ω_1   t_1^1   T       0        t_2^1   R       0.4
  ω_2   t_1^1   T       0.5      t_2^2   L       0.5
  ω_3   t_1^2   B       0.5      t_2^2   L       0.1

Figure 1: A game and a model for it

The right-hand table includes all the information required by Definition 1. In particular, note that it implicitly defines the partitions T_i, i = 1, 2, via the possibility correspondences t_i : Ω ⇒ Ω.

As previously advertised, at each state ω ∈ Ω in a model, players' actions and beliefs are completely specified. For instance, at ω_1, the profile (T,R) is played, Player 1 is certain that Player 2 chooses L (note that this belief is incorrect), and Player 2 is certain that Player 1 chooses T (which is a correct belief). Thus, given their beliefs, Player 1 is rational (T is a best reply to L) and Player 2 is not (R is not a best reply to T).

Moreover, note that, at ω_2, Player 2 believes that the state is ω_2 (hence, that Player 1 chooses T) with probability 0.5/(0.5+0.1) = 5/6, and that it is ω_3 (hence, that Player 1 chooses B) with probability 1/6. At ω_2 Player 2 chooses L, which is her unique best reply given her beliefs.

Thus, we can also say that at ω_1 Player 1 assigns probability one to the event that the state is really ω_2, and hence that (i) Player 2's beliefs about Player 1's actions are given by (5/6 on T; 1/6 on B); and that (ii) Player 2 chooses L. Thus, at ω_1 Player 1 is "certain" that Player 2 is rational. Of course, note that at ω_1 Player 2 is really not rational!

We can push this quite a bit further. For instance, type t_2^2 of Player 2 assigns probability 1/6 to ω_3, a state in which Player 1 is not rational (she is certain that the state is ω_3, hence that Player 2 chooses L, but she plays B). Hence, at ω_1, Player 1 is "certain" that Player 2 assigns probability 1/6 to the "event" that she (i) believes that 2 chooses L, and (ii) plays B; hence, she is not rational. This is a statement involving three orders of beliefs.
It also corresponds to an incorrect belief: at ω_1, Player 2 is certain that Player 1 chooses T and is of type t_1^1, hence, that she is rational!

We are ready for formal definitions of "rationality" and "certainty." Recall that, given any belief α_{-i} ∈ ∆(A_{-i}) for Player i, r_i(α_{-i}) is the set of best replies for i given α_{-i}. First, a preliminary notion:

Definition 2 Fix a game G = (N, (A_i, u_i)_{i∈N}) and a model M = (Ω, (T_i, a_i, p_i)_{i∈N}) for G. The first-order beliefs function α_{-i} : Ω → ∆(A_{-i}) for Player i is defined by

∀ω ∈ Ω, a_{-i} ∈ A_{-i} :  α_{-i}(ω)(a_{-i}) = p_i({ω′ : ∀j ≠ i, a_j(ω′) = a_j} | t_i(ω))

That is, the probability of a profile a_{-i} ∈ A_{-i} is given by the (conditional) probability of all states where that profile is played. Notice that the function α_{-i}(·) is T_i-measurable, just like a_i(·). Also note that this is a belief about players j ≠ i, held by player i.
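Definition 2 is mechanical to compute in the Figure 1 model: condition p_i on the cell t_i(ω) and push the conditional forward through the opponent's action function. A minimal sketch (names are illustrative, not from the lecture):

```python
from fractions import Fraction

# The Figure 1 model
T1 = [frozenset({"w1", "w2"}), frozenset({"w3"})]
T2 = [frozenset({"w1"}), frozenset({"w2", "w3"})]
a1 = {"w1": "T", "w2": "T", "w3": "B"}
a2 = {"w1": "R", "w2": "L", "w3": "L"}
p1 = {"w1": Fraction(0), "w2": Fraction(1, 2), "w3": Fraction(1, 2)}
p2 = {"w1": Fraction(2, 5), "w2": Fraction(1, 2), "w3": Fraction(1, 10)}

def first_order_belief(p_i, partition_i, a_opp, w):
    """alpha_{-i}(w): distribution over the opponent's actions, obtained by
    conditioning p_i on the cell t_i(w). Zero-probability actions dropped."""
    cell = next(c for c in partition_i if w in c)
    total = sum(p_i[x] for x in cell)
    belief = {}
    for x in cell:
        belief[a_opp[x]] = belief.get(a_opp[x], Fraction(0)) + p_i[x] / total
    return {act: q for act, q in belief.items() if q > 0}

# Player 2 at w2: T with probability 0.5/(0.5+0.1) = 5/6, B with 1/6
assert first_order_belief(p2, T2, a1, "w2") == {"T": Fraction(5, 6),
                                                "B": Fraction(1, 6)}
# Player 1 at w1: certain the state is w2, hence that 2 plays L
assert first_order_belief(p1, T1, a2, "w1") == {"L": Fraction(1, 1)}
```

The two assertions reproduce the beliefs derived in the discussion of Figure 1.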
Definition 3 Fix a game G = (N, (A_i, u_i)_{i∈N}) and a model M = (Ω, (T_i, a_i, p_i)_{i∈N}) for G. A player i ∈ N is deemed rational at state ω ∈ Ω iff a_i(ω) ∈ r_i(α_{-i}(ω)). Define the event "Player i is rational" by

R_i = {ω ∈ Ω : a_i(ω) ∈ r_i(α_{-i}(ω))}

and the event "Every player is rational" by R = ∩_{i∈N} R_i.

This is quite straightforward. Finally, adapting the definition we gave last time:

Definition 4 Fix a game G = (N, (A_i, u_i)_{i∈N}) and a model M = (Ω, (T_i, a_i, p_i)_{i∈N}) for G. Player i's belief operator is the map B_i : 2^Ω → 2^Ω defined by

∀E ⊂ Ω :  B_i(E) = {ω ∈ Ω : p_i(E | t_i(ω)) = 1}.

Also define the event "Everybody is certain that E is true" by B(E) = ∩_{i∈N} B_i(E).

The following shorthand definitions are also convenient:

∀i ∈ N, q ∈ ∆(A_{-i}) : [α_{-i} = q] = {ω : α_{-i}(ω) = q},

which extends our previous notation, and

∀i ∈ N, ā_i ∈ A_i : [a_i = ā_i] = {ω : a_i(ω) = ā_i}.

We now have a rather powerful and concise language to describe strategic reasoning in games. For instance, the following relations summarize our discussion of Figure 1:

ω_1 ∈ B_1([a_2 = L]) ∩ B_2([a_1 = T]);

and also, more interestingly:

ω_1 ∈ R_1;  ω_2 ∈ R_2;  ω_1 ∈ B_1(R_2).

In fact:

ω_2 ∈ Ω \ B_2(R_1);  ω_1 ∈ B_1(Ω \ B_2(R_1)).

Notice that we are finally able to give formal content to statements such as "Player 1 is certain that Player 2 is rational". These correspond to events in a given model, which in turn represents well-defined hierarchies of beliefs.

I conclude by noting a few properties of belief operators.

Proposition 0.1 Fix a game G = (N, (A_i, u_i)_{i∈N}) and a model M = (Ω, (T_i, a_i, p_i)_{i∈N}) for G. Then, for every i ∈ N:
(1) t_i = B_i(t_i) for every t_i ∈ T_i;
(2) E ⊂ F implies B_i(E) ⊂ B_i(F);
(3) B_i(E ∩ F) = B_i(E) ∩ B_i(F);
(4) B_i(E) ⊂ B_i(B_i(E)) and Ω \ B_i(E) ⊂ B_i(Ω \ B_i(E));
(5) R_i ⊂ B_i(R_i).
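Both the rationality events R_i and the belief operator B_i are finite computations on the Figure 1 model, so the relations above can be verified directly. A sketch (illustrative names; the payoff function is the coordination game of Figure 1):

```python
from fractions import Fraction

OMEGA = ["w1", "w2", "w3"]
T = {1: [frozenset({"w1", "w2"}), frozenset({"w3"})],
     2: [frozenset({"w1"}), frozenset({"w2", "w3"})]}
a = {1: {"w1": "T", "w2": "T", "w3": "B"},
     2: {"w1": "R", "w2": "L", "w3": "L"}}
p = {1: {"w1": Fraction(0), "w2": Fraction(1, 2), "w3": Fraction(1, 2)},
     2: {"w1": Fraction(2, 5), "w2": Fraction(1, 2), "w3": Fraction(1, 10)}}
ACTIONS = {1: ["T", "B"], 2: ["L", "R"]}

def u(x, y):
    # Pure-coordination payoff of Figure 1 (the same for both players)
    return 1 if (x, y) in {("T", "L"), ("B", "R")} else 0

def cell_of(i, w):
    return next(c for c in T[i] if w in c)

def cond_prob(i, E, w):
    """p_i(E | t_i(w))."""
    cell = cell_of(i, w)
    total = sum(p[i][x] for x in cell)
    return sum(p[i][x] for x in cell if x in E) / total

def expected_payoff(i, act, w):
    """Expected payoff of playing `act`, given i's beliefs at w."""
    cell = cell_of(i, w)
    total = sum(p[i][x] for x in cell)
    if i == 1:
        return sum(p[i][x] * u(act, a[2][x]) for x in cell) / total
    return sum(p[i][x] * u(a[1][x], act) for x in cell) / total

def R_event(i):
    """R_i: states where a_i(w) is a best reply to alpha_{-i}(w)."""
    return {w for w in OMEGA
            if expected_payoff(i, a[i][w], w)
               == max(expected_payoff(i, x, w) for x in ACTIONS[i])}

def B(i, E):
    """B_i(E): states where i assigns conditional probability 1 to E."""
    return {w for w in OMEGA if cond_prob(i, E, w) == 1}

R1, R2 = R_event(1), R_event(2)
assert R1 == {"w1", "w2"} and R2 == {"w2", "w3"}
assert "w1" in B(1, R2)              # omega_1 in B_1(R_2)
assert "w2" not in B(2, R1)          # omega_2 in Omega \ B_2(R_1)
```

The last two assertions are exactly the displayed relations ω_1 ∈ B_1(R_2) and ω_2 ∈ Ω \ B_2(R_1).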
Correlated Equilibrium

As a first application of this formalism, I will provide a characterization of the notion of correlated equilibrium, due to R. Aumann.

I have already argued that the fact that players choose their actions independently of each other does not imply that beliefs should necessarily be stochastically independent (recall the "betting on coordination" game). Correlated equilibrium provides a way to allow for correlated beliefs that is consistent with the equilibrium approach.

Definition 5 Fix a game G = (N, (A_i, u_i)_{i∈N}). A correlated equilibrium of G is a probability distribution α ∈ ∆(A) such that, for every player i ∈ N, and every function d_i : A_i → A_i,

Σ_{(a_i,a_{-i})∈A} u_i(a_i, a_{-i}) α(a_i, a_{-i}) ≥ Σ_{(a_i,a_{-i})∈A} u_i(d_i(a_i), a_{-i}) α(a_i, a_{-i}).

The above is the standard definition of correlated equilibrium. However:

Proposition 0.2 Fix a game G = (N, (A_i, u_i)_{i∈N}) and a probability distribution α ∈ ∆(A). Then α is a correlated equilibrium of G iff, for any player i ∈ N and action a_i ∈ A_i such that α({a_i} × A_{-i}) > 0, and for all a′_i ∈ A_i,

Σ_{a_{-i}∈A_{-i}} u_i(a_i, a_{-i}) α(a_{-i} | a_i) ≥ Σ_{a_{-i}∈A_{-i}} u_i(a′_i, a_{-i}) α(a_{-i} | a_i),

where α(a_{-i} | a_i) = α({(a_i, a_{-i})} | {a_i} × A_{-i}).

Proof: Fix a player i ∈ N. Observe first that, for any function f : A_i → A_i,

Σ_{(a_i,a_{-i})∈A} u_i(f(a_i), a_{-i}) α(a_i, a_{-i}) = Σ_{a_i∈A_i} Σ_{a_{-i}∈A_{-i}} u_i(f(a_i), a_{-i}) α(a_i, a_{-i}) = Σ_{a_i : α({a_i}×A_{-i})>0} α({a_i} × A_{-i}) Σ_{a_{-i}∈A_{-i}} u_i(f(a_i), a_{-i}) α(a_{-i} | a_i).

Suppose first that there exist an action ā_i ∈ A_i with α({ā_i} × A_{-i}) > 0 and an action a′_i ∈ A_i such that

Σ_{a_{-i}∈A_{-i}} u_i(ā_i, a_{-i}) α(a_{-i} | ā_i) < Σ_{a_{-i}∈A_{-i}} u_i(a′_i, a_{-i}) α(a_{-i} | ā_i).

Then the function d_i defined by d_i(ā_i) = a′_i and d_i(a_i) = a_i for a_i ≠ ā_i violates the inequality in Definition 5. Conversely, if the conditional inequalities hold at every a_i with α({a_i} × A_{-i}) > 0, then for any d_i : A_i → A_i the ex-ante inequality in Definition 5 holds term by term in the decomposition above. In both cases, the claim follows from our initial observation.
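The two characterizations can be checked side by side on a small example. Below, a sketch testing both Definition 5 (no profitable deviation plan d_i) and the conditional inequalities of Proposition 0.2, for the distribution placing probability 1/2 on (T,L) and 1/2 on (B,R) in the coordination game of Figure 1; this particular distribution is my illustration, not the lecture's, and all names are illustrative.

```python
from fractions import Fraction
from itertools import product

A1, A2 = ["T", "B"], ["L", "R"]

def u(x, y):  # common payoff: coordinate on (T,L) or (B,R)
    return 1 if (x, y) in {("T", "L"), ("B", "R")} else 0

# Candidate correlated equilibrium: a public fair coin
alpha = {("T", "L"): Fraction(1, 2), ("B", "R"): Fraction(1, 2),
         ("T", "R"): Fraction(0), ("B", "L"): Fraction(0)}

def ex_ante_ok():
    """Definition 5: no deviation plan d_i : A_i -> A_i is profitable."""
    base = sum(alpha[(x, y)] * u(x, y) for x in A1 for y in A2)
    for d in product(A1, repeat=len(A1)):        # player 1's plans
        dev = dict(zip(A1, d))
        if sum(alpha[(x, y)] * u(dev[x], y) for x in A1 for y in A2) > base:
            return False
    for d in product(A2, repeat=len(A2)):        # player 2's plans
        dev = dict(zip(A2, d))
        if sum(alpha[(x, y)] * u(x, dev[y]) for x in A1 for y in A2) > base:
            return False
    return True

def conditional_ok():
    """Proposition 0.2: each recommended action with positive probability
    is a best reply to the conditional belief alpha(. | a_i)."""
    for x in A1:
        marg = sum(alpha[(x, y)] for y in A2)
        if marg == 0:
            continue
        def payoff1(x2):
            return sum(alpha[(x, y)] / marg * u(x2, y) for y in A2)
        if any(payoff1(x2) > payoff1(x) for x2 in A1):
            return False
    for y in A2:
        marg = sum(alpha[(x, y)] for x in A1)
        if marg == 0:
            continue
        def payoff2(y2):
            return sum(alpha[(x, y)] / marg * u(x, y2) for x in A1)
        if any(payoff2(y2) > payoff2(y) for y2 in A2):
            return False
    return True

assert ex_ante_ok() and conditional_ok()
```

As Proposition 0.2 asserts, the two checks agree; enumerating deviation plans costs |A_i|^{|A_i|} evaluations per player, while the conditional check is linear in |A_i|² per player.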
Proposition 0.2 draws a connection between the notions of Nash and correlated equilibrium: recall that, in the former, an action receives positive probability iff it is a best response to the equilibrium belief. Note also that, if α ∈ ∆(A) is an independent probability distribution, α(a_{-i} | a_i) = α_{-i}(a_{-i}), the marginal of α on A_{-i}, for all a_i ∈ A_i. Thus, every Nash equilibrium is a correlated equilibrium.

Moreover, Proposition 0.2 reinforces our interpretation of correlated equilibrium as an attempt to depart from independence of beliefs, while remaining firmly within an equilibrium setting.

The first step in the epistemic characterization of correlated equilibrium is actually prompted by a more basic question. The story most often heard to justify correlated equilibrium runs along the following lines: the players bring in an outside observer, who randomizes according to the distribution α and prescribes an action to each player. Definition 5 then essentially requires that the players find it profitable ex ante to follow the prescription rather than adopt any alternative prescription-contingent plan (i.e. "if the observer tells me to do X, I shall do Y instead"). Proposition 0.2 shows that this is equivalent to assuming that, upon receiving a prescription, players do not gain by deviating to any other action.

The basic question that should have occurred to you is whether a richer "communication structure" allows for more coordination opportunities, i.e. whether there exist expected payoff vectors which may be achieved using a richer structure, but may not be achieved when messages are limited to action prescriptions.

The answer to this question is actually negative, as follows from a simple application of the Revelation Principle. However, the point is that frames may also be used to define correlated equilibria in this extended sense.

Definition 6 Fix a game G = (N, (A_i, u_i)_{i∈N}).
An extended correlated equilibrium is a tuple (F, π) where:

(1) F = (Ω, (T_i, a_i)_{i∈N}) is a frame for G;

(2) π ∈ ∆(Ω) is a probability over Ω such that, for all i ∈ N and t_i ∈ T_i, π(t_i) > 0;

(3) for every player i ∈ N and t_i ∈ T_i,

Σ_{ω∈Ω} u_i(a_i(ω), (a_j(ω))_{j≠i}) π(ω | t_i) ≥ Σ_{ω∈Ω} u_i(a′_i, (a_j(ω))_{j≠i}) π(ω | t_i)

for all a′_i ∈ A_i.

The similarity between (3) and Proposition 0.2 should be obvious.

Note that the formal definition of a frame in an extended correlated equilibrium is as in Definition 1. However, the standard interpretation is different: the cells t_i ∈ T_i represent possible messages that the observer may send to Player i; since the action functions a_i(·) are
T_i-measurable by the definition of a frame, they represent message-contingent action plans; finally, Ω and π model an abstract, general randomizing (or "correlating") device. The idea is that, upon observing ω ∈ Ω, the outside observer sends the message t_i(ω) to every Player i ∈ N.

It is clear that every correlated equilibrium α according to Definition 5 can be interpreted as an extended correlated equilibrium as per Definition 6: let Ω = supp α, i.e. the set of action profiles that get played in equilibrium; then define type partitions indirectly, via the possibility correspondence, assuming that, at each state ω = (a_i, a_{-i}) ∈ Ω, Player i is told what her action must be:

t_i(a_i, a_{-i}) = {a_i} × {a′_{-i} : (a_i, a′_{-i}) ∈ supp α}.

Since t_i(a_i, a_{-i}) actually depends on a_i only, I denote this type by t_i^{a_i}. Finally, let a_i(a_i, a_{-i}) = a_i and π = α. With these definitions, note that, for every a_i ∈ A_i and every ω = (a_i, a_{-i}) ∈ t_i^{a_i}, a_i(ω) = a_i and π(ω | t_i^{a_i}) = α(a_{-i} | a_i). This implies that (3) in Definition 6 must hold.

By the Revelation Principle, the converse is also true. Intuitively, instead of sending the message t_i(ω) to Player i whenever ω is realized, the outside observer could simply instruct Player i to play a_i(ω). If it was unprofitable to deviate from a_i(ω) in the original messaging setting, then it must be unprofitable to do so in the simplified game as well.

Formally, given an extended correlated equilibrium (F, π), define

α(a) = π({ω : ∀i ∈ N, a_i(ω) = a_i});

now observe that, for any a_i ∈ A_i, α({a_i} × A_{-i}) > 0 iff there exists a (maximal) collection of types t_i^1, . . . , t_i^K ∈ T_i such that, at all states ω ∈ ∪_{k=1}^K t_i^k, a_i(ω) = a_i. Now (3) in Definition 6 implies that

Σ_{ω∈Ω} u_i(a_i, (a_j(ω))_{j≠i}) π(ω | ∪_{k=1}^K t_i^k) ≥ Σ_{ω∈Ω} u_i(a′_i, (a_j(ω))_{j≠i}) π(ω | ∪_{k=1}^K t_i^k)

for all a′_i ∈ A_i, because all types have positive probability.
We can clearly rewrite the above summations as follows:

Σ_{a_{-i}∈A_{-i}} u_i(a_i, a_{-i}) π(∩_{j≠i} [a_j = a_j] | ∪_{k=1}^K t_i^k) ≥ Σ_{a_{-i}∈A_{-i}} u_i(a′_i, a_{-i}) π(∩_{j≠i} [a_j = a_j] | ∪_{k=1}^K t_i^k),

and since π(∩_{j≠i} [a_j = a_j] | ∪_{k=1}^K t_i^k) = α(a_{-i} | a_i) by construction, α is a correlated equilibrium according to Proposition 0.2.
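The reduction α(a) = π({ω : ∀i ∈ N, a_i(ω) = a_i}) is easy to illustrate on a tiny extended correlated equilibrium: a public fair coin telling both players how to coordinate in the game of Figure 1 (this particular device is my example, not the lecture's; names are illustrative).

```python
from fractions import Fraction

# An extended correlated equilibrium: both players observe a fair coin
OMEGA = ["heads", "tails"]
pi = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
a1 = {"heads": "T", "tails": "B"}     # coordinate on (T,L) after heads,
a2 = {"heads": "L", "tails": "R"}     # on (B,R) after tails

# alpha(a) = pi({w : a(w) = a}): push pi forward through the action profile
alpha = {}
for w in OMEGA:
    profile = (a1[w], a2[w])
    alpha[profile] = alpha.get(profile, Fraction(0)) + pi[w]

assert alpha == {("T", "L"): Fraction(1, 2), ("B", "R"): Fraction(1, 2)}
```

The induced α is exactly the distribution checked against Definition 5 and Proposition 0.2 earlier, as the converse direction of the argument guarantees.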
The bottom line is that we can actually take Definition 6 as our basic notion of correlated equilibrium (OR does this, for example): there is no added generality.

But then we get an epistemic characterization of correlated equilibrium almost for free. Observe that Condition (3) in Definition 6 implies that, at each state ω ∈ t_i, Player i's action a_i(ω) is a best response to her first-order beliefs, given by α^π_{-i}(ω)(a_{-i}) = π({ω′ : ∀j ≠ i, a_j(ω′) = a_j} | t_i(ω)). Hence, if we reinterpret an extended correlated equilibrium (F, π) as a model M = (F, (p_i)_{i∈N}) in which p_i = π for all i ∈ N, we get:

Proposition 0.3 Fix a game G = (N, (A_i, u_i)_{i∈N}) and a model M = (Ω, (T_i, a_i, p_i)_{i∈N}) for G. If there exists π ∈ ∆(Ω) such that p_i = π for all i ∈ N, and R = Ω, then ((Ω, (T_i, a_i)_{i∈N}), π) is an extended correlated equilibrium of G.

Conversely, if α is a correlated equilibrium of G, there exists a model M = (Ω, (T_i, a_i, p_i)_{i∈N}) for G in which Ω = R.

The model alluded to in the second part of the Proposition is of course the one constructed above. Observe that in that model α is indeed a common prior.

You may feel a bit cheated by this result. After all, it seems all we have done is change our interpretation of the relevant objects. This is of course entirely correct! It is certainly the case that Proposition 0.3 characterizes correlated equilibrium beliefs. More precisely, a distribution over action profiles is a correlated equilibrium belief if it is a common prior in a model with the feature that every player is rational at every state.

As I have mentioned several times, this may or may not have any behavioral implications, but at least Proposition 0.3 provides an arguably more palatable rationale for correlated equilibrium beliefs than the "outside observer" story.
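The first direction of Proposition 0.3 is a finite check: given a frame and a common prior, verify that every player is rational at every state, i.e. R = Ω. A sketch on the public-coin model for the Figure 1 game (an assumed example with illustrative names):

```python
from fractions import Fraction

# Common-prior model: both players observe a fair coin and coordinate
OMEGA = ["heads", "tails"]
pi = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}   # common prior
T = {1: [frozenset({"heads"}), frozenset({"tails"})],
     2: [frozenset({"heads"}), frozenset({"tails"})]}
a = {1: {"heads": "T", "tails": "B"},
     2: {"heads": "L", "tails": "R"}}
ACTIONS = {1: ["T", "B"], 2: ["L", "R"]}

def u(x, y):
    # Pure-coordination payoff of Figure 1 (the same for both players)
    return 1 if (x, y) in {("T", "L"), ("B", "R")} else 0

def rational_everywhere(i):
    """Is player i rational at every state, i.e. R_i = Omega?"""
    opp = 2 if i == 1 else 1
    for w in OMEGA:
        cell = next(c for c in T[i] if w in c)
        total = sum(pi[x] for x in cell)
        def ev(act):
            if i == 1:
                return sum(pi[x] / total * u(act, a[opp][x]) for x in cell)
            return sum(pi[x] / total * u(a[opp][x], act) for x in cell)
        if ev(a[i][w]) < max(ev(x) for x in ACTIONS[i]):
            return False
    return True

# R = Omega, so by Proposition 0.3 this frame together with pi is an
# extended correlated equilibrium
assert rational_everywhere(1) and rational_everywhere(2)
```

By contrast, the Figure 1 model itself fails both hypotheses of the proposition: the priors p_1 and p_2 differ, and Player 2 is not rational at ω_1.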