events c = 0 and c = 1/2. Denote by π the probability of the former event; then, since Firm 2's best reply is BR2(πq1,0 + (1 − π)q1,1/2, 1/2), if we are given the information that q2 is a best reply, we can "reverse-engineer" the value of π (in this case, it is unique). In terms of our model, we could let Ω = {0, 1/2}, T1 = {{0}, {1/2}}, T2 = {Ω} and p2(0) = π. It is easy to see that specifying p1 is not relevant for the purposes of Definition 2: what matters there are the beliefs conditional on each t1 ∈ T1, but these will obviously be degenerate.

Let us think about the value of the parameter π (or, equivalently, of the probability p2). We have just noted that, given the equilibrium outcome, it is possible to derive Firm 2's conjecture π. Hence, since Firm 1 expects Firm 2 to play q2 and (according to the equilibrium assumption) believes that Firm 2 expects Firm 1 to play q1,c for each value of the cost parameter c, it follows that Firm 1 can infer that Firm 2's assessment of the probability that c = 0 is π. In other words, implicit in the specific equilibrium we are looking at is an assumption about Firm 1's beliefs regarding Firm 2's beliefs. It is easy to see that we can continue in this fashion to build a whole hierarchy of interactive beliefs.
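The reverse-engineering step can be made concrete with a small numerical sketch. All parameters below are assumptions for illustration only, not taken from the text: a linear inverse demand P = 1 − Q, Firm 2's marginal cost equal to 1/2, and hypothetical values for Firm 1's cost-contingent outputs and for π.

```python
# Hypothetical Cournot setup: inverse demand P = 1 - Q, Firm 2's marginal
# cost c2 = 1/2. These parameters are assumptions for illustration.

def br2(expected_q1, c2=0.5):
    """Firm 2's Cournot best reply to its expected value of Firm 1's output."""
    return max(0.0, (1.0 - c2 - expected_q1) / 2.0)

def recover_pi(q2, q1_c0, q1_chalf):
    """Reverse-engineer pi from q2 = BR2(pi*q1_c0 + (1-pi)*q1_chalf)."""
    # Invert q2 = (1 - c2 - qbar)/2 to get the expected quantity qbar ...
    qbar = 1.0 - 0.5 - 2.0 * q2
    # ... then solve pi*q1_c0 + (1-pi)*q1_chalf = qbar for pi
    # (unique as long as q1_c0 != q1_chalf).
    return (q1_chalf - qbar) / (q1_chalf - q1_c0)

# Pick pi = 0.3 and two cost-contingent outputs for Firm 1 (hypothetical),
# then check that observing q2 alone pins down pi.
q1_c0, q1_chalf, pi_true = 0.4, 0.2, 0.3
q2 = br2(pi_true * q1_c0 + (1 - pi_true) * q1_chalf)
print(round(recover_pi(q2, q1_c0, q1_chalf), 6))  # 0.3
```

The uniqueness claimed in the text shows up here as the invertibility of the affine map π ↦ BR2(πq1,0 + (1 − π)q1,1/2): whenever the two cost-contingent outputs differ, exactly one π rationalizes the observed q2.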
In terms of our model, note that Player 2's type partition is degenerate. As a consequence, at any state ω ∈ Ω, Player 2's conditional beliefs about Ω, p2(·|t2(ω)), are the same: they are given by the unconditional probability p2. We can represent this observation formally by defining, for any probability measure q ∈ ∆(Ω) and player i ∈ N, an event [q]i := {ω : pi(·|ti(ω)) = q}. Then, in this game, [p2]2 = Ω. By way of comparison, it is easy to see that it cannot be the case that [p1]1 = Ω (why?), regardless of how we specify p1.

Now take the point of view of Player 1. Since [p2]2 = Ω, it is trivially true that p1([p2]2|t1(ω)) = 1 ∀ω ∈ Ω; in words, at any state ω, Player 1 is certain that Player 2's beliefs are given by p2 (i.e. by π). By the exact same argument, at any state ω, Player 2 is certain that Player 1 is certain that Player 2's beliefs are given by p2, and so on and so forth.

I wish to draw your attention to two key conclusions that can be drawn from this analysis.

1. The standard model of games with payoff uncertainty is capable of generating infinite hierarchies of interactive beliefs about the underlying state of the world. Indeed, this was precisely Harsanyi's original objective: he realized that, in the presence of payoff uncertainty, the players' strategic reasoning necessarily involves this sort of infinite regress, and devised a very clever way to generate this information in a compact, manageable way.
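The bookkeeping behind the events [q]i can be checked mechanically. The sketch below encodes the type space of the text, Ω = {0, 1/2}, T1 = {{0}, {1/2}}, T2 = {Ω}; the specific values π = 1/3 and a uniform p1 are assumptions chosen purely for illustration.

```python
from fractions import Fraction

# The finite type space from the text: Omega = {0, 1/2},
# T1 = {{0},{1/2}} (Firm 1 knows its cost), T2 = {Omega}.
OMEGA = (Fraction(0), Fraction(1, 2))

def t1(w):
    return (w,)       # Player 1's cell at w: the singleton {w}

def t2(w):
    return OMEGA      # Player 2's cell at w: all of Omega

pi = Fraction(1, 3)   # hypothetical value of p2(0)
p2_prior = {Fraction(0): pi, Fraction(1, 2): 1 - pi}
p1_prior = {Fraction(0): Fraction(1, 2), Fraction(1, 2): Fraction(1, 2)}  # arbitrary

def cond(prior, cell):
    """Conditional beliefs on Omega given a cell of a type partition."""
    mass = sum(prior[w] for w in cell)
    return {w: (prior[w] / mass if w in cell else Fraction(0)) for w in OMEGA}

def event(prior, cell_of, q):
    """[q]_i: the states at which player i's conditional beliefs equal q."""
    return tuple(w for w in OMEGA if cond(prior, cell_of(w)) == q)

# [p2]_2 = Omega: Player 2's beliefs coincide with p2 at every state.
print(event(p2_prior, t2, p2_prior) == OMEGA)  # True

# Hence p1([p2]_2 | t1(w)) = 1 at every w, regardless of p1, because
# each of Player 1's cells is contained in [p2]_2.
print(all(set(t1(w)) <= set(event(p2_prior, t2, p2_prior)) for w in OMEGA))  # True

# By contrast, [p1]_1 can never be all of Omega: Player 1's conditional
# beliefs are degenerate on its own cell, hence differ across states.
print(len(event(p1_prior, t1, cond(p1_prior, t1(OMEGA[0])))))  # 1
```

The second check also explains why specifying p1 was irrelevant above: certainty of [p2]2 follows from set inclusion of the partition cells alone, which is exactly the mechanism that lets the hierarchy of beliefs continue indefinitely.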