Correlated Rationalizability and Iterated Dominance

Having disposed of all required technicalities, let us go back to our “informal equation”:

Rational Behavior + Assumptions about Beliefs = Solution Concepts.

Fix a game G = (N, (Ai, Ti, ui)i∈N) and assume that each (Ai, Ti) is a compact metrizable space (you can also assume that each Ai is finite, as far as the substantive arguments are concerned) and that payoff functions are continuous.

We begin by asking: what is a plausible outcome of the game, if we are willing to assume that players are (Bayesian) rational? The answer should be clear: we can expect an action profile (ai)i∈N to be chosen if and only if, for every player i ∈ N, ai ∈ ri(α−i) for some α−i ∈ ∆(A−i, B(T−i)).

That is: if we think that players are “good Bayesians,” we should expect them to formulate a belief about their opponents’ choices, and play a best reply given that belief. Equivalently, we should expect a rational player to choose an action only if that action is justified by some belief.
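To make the best-reply condition concrete in a finite example, here is a minimal Python sketch (our own illustration, not part of the notes) for Matching Pennies. It scans a grid of beliefs p = Pr(opponent plays H) and records which of the Row player's actions are a best reply to at least one belief. The function name justifiable_actions and the payoff matrix are ours, and the grid search merely approximates the exact linear-programming test, which is enough for a 2x2 illustration.

```python
import numpy as np

# Row player's payoffs in Matching Pennies (she wants to match):
# rows = her action (0 = H, 1 = T), columns = opponent's action (H, T).
U_row = np.array([[ 1, -1],
                  [-1,  1]])

def justifiable_actions(U, grid=101):
    """Actions that are a best reply to SOME belief about the opponent,
    checked on a grid of beliefs p = Pr(opponent plays column 0)."""
    justified = set()
    for p in np.linspace(0.0, 1.0, grid):
        belief = np.array([p, 1.0 - p])
        expected = U @ belief                 # expected payoff of each action
        best = np.flatnonzero(np.isclose(expected, expected.max()))
        justified.update(int(b) for b in best)
    return sorted(justified)

print(justifiable_actions(U_row))   # [0, 1]: both H and T are justifiable
```

As the snippet confirms, Bayesian rationality alone rules out nothing here: each action is optimal against some belief (H against p > 1/2, T against p < 1/2).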
This exhausts all the implications of Bayesian rationality: in particular, we do not postulate any relationship between the beliefs justifying the actions in any given profile.

In very simple games, Bayesian rationality is sufficient to isolate a unique solution: for instance, this is true in the Prisoner’s Dilemma. However, in Matching Pennies, the Row player (who wants to match) may choose H because she expects the Column player to also choose H, whereas the Column player (who wants to avoid matches) actually chooses T because he (correctly, it turns out) expects the Row player to choose H. Indeed, it is easy to see that any action profile is consistent with Bayesian rationality in Matching Pennies.

However, one might reason as follows: if I, the theorist, am able to rule out certain actions because they are never best replies, perhaps the players will be able to do so, too. But, if so, it seems reasonable to assume that their beliefs should assign zero probability to the collection of eliminated actions. This entails a further restriction on the collection of actions that they might choose.

That is, in addition to assuming that players are Bayesian rational, we may be willing to assume that they believe that their opponents are also Bayesian rational. This is precisely the type of assumption about beliefs referred to in our “informal equation.” It is also a very powerful idea.
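To make the iterative logic concrete, here is a hedged Python sketch (again our own construction, not part of the notes) of iterated elimination in a finite two-player game. It relies on the standard equivalence, for possibly correlated beliefs, between “never a best reply” and “strictly dominated by a mixed strategy,” and tests dominance with a small linear program via scipy; the function names and the numerical tolerance are our own choices.

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, a, own, opp):
    """Is action a (a row of U) strictly dominated by a mixed strategy
    over the other surviving own actions, given that the opponent is
    restricted to the surviving actions in opp?"""
    others = [b for b in own if b != a]
    if not others:
        return False
    n = len(others)
    # Variables: mixing weights sigma_b (b in others) and a margin eps.
    # Maximize eps subject to  sum_b sigma_b U[b, s] >= U[a, s] + eps
    # for every surviving opponent action s; dominated iff eps > 0.
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # linprog minimizes, so -eps
    A_ub = np.zeros((len(opp), n + 1))
    b_ub = np.zeros(len(opp))
    for row, s in enumerate(opp):
        A_ub[row, :n] = -U[others, s]
        A_ub[row, -1] = 1.0
        b_ub[row] = -U[a, s]
    A_eq = np.zeros((1, n + 1))
    A_eq[0, :n] = 1.0                             # weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)])
    return res.success and -res.fun > 1e-9

def iterated_elimination(U1, U2):
    """Iteratively delete strictly dominated actions for both players.
    Each Ui has the player's own actions on its rows and the other
    player's actions on its columns."""
    A1 = list(range(U1.shape[0]))
    A2 = list(range(U2.shape[0]))
    while True:
        keep1 = [a for a in A1 if not strictly_dominated(U1, a, A1, A2)]
        keep2 = [a for a in A2 if not strictly_dominated(U2, a, A2, A1)]
        if keep1 == A1 and keep2 == A2:
            return A1, A2
        A1, A2 = keep1, keep2

# Prisoner's Dilemma (action 0 = Cooperate, 1 = Defect): only Defect survives.
PD = np.array([[2.0, 0.0],
               [3.0, 1.0]])
print(iterated_elimination(PD, PD))   # ([1], [1])
# In Matching Pennies nothing is ever eliminated, matching the text.
```

The fixed point of this procedure is what the section's title refers to: iterating the deletion step captures successively stronger assumptions of the form “players believe their opponents are rational, believe their opponents believe this, and so on.”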