Next, let us define simultaneous games.

Definition 2 A finite simultaneous game is a tuple (N, (Ai, ui)i∈N), where N is a finite set of players, and for each player i ∈ N, Ai is a finite set of actions and ui is a payoff function ui : Ai × A−i → R. (I use the conventional notation A−i = ∏j≠i Aj and A = ∏i∈N Ai.)

We could be more general and assume that each player is endowed with a preference relation over action profiles, but let's not worry about this right now.

The above is the "standard" definition of a game. As you may see, it is not quite consistent with the definition of a decision problem, so we need to reinterpret things a bit. Indeed, the most important difference is that a game implicitly defines not just one, but N decision problems (with a conventional abuse of notation, I will denote by N both the set N and its cardinality |N|).

Let's take the point of view of Player i. She has no control over her opponents' choices, and she will not find out what they were until the game is over. Thus, A−i seems a good candidate for the state space: Ω = A−i.
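To make Definition 2 concrete, here is a minimal sketch of a finite two-player game in Python. The action sets and payoff numbers are illustrative inventions (a Prisoner's-Dilemma-like table), not taken from the text; the point is only to show the objects N, (Ai), (ui), and Player i's state space Ω = A−i.

```python
from itertools import product

# Hypothetical two-player game: N, action sets A_i, payoffs u_i : A -> R.
# The payoff numbers below are made up for illustration.
N = [1, 2]
A = {1: ["T", "B"], 2: ["L", "R"]}  # A_i for each player i

# u_i is defined on full action profiles (a_1, a_2), as in Definition 2.
u = {
    1: {("T", "L"): 2, ("T", "R"): 0, ("B", "L"): 0, ("B", "R"): 1},
    2: {("T", "L"): 1, ("T", "R"): 0, ("B", "L"): 0, ("B", "R"): 2},
}

def state_space(i):
    """Player i's state space: Omega = A_{-i}, the set of opponents'
    action profiles, over which i has no control."""
    opponents = [j for j in N if j != i]
    return list(product(*(A[j] for j in opponents)))

print(state_space(1))  # [('L',), ('R',)] -- Player 2's possible actions
print(state_space(2))  # [('T',), ('B',)] -- Player 1's possible actions
```

With only two players, A−i is just the opponent's action set; with more players it would be the Cartesian product over all j ≠ i, which is why `itertools.product` is used.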
Next, utilities are attached to points (ai, a−i) ∈ A according to Definition 2. This amounts to saying that there is a one-to-one correspondence between the actual outcome of the game (which is what players care about) and action profiles. Thus, we may as well imagine that action profiles are the relevant consequences: C = A. By a process of elimination, acts must be actions: "F = Ai". Formally, action ai ∈ Ai defines an act fai : A−i → Ai × A−i (i.e. fai : Ω → C) by fai(a−i) = (ai, a−i).

However, the definition of a game incorporates additional information: specifically, it includes a utility function for each player. This implies that we have a complete description of players' preferences among consequences. However, in view of the representation results cited above, this is only one part of the description of players' preferences among acts, because the probabilities are missing. This trivial observation is actually crucial: in some sense, the traditional role of game theory has been that of specifying those probabilities in a manner consistent with assumptions or intuitions concerning strategic rationality.

Nash Equilibrium

To make this more concrete, recall what you may have heard about Nash equilibrium: a player's strategy doubles as her opponents' beliefs about her strategy. Thus, in a 2-player game, a Nash equilibrium is a specification of beliefs (α1, α2) for both players, with the property that if Player i's belief assigns positive probability to an action aj ∈ Aj (j ≠ i), then the act faj is preferred by j to any other act, given that j's preferences are represented by the utility function uj : A → R and the