player observes), the two can be reconciled by introducing a dummy player in Definition 1. This is not a particularly clean solution, of course, but it works. I will return to Bayesian extensive games with observed actions when I discuss Perfect Bayesian equilibrium, the "modal" solution concept for such games. For the time being, let me point out that, while the model implicitly defines each player i's prior on Θ₋ᵢ, it stands to reason that, as the game progresses, Player i will update her prior beliefs about her opponents' payoff types based on her observations (as well as the conjectured equilibrium). These updated beliefs must somehow be part of the description of the equilibrium; also, they must be subject to some sort of consistency condition (Bayes' rule, for instance). We shall see that identifying such conditions is the key problem in the formulation of extensive-form solution concepts.

General Games with Incomplete Information

With the above discussion as a motivation of sorts, we are led to consider a notion of extensive games which allows for partial observability of actions. The basic idea is to start with a game with perfect information (i.e. without simultaneous moves) and enrich it with a description of what players actually "know" when a given history obtains.
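The belief updating mentioned above can be sketched numerically. In the sketch below, all labels and numbers (a two-type opponent, the prior, and the conjectured type-dependent behaviour) are hypothetical, invented purely for illustration; the point is only that Player i's posterior over her opponent's payoff types comes from her prior, the conjectured equilibrium, and Bayes' rule.

```python
# Minimal numeric sketch of Bayesian updating about an opponent's payoff
# type. All numbers and labels ("tough"/"weak", "fight"/"yield") are
# hypothetical, chosen for illustration only.

prior = {"tough": 0.3, "weak": 0.7}           # Player i's prior on the opponent's type

# Conjectured equilibrium play: Pr(action | opponent's type).
conjecture = {
    "tough": {"fight": 0.9, "yield": 0.1},
    "weak":  {"fight": 0.2, "yield": 0.8},
}

def bayes_update(prior, conjecture, observed):
    """Posterior over types after observing `observed`, by Bayes' rule."""
    joint = {t: prior[t] * conjecture[t][observed] for t in prior}
    marginal = sum(joint.values())            # Pr(observed action)
    return {t: p / marginal for t, p in joint.items()}

posterior = bayes_update(prior, conjecture, "fight")
# posterior["tough"] = 0.27 / 0.41 ≈ 0.659: observing "fight" shifts
# Player i's beliefs toward the "tough" type.
```

Note that the update is well defined only at histories the conjectured equilibrium reaches with positive probability; what to require off the equilibrium path is precisely the consistency issue raised above.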
Player i's knowledge is modelled via a partition Iᵢ of the set of histories at which they are called upon to choose an action, i.e. P⁻¹(i). This is exactly as in our model of payoff uncertainty in static games: in the interim stage, Player i learns, hence knows, that the true state of the world ω is an element of the cell tᵢ(ω) ⊂ Ω, but need not know that it is exactly ω. Similarly, at a history h, a player learns that the actual history belongs to some set Iᵢ(h) ⊂ P⁻¹(i), but not necessarily that it is h.

Definition 2 An extensive-form game with chance moves is a tuple Γ = (N, A, H, P, Z, (Uᵢ, Iᵢ), f_c) where:

- N is a set of players; Chance, denoted by c, is regarded as an additional player, so c ∉ N.
- A is a set of actions.
- H is a set of sequences whose elements are elements of A, such that:
  (i) ∅ ∈ H;
  (ii) (a¹, …, aᵏ) ∈ H implies (a¹, …, aˡ) ∈ H for all ℓ < k;
  (iii) if h = (a¹, …, aᵏ, …) is an infinite sequence and (a¹, …, aᵏ) ∈ H for all k ≥ 1, then h ∈ H.
- Z and X are the sets of terminal and non-terminal histories, respectively.
- P is the player function, P : X → N ∪ {c}.
- f_c specifies the chance moves: for every history h with P(h) = c, a probability distribution over the actions available at h.
- U : Z → ℝᴺ is the payoff vector function.
- Iᵢ is a partition of P⁻¹(i).
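The objects in Definition 2 can be made concrete in a short sketch. The toy game below is an assumption introduced here for illustration (its action labels and probabilities are not from the text): chance flips a coin that Player 1 does not observe before moving. Histories are represented as tuples of actions, the prefix-closure condition (ii) is checked directly, and the two histories at which Player 1 moves sit in a single cell of her information partition I₁, reflecting her ignorance of the chance move.

```python
# A tiny instance of Definition 2 (illustrative labels and probabilities,
# chosen here as assumptions). Histories are tuples of actions; the empty
# tuple () is the empty history, and H must be prefix-closed.

H = {
    (),                                 # the empty history (condition (i))
    ("heads",), ("tails",),             # chance's move
    ("heads", "up"), ("heads", "down"), # Player 1's move after each
    ("tails", "up"), ("tails", "down"),
}

Z = {h for h in H if len(h) == 2}       # terminal histories
X = H - Z                               # non-terminal histories

def player(h):
    """Player function P : X -> N ∪ {c}: chance moves first, then Player 1."""
    return "c" if h == () else 1

# f_c: a probability distribution over available actions at each chance history.
f_c = {(): {"heads": 0.5, "tails": 0.5}}

# Player 1 does not observe the coin: both histories where she moves
# form a single cell of her information partition I_1.
I_1 = [{("heads",), ("tails",)}]

def prefix_closed(H):
    """Check condition (ii): every prefix of a history is itself a history."""
    return all(h[:k] in H for h in H for k in range(len(h)))

assert prefix_closed(H)
```

Since the partition cell {("heads",), ("tails",)} contains more than one history, Player 1 knows only that the actual history lies in that cell, exactly as tᵢ(ω) conveys partial knowledge of ω in the static model.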