[Figure 1: A perfect-information game]

Strategies and normal form(s)

Definition 1 is arguably a "natural" way of describing a dynamic game, and one that is at least implicit in most applications of the theory. According to our formulation, actions are the primitive objects of choice. However, the notion of a strategy, i.e. a history-contingent plan, is also relevant:

Definition 2 Fix an extensive-form game with perfect information Γ. For every history h ∈ X, let A(h) = {a ∈ A : (h, a) ∈ H} be the set of actions available at h. Then, for every player i ∈ N, a strategy is a function sᵢ : P⁻¹(i) → A such that, for every h such that P(h) = i, sᵢ(h) ∈ A(h). Denote by Sᵢ and S the set of strategies of Player i and the set of all strategy profiles.

Armed with this definition (to which we shall need to return momentarily) we are ready to extend the notion of Nash equilibrium to extensive games.

Definition 3 Fix an extensive-form game Γ with perfect information. The outcome function O is the map O : S → Z defined by

O(s) = h = (a¹, . . . , aᵏ) ∈ Z  such that  ∀ℓ < k : a^(ℓ+1) = s_P((a¹,...,a^ℓ))((a¹, . . . , a^ℓ)).

The normal form of the game Γ is G^Γ = (N, (Sᵢ, uᵢ)ᵢ∈N), where uᵢ(s) = Uᵢ(O(s)).

The outcome function simply traces out the history generated by a strategy profile: starting from the empty history, each successive action is the one prescribed by the strategy of the player on the move. The normal-form payoff function uᵢ is then derived from Uᵢ and O in the natural way. Finally:

Definition 4 Fix an extensive-form game Γ with perfect information. A pure-strategy Nash equilibrium of Γ is a profile of strategies s ∈ S which constitutes a Nash equilibrium of its normal form G^Γ; a mixed-strategy Nash equilibrium of Γ is a Nash equilibrium of the mixed extension of G^Γ.
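To make Definitions 2–4 concrete, the following sketch computes O(s) and brute-forces the pure Nash equilibria of the induced normal form. The two-stage tree below is a hypothetical stand-in (the exact structure of Figure 1 is not fully recoverable from this extract): Player 1 chooses D (ending the game at payoffs (2, 2)) or A, after which Player 2 chooses a (payoffs (3, 3)) or d (payoffs (0, 0)). All names (`actions`, `player`, `payoffs`, `outcome`, `pure_nash`) are illustrative, not from the source.

```python
from itertools import product

# Hypothetical two-stage perfect-information game. Histories are tuples of
# actions; terminal histories are exactly the keys of `payoffs`.
actions = {
    (): ("D", "A"),          # A(h) at the empty history, where Player 1 moves
    ("A",): ("a", "d"),      # A(h) after A, where Player 2 moves
}
player = {(): 0, ("A",): 1}  # P(h): index of the player on the move at h
payoffs = {                  # U(z) at each terminal history z
    ("D",): (2, 2),
    ("A", "a"): (3, 3),
    ("A", "d"): (0, 0),
}

def outcome(s):
    """O(s): follow the profile s = (s_1, s_2) from the root to a terminal history."""
    h = ()
    while h not in payoffs:
        h = h + (s[player[h]][h],)   # append the action prescribed at h
    return h

def strategies(i):
    """All strategies of player i: maps from {h : P(h) = i} into A(h)."""
    hs = [h for h, p in player.items() if p == i]
    for choice in product(*(actions[h] for h in hs)):
        yield dict(zip(hs, choice))

def pure_nash():
    """Pure Nash equilibria of the normal form G^Γ, by exhaustive search."""
    eq = []
    for s1, s2 in product(strategies(0), strategies(1)):
        u = payoffs[outcome((s1, s2))]
        best1 = all(payoffs[outcome((t1, s2))][0] <= u[0] for t1 in strategies(0))
        best2 = all(payoffs[outcome((s1, t2))][1] <= u[1] for t2 in strategies(1))
        if best1 and best2:
            eq.append((s1, s2, u))
    return eq

for s1, s2, u in pure_nash():
    print(s1, s2, u)
```

For this tree the search finds two pure Nash equilibria: (A, a) with payoffs (3, 3), and (D, d) with payoffs (2, 2). The latter is sustained only by Player 2's off-path choice of d, illustrating why Nash equilibrium of the normal form can rely on threats that are never tested along the play path.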