chosen profile then becomes publicly observable. We quite simply replace individual actions with action profiles in the definition of a history, and adapt the notation accordingly.

Remark 0.1 Let A(h) = {a ∈ ∏_{i∈P(h)} A : (h, a) ∈ H}. Then A(h) = ∏_{i∈P(h)} A_i(h).

The definition of a strategy needs minimal modifications:

Definition 6 Fix an extensive-form game Γ with observable actions and chance moves. Then, for every player i ∈ N ∪ {c}, a strategy is a function s_i : {h : i ∈ P(h)} → A such that, for every h such that i ∈ P(h), s_i(h) ∈ A_i(h). Denote by S_i and S the set of strategies of Player i and the set of all strategy profiles.

In the absence of chance moves, Definition 4 applies verbatim to the new setting. You can think about how to generalize it with chance moves (we do not really wish to treat Chance as an additional player in a normal-form game, so we need to redefine the payoff functions in the natural way). Finally, the definition of Nash equilibrium requires no change.

For those of you who are used to the traditional, tree-based definition of an extensive game, note that you need to use information sets in order to describe games without perfect information, but with observable actions. That is, you need to use the full expressive power of the tree-based notation in order to describe what is a slight and rather natural extension of perfect-information games.¹

Most games of economic interest are games with observable actions, albeit possibly with payoff uncertainty; hence, the OR notation is sufficient to deal with most applied problems (payoff uncertainty is easily added to the basic framework, as we shall see).
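To fix ideas, here is a minimal computational sketch that is not part of the original notes: it assumes a specific 2×2 stage game with action labels C and D, played twice with observable actions, and every identifier in it (A_i, A, strategies, nonterminal_histories) is introduced purely for illustration. It enumerates the non-terminal histories (sequences of action profiles), exhibits the factorization of A(h) from Remark 0.1 at the root, and enumerates player 1's strategies in the sense of Definition 6.

```python
from itertools import product

# Hypothetical stage game (an assumption for this sketch): both players have
# action set {"C", "D"}, and the stage game is played twice with observable actions.
A_i = {1: ("C", "D"), 2: ("C", "D")}

# Histories are sequences of action profiles; the empty history () is the root.
# After the first round, the realized profile is publicly observed.
stage1_profiles = list(product(A_i[1], A_i[2]))                 # 4 first-round profiles
nonterminal_histories = [()] + [(a,) for a in stage1_profiles]  # root + 4 one-round histories

def A(h):
    # Remark 0.1 in this example: A(h) is the product of the individual A_i(h);
    # here every player moves at every non-terminal history, so the factors are constant.
    return list(product(A_i[1], A_i[2]))

def strategies(i):
    # Definition 6: a strategy s_i assigns some s_i(h) in A_i(h) to every
    # history h at which player i moves; enumerate all such functions.
    choices = [A_i[i] for _ in nonterminal_histories]
    return [dict(zip(nonterminal_histories, pick)) for pick in product(*choices)]

print(len(A(())))                  # 4 action profiles available at the root
print(len(nonterminal_histories))  # 5 non-terminal histories
print(len(strategies(1)))          # 2**5 = 32 strategies for player 1
```

Note how the count 2^5 = 32 reflects that a strategy must specify an action at every history where the player moves, including second-round histories that the strategy's own first-round choice already rules out.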
¹ On the other hand, the OR notation is equivalent to the standard one for games with perfect information: just call histories “nodes”, actions “arcs”, terminal histories “leaves” and ∅ “root”.