These remarks are written under the assumption that you already know the formalities. If you don't, please review OR before proceeding.

Splitting-Coalescing. This transformation is rather harmless, in my opinion. It only becomes questionable if we assume that the "agents" of a given player assigned to distinct information sets really have a mind of their own. But since agents are a fiction to begin with, I feel comfortable with splitting/coalescing. However, it must be noted that sequential equilibrium is not invariant to this transformation.

[Game tree: Player 1 chooses X, T, or B; X ends the game with payoffs (2, 3). After T or B, Player 2, who cannot tell which of the two was chosen, picks u or d. Payoffs (to 1, to 2): (1, 0) after (T, u), (4, 1) after (T, d), (0, 1) after (B, u), (3, 0) after (B, d).]

Figure 1: Splitting matters.

In the game of Figure 1, (X, u), together with the out-of-equilibrium belief µ({T, B})(B) = 1, is a sequential equilibrium: given the threat of u after T or B, 1 does well to choose X, and given the belief that 1 actually played B, 2 does well to play u. Note that µ is really unconstrained here.

However, suppose that 1's choice is split into two histories: first, at φ, 1 chooses between X and, say, Y; then, after (Y), 1 chooses between T and B. The history (Y, T) corresponds to (T) in the old game, (Y, B) corresponds to (B), and so on.

Now the unique equilibrium of the game is (YT, d): after Y, Player 1 faces a simultaneous-moves subgame in which T strictly dominates B.
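To spell out the dominance claim with the payoffs of Figure 1: against u, T yields 1 to Player 1 while B yields 0; against d, T yields 4 while B yields 3. So T does strictly better than B whatever 2 does.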
Hence, in any sequential equilibrium, β1(Y)(T) = 1. By consistency, this implies that µ({(Y, T), (Y, B)})((Y, T)) = 1, so that β2({(Y, T), (Y, B)})(d) = 1. But then β1(φ)(Y) = 1.

In my opinion, this example highlights a shortcoming of sequential equilibrium. However, note that, even in the original game, the equilibrium (X, u) fails a "reasonableness" test based on forward induction. Here's the idea: upon being reached, Player 2 must conclude that it is not the case that both (1) Player 1 is rational and (2) Player 1 expects Player 2 to choose u (for otherwise Player 1 would have chosen X); however, at least T would be a rational choice for 1, if 1 expected 2 to play d and not u (i.e., (1) may still be true, although (2) is false); on the other hand, B would never be a rational choice. Thus, if we assume (note well: "assume") that Player 2 believes that Player 1 is rational as long as this is possible, we can conclude that Player 2 will interpret a deviation from X as a "signal" that (2) is false, but not that (1) is false.
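For readers who prefer to re-check the best-reply computations above mechanically, here is a minimal Python sketch based on the payoffs of Figure 1; the variable names and the structure of the script are mine, not part of the original note.

# Minimal check of the claims above, using the payoffs of Figure 1.
# (payoff to 1, payoff to 2) after 1 plays T or B and 2 plays u or d:
payoff = {
    ('T', 'u'): (1, 0), ('T', 'd'): (4, 1),
    ('B', 'u'): (0, 1), ('B', 'd'): (3, 0),
}
X = (2, 3)  # payoffs if Player 1 opts out with X

# 1. (X, u) with belief mu({T, B})(B) = 1 is sequentially rational.
mu_B = 1.0
eu_u = (1 - mu_B) * payoff[('T', 'u')][1] + mu_B * payoff[('B', 'u')][1]
eu_d = (1 - mu_B) * payoff[('T', 'd')][1] + mu_B * payoff[('B', 'd')][1]
assert eu_u >= eu_d, "u must be a best reply for 2 given mu(B) = 1"
assert X[0] >= max(payoff[('T', 'u')][0], payoff[('B', 'u')][0]), \
    "X must be a best reply for 1 given that 2 plays u"

# 2. In the split game, T strictly dominates B in the subgame after Y, ...
assert all(payoff[('T', a)][0] > payoff[('B', a)][0] for a in ('u', 'd'))
# ... so consistency forces mu((Y, T)) = 1 and d is 2's unique best reply, ...
assert payoff[('T', 'd')][1] > payoff[('T', 'u')][1]
# ... and Y (worth 4 via (Y, T, d)) beats X (worth 2) at phi.
assert payoff[('T', 'd')][0] > X[0]

# 3. Forward induction in the original game: B is never a best reply for 1
# (step 2 shows it is strictly dominated by T), whereas T beats X exactly
# when 1 expects 2 to play d rather than u.
assert payoff[('T', 'd')][0] > X[0] > payoff[('T', 'u')][0]

print("All best-reply claims of the text hold for these payoffs.")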