or U(ωA, ωB) = 0. If BB is not a singleton, E[U(ωA, ωB) | {ωA} × BB] < 0, a contradiction. Similarly, if BA is not a singleton, E[U(ωA, ωB) | BA × {ωB}] > 0, a contradiction. Thus, B is a singleton and U(ω) = 0 if ω ∈ B. The set {ω : U(ω) = 0} has probability zero, so the probability of disagreeing forever is 0. In other words, both agents will choose the same action in finite time and, once they have chosen the same action, they have reached an absorbing state and will continue to choose the same action in every subsequent period.

3.1. An example

To illustrate the short-run dynamics of the model, we can further specialize the example by assuming that, for each agent i, the signal σi(ω) = ωi is uniformly distributed on the interval [−1, 1] and the payoff to action 1 is U(ω) = ωA + ωB. At date 1, each agent chooses 1 if his signal is positive and zero if it is negative.
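These decision rules follow from the uniform distribution of the signals; the following short computation (a sketch of the reasoning, not quoted from the text) spells out the numbers used in the case analysis below. Since ωB is uniform on [−1, 1], E[ωB] = 0, so agent A's expected payoff from action 1 at date 1, given only his own signal, is E[U(ω) | ωA] = ωA + E[ωB] = ωA; hence A chooses 1 if and only if ωA > 0, and symmetrically for B. The conditional means used at later dates are the midpoints of the remaining intervals: E[ωB | ωB > 0] = 1/2, E[ωB | ωB > 1/2] = 3/4, and E[ωA | ωA < −1/2] = −3/4.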
If both choose the same action at date 1, they will continue to choose the same action at each subsequent date. Seeing the other agent choose the same action will only reinforce each agent's belief that he has made the correct choice. No further information is revealed at subsequent dates and so we have reached an absorbing state, in which each agent knows his own signal and that the other's signal has the same sign, but nothing more. So interesting dynamics occur only in the case where the agents choose different actions at date 1. The exact nature of the dynamics depends on the relative strength of the two signals, measured here by their absolute values. Without loss of generality, we assume that A has a negative signal, B a positive signal, and B's signal is relatively the stronger, i.e., |ωA| < |ωB|.

Case 1: ωA > −1/2 and ωB > 1/2. At date 1, agent A chooses action 0 and agent B chooses action 1. At the second date, having observed that agent B chose 1, agent A switches to action 1, while agent B continues to choose 1. Thereafter, there is an absorbing state in which both agents choose 1 for ever and no further learning occurs.

Case 2: −3/4 < ωA < −1/2 and ωB > 3/4. As before, A chooses 0 and B chooses 1 at date 1. At date 2, A observes that B chose 1 and infers that B's signal has expected value 1/2. Since ωA < −1/2, it is optimal for A to choose 0 again; since B has an even stronger signal, he continues to choose 1. At date 3, A observes that B chose 1 again, revealing that ωB > 1/2, so the expected value of B's signal is 3/4; since ωA > −3/4, it is optimal for A to switch to 1, which then becomes an absorbing state.
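The case analysis can be checked with a short simulation. The sketch below is an illustration of the cutoff dynamics described above, not code from the paper: it assumes that, at each date, an agent chooses 1 exactly when his own signal plus the conditional mean of the other's signal (given the public action history) is positive, and it tracks the public belief about each signal as an interval, since every strategy is a cutoff rule. The function name simulate and the tie-breaking to action 0 are our own choices.

```python
# Sketch of the cutoff dynamics in the uniform example (an illustration,
# not the paper's formal construction).  Signals are uniform on [-1, 1];
# because every strategy is a cutoff rule, the public belief about each
# signal is always an interval, which we track by its endpoints.

def simulate(omega_A, omega_B, max_dates=50):
    """Return the list of (action_A, action_B) pairs until the agents agree."""
    lo = {"A": -1.0, "B": -1.0}   # lower endpoints of the public intervals
    hi = {"A": 1.0, "B": 1.0}     # upper endpoints of the public intervals
    omega = {"A": omega_A, "B": omega_B}
    history = []
    for _ in range(max_dates):
        mid = {i: (lo[i] + hi[i]) / 2 for i in "AB"}
        # Agent i chooses 1 iff omega_i + E[omega_j | history] > 0,
        # i.e. iff omega_i exceeds minus the opponent's interval midpoint.
        cutoff = {"A": -mid["B"], "B": -mid["A"]}
        action = {i: int(omega[i] > cutoff[i]) for i in "AB"}
        history.append((action["A"], action["B"]))
        if action["A"] == action["B"]:
            break                 # absorbing state: agreement persists forever
        # Each observed action reveals which side of the cutoff the signal lies on.
        for i in "AB":
            if action[i] == 1:
                lo[i] = max(lo[i], cutoff[i])
            else:
                hi[i] = min(hi[i], cutoff[i])
    return history

# Case 1: omega_A > -1/2 and omega_B > 1/2 -> agreement on 1 at date 2.
print(simulate(-0.3, 0.6))   # [(0, 1), (1, 1)]
# Case 2: -3/4 < omega_A < -1/2 and omega_B > 3/4 -> agreement on 1 at date 3.
print(simulate(-0.6, 0.8))   # [(0, 1), (0, 1), (1, 1)]
```

Running the two calls reproduces Cases 1 and 2: in the first, A switches at date 2; in the second, A repeats 0 at date 2 and switches only at date 3, after B's second choice of 1 has revealed that ωB > 1/2.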