Cases (ii) and (iii) are uninteresting because there is no possibility of mutual learning. For example, in case (ii), agent B observes a private signal and chooses the optimal action at date 1. Since he observes no further information, he chooses the same action at every subsequent date. Agent A observes a private signal and chooses the optimal action at date 1. At date 2, he observes agent B's action at date 1, updates his beliefs, and chooses the new optimal action at date 2. After that, A receives no additional information, so agent A chooses the same action at every subsequent date. Agent A has learned something from agent B, but that is as far as it goes. In case (i), on the other hand, the two agents learn from each other and learning can continue for an unbounded number of periods. We focus on the network defined in (i) in what follows.

For simplicity, we consider a special information and payoff structure. We assume that $\Omega = \Omega_A \times \Omega_B$, where $\Omega_i$ is an interval $[a, b]$ and the generic element is $\omega = (\omega_A, \omega_B)$. The signals are assumed to satisfy
\[
\sigma_i(\omega) = \omega_i, \quad \forall \omega \in \Omega, \ i = A, B,
\]
where the random variables $\omega_A$ and $\omega_B$ are independently and continuously distributed; that is, $P = P_A \times P_B$ and $P_i$ has no atoms.
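As a running illustration (a concrete instance of our choosing, consistent with but not taken from the text), let $\Omega_A = \Omega_B = [0, 1]$ and let $P_A$ and $P_B$ both be the uniform distribution, so that $P$ is the product Lebesgue measure on the unit square. Then $P_i$ has no atoms, and conditional on $\omega_j$ lying in an interval $[l, u]$ we have
\[
E\left[\omega_j \mid \omega_j \in [l, u]\right] = \frac{l + u}{2},
\]
a fact used in the simulation sketch below.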
There are two actions, $a = 0, 1$, and the payoff function is assumed to satisfy
\[
u(a, \omega) =
\begin{cases}
0 & \text{if } a = 0, \\
U(\omega_A, \omega_B) & \text{if } a = 1,
\end{cases}
\]
where the function $U(\omega_A, \omega_B)$ is assumed to be continuous and increasing. To avoid trivialities, we assume that neither action is weakly dominated.

These assumptions are sufficient for the optimal strategy to have the form of a cutoff rule. To see this, note that for any history that occurs with positive probability, agent $i$'s beliefs at date $t$ take the form of an event $\{\omega_i\} \times B_{jt}$, where the true value of $\omega_j$ is known to belong to $B_{jt}$. Then the payoff to action 1 is
\[
\phi_i(\omega_i, B_{jt}) = E\left[ U(\omega_A, \omega_B) \mid \{\omega_i\} \times B_{jt} \right].
\]
Clearly, $\phi_i(\omega_i, B_{jt})$ is increasing in $\omega_i$, because the distribution of $\omega_j$ is independent of $\omega_i$, so there exists a cutoff $\omega_i^*(B_{jt})$ such that
\[
\omega_i > \omega_i^*(B_{jt}) \implies \phi_i(\omega_i, B_{jt}) > 0, \qquad
\omega_i < \omega_i^*(B_{jt}) \implies \phi_i(\omega_i, B_{jt}) < 0.
\]
We assume that when an agent is indifferent between the two actions, he chooses action 1. The analysis is essentially the same for any other tie-breaking rule. The fact that agent $i$'s strategy takes the form of a cutoff rule implies that the set $B_{it}$ is an interval. This can be proved by induction as follows. At date 1, agent $j$ has a cutoff $\omega_{j1}^*$ and $X_{j1}(\omega) = 1$ if and only if $\omega_j \geq \omega_{j1}^*$.
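To see these dynamics concretely, the sketch below simulates the case (i) network under the illustrative specification above, adding the further assumption (ours, not the text's) that $U(\omega_A, \omega_B) = \omega_A + \omega_B - 1$, which is continuous and increasing and leaves neither action weakly dominated. At date 1, $\phi_i(\omega_i, [0,1]) = \omega_i - \tfrac{1}{2}$, so the date-1 cutoff is $\tfrac{1}{2}$; more generally, when $B_{jt} = [l, u]$, agent $i$'s cutoff is $1 - (l+u)/2$. Each agent observes the other's action, truncates the belief interval at the other's (publicly computable) cutoff, and the process iterates. The function names and the horizon parameter are our own.

```python
def other(i):
    """The other agent in the two-agent network."""
    return "B" if i == "A" else "A"

def simulate(w, T=8):
    """Iterate the cutoff dynamics of case (i) for T dates.

    Illustrative assumptions: omega_A, omega_B ~ U[0,1] independently
    and U(w_A, w_B) = w_A + w_B - 1, so that when omega_j is known to
    lie in [l, u], the expected payoff to action 1 is
    w_i + (l + u)/2 - 1 and agent i's cutoff is 1 - (l + u)/2.
    """
    # belief[j] = the interval B_jt to which omega_j is known to belong.
    belief = {"A": (0.0, 1.0), "B": (0.0, 1.0)}
    history = []
    for t in range(1, T + 1):
        # Agent i's cutoff depends on the public belief about agent j.
        cutoff = {i: 1.0 - sum(belief[other(i)]) / 2.0 for i in "AB"}
        # Ties are broken in favor of action 1, as in the text.
        action = {i: int(w[i] >= cutoff[i]) for i in "AB"}
        history.append((t, cutoff, action))
        # Observing agent j's action truncates B_jt at agent j's cutoff,
        # so the updated belief is again an interval (the induction step).
        # With atomless signals, closed vs. half-open endpoints are
        # immaterial, so closed intervals are used for simplicity.
        for j in "AB":
            l, u = belief[j]
            if action[j] == 1:
                belief[j] = (max(l, cutoff[j]), u)
            else:
                belief[j] = (l, min(u, cutoff[j]))
    return history

if __name__ == "__main__":
    # Example: U(0.40, 0.65) = 0.05 > 0, so action 1 is optimal under
    # full information; the agents' actions converge to (1, 1).
    for t, cutoff, action in simulate({"A": 0.40, "B": 0.65}):
        print(f"t={t}: cutoffs={cutoff}, actions={action}")
```

In this run the agents initially disagree (A plays 0, B plays 1), each observation truncates the opponent's belief interval, and within a few dates both play action 1. Once each agent's cutoff falls outside the other's belief interval, actions convey no new information and play is constant thereafter, illustrating how learning in case (i) can continue for many periods before settling down.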