CHAPTER 3. THE PRINCIPAL-AGENT PROBLEM

The difference between DP4 and the original DP3 is that only the downward incentive constraints are included. Obviously, V∗∗(a) ≥ V∗(a). Suppose that V∗∗(a) > V∗(a). This means that the agent wants to choose a higher action than a in the modified problem. But this is good for the principal, who will never choose a if he can get a better action for the same price. Thus, maxₐ V∗(a) = maxₐ V∗∗(a).
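The relaxation argument can be checked numerically. The following is a minimal sketch, with all specifics assumed for illustration rather than taken from the text: two outcomes with revenues R = (0, 2), three actions with made-up success probabilities and effort costs, agent utility U(w) = √w, reservation utility 0, and a grid search over wage schemes. It computes V∗(a) (all incentive constraints, as in DP3) and V∗∗(a) (downward constraints only, as in DP4) and confirms that relaxation can only raise each V∗∗(a) while leaving the maximized value unchanged.

```python
# Illustrative numbers only (not from the text): two outcomes, three actions.
from math import sqrt

R = [0.0, 2.0]                                     # revenue in each outcome
p = {1: [0.9, 0.1], 2: [0.5, 0.5], 3: [0.1, 0.9]}  # p[a][s], outcome probabilities
psi = {1: 0.0, 2: 0.2, 3: 0.4}                     # effort costs
actions = [1, 2, 3]
grid = [i * 0.05 for i in range(21)]               # candidate wages in [0, 1]
TOL = 1e-9

def eu(a, w):
    """Agent's expected utility from action a under wage scheme w, U(w) = sqrt(w)."""
    return sum(p[a][s] * sqrt(w[s]) for s in range(2)) - psi[a]

def value(a, w):
    """Principal's expected payoff when the agent takes action a."""
    return sum(p[a][s] * (R[s] - w[s]) for s in range(2))

def best(a, downward_only):
    """Max principal value implementing a, with all ICs or downward ICs only."""
    if downward_only:
        rivals = [b for b in actions if b < a]
    else:
        rivals = [b for b in actions if b != a]
    v = float("-inf")
    for w1 in grid:
        for w2 in grid:
            w = (w1, w2)
            if eu(a, w) < -TOL:                              # individual rationality
                continue
            if any(eu(a, w) < eu(b, w) - TOL for b in rivals):
                continue                                     # incentive compatibility
            v = max(v, value(a, w))
    return v

V_star = {a: best(a, downward_only=False) for a in actions}  # DP3-style values
V_sstar = {a: best(a, downward_only=True) for a in actions}  # DP4-style values

# Dropping upward ICs can only help: V**(a) >= V*(a) for every a ...
assert all(V_sstar[a] >= V_star[a] - TOL for a in actions)
# ... yet the maximized values coincide, as the argument above claims.
assert abs(max(V_star.values()) - max(V_sstar.values())) < 1e-6
```

The design choice of a coarse wage grid keeps the sketch self-contained; a real computation would solve the cost-minimization problem for each action directly.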
Thus, we can use the solution to the modified problem DP4 to characterize the optimal incentive scheme.

Theorem 3 Suppose that a ∈ arg max V∗(a). The incentive scheme w(·) is a solution of DP4 if and only if it is a solution of DP3.

3.6 Monotonicity

Many incentive schemes observed in practice pay the agent more for higher outcomes, i.e., w(s) is increasing (or non-decreasing) in s. It is interesting to see when this is a property of the theoretically optimal incentive scheme. Assuming an interior solution, the Kuhn-Tucker (necessary) conditions are

p(a, s)V′(R(s) − w(s)) − λp(a, s)U′(w(s)) − Σ_{b<a} μ_b {p(a, s) − p(b, s)} U′(w(s)) = 0,

or

V′(R(s) − w(s)) / U′(w(s)) = λ + Σ_{b<a} μ_b {1 − p(b, s)/p(a, s)}.

By the MLRP, the likelihood ratio p(b, s)/p(a, s) is non-increasing in s for each b < a, so the right-hand side is non-decreasing in s, and hence so is the left-hand side. This forces w(s) to be non-decreasing: if w(s) fell while R(s) rose, concavity of V and U would make the numerator V′(R(s) − w(s)) fall and the denominator U′(w(s)) rise, pushing the left-hand side down, a contradiction.

3.7 Examples

There are two outcomes s = 1, 2, where R(1) < R(2), and two projects a = 1, 2, represented by their respective probabilities of success 0 < p(1, 2) < p(2, 2) < 1. The costs of effort are ψ(1) = 0 and ψ(2) > 0. The agent's utility