Problem Set 3

Micro, Susen Wang

Try to do most problems in MWG (1995), Chapters 7—9.

Question 3.1. (Mixed-Strategy Nash Equilibrium). A principal hires an agent to perform some service at a price (which is supposed to equal the cost of the service). The principal and the agent have initial wealth wp = 1.3 and wa = 0.5, respectively. The principal can potentially lose l = 0.8. If the agent offers low quality, the probability of losing l is P1 = 80%; if the agent offers high quality, the probability of losing l is P3 = 50%. The quality is unobservable to the principal. The price of a low-quality product (paid to the agent) is c1 = 0.08 and the price of a high-quality product is c3 = 0.2; by the competitive market assumption, c1 and c3 are the costs of producing the products (the agent bears the costs). The agent is required by regulation to provide high quality, but he may cheat. After the bad event happens, the principal can spend F = 0.32 on an investigation; if the agent is found to have provided low quality, the agent has to pay for the loss l to the principal. This game can be written in the following normal form:

                             low quality, α                       high quality, 1 − α
  investigate, ρ             wp − c3 − P1F, wa + c3 − c1 − P1l    wp − c3 − P3(F + l), wa
  not to investigate, 1 − ρ  wp − c3 − P1l, wa + c3 − c1          wp − c3 − P3l, wa

where ρ = probability of the principal investigating and α = probability of the agent delivering low quality. Find the mixed-strategy Nash equilibria.

Question 3.2. (Pure-Strategy Nash Equilibrium). Find the pure-strategy Nash equilibria for the above exercise.
Question 3.3. For the game in Example 3.11, find all the pure-strategy Nash equilibria. Obviously, among these Nash equilibria, only the one that is found by backward induction satisfies sequential rationality.

Question 3.4. For Example 3.13, find the mixed-strategy SPNE.

Question 3.5. Find all the pure-strategy Nash equilibria of the game in Example 3.20.

Question 3.6. [A revised version of Exercise 9.C.7 in MWG (1995, p. 304)].

(a) For the following game, find all the pure-strategy NEs. Which one is the SPNE?

[Figure: game tree in which P1 moves first, choosing T or B; P2 observes P1's move and then chooses U or D at each of her two decision nodes.]

(b) Now suppose that P2 cannot observe P1's move. Draw the game tree, and find all the mixed-strategy NEs.

(c) Following the game in (b), now suppose that P1 may make a mistake in implementing his strategies. Specifically, after P1 has decided to play T, he may actually implement T with probability p and mistakenly implement B with probability 1 − p; symmetrically, after P1 has decided to play B, he may actually implement B with probability p and mistakenly implement T with probability 1 − p.¹ Draw the game tree and find all the BEs.

¹ In MWG (1995), it is P2 who may make a mistake in observing P1's strategies. In this case, there is no mistake in implementation; it is just a mistake in identifying the actual strategy.
Answer Set 3

Econ522, Susen Wang

Answer 3.1. Assume that the principal can commit ex ante to investigate or not; in other words, the principal makes up her mind on investigation before she has suffered a loss. Before a loss occurs, the game box of surpluses is

                             low quality, α                       high quality, 1 − α
  investigate, ρ             wp − c3 − P1F, wa + c3 − c1 − P1l    wp − c3 − P3(F + l), wa
  not to investigate, 1 − ρ  wp − c3 − P1l, wa + c3 − c1          wp − c3 − P3l, wa

In each cell, the value on the left is the surplus of the principal and the value on the right is the surplus of the agent.

The optimal choice of α is to make the principal indifferent between investigation and no investigation:

    wp − c3 − αP1F − (1 − α)P3(F + l) = wp − c3 − αP1l − (1 − α)P3l,    (5)

implying

    −αP1F − (1 − α)P3F = −αP1l,

implying α(P1l + P3F − P1F) = P3F, implying

    α = P3F / (P1l + P3F − P1F) = (0.5 × 0.32) / (0.8 × 0.8 + 0.5 × 0.32 − 0.8 × 0.32) ≈ 0.29.

The choice of ρ is to make the agent indifferent between cheating and not cheating:

    wa + c3 − c1 − ρP1l = wa,    (6)

implying

    ρ = (c3 − c1) / (P1l) = (0.20 − 0.08) / (0.8 × 0.8) ≈ 0.19.

Answer 3.2. By substituting the parameter values into the game box of surpluses, we have
                             cheat, α       not to cheat, 1 − α
  investigate, ρ             0.84, −0.02    0.54, 0.5
  not to investigate, 1 − ρ  0.46, 0.62     0.7, 0.5

By Proposition 3.2, to find pure-strategy Nash equilibria, we can restrict attention to pure strategies. Thus, simply by inspecting each cell one by one, we find that there is no pure-strategy Nash equilibrium.

Answer 3.3. The strategy sets for players 1 and 2 are simple:

    S1 = {L, R},  S2 = {a, b}.

There are three information sets for player 3. Denote a typical strategy of player 3 as s3 = (a1, a2, a3), where a1 is the action if the information set on the left is reached, a2 is the action if the information set in the middle is reached, and a3 is the action if the information set on the right is reached. Player 3 has eight strategies:

    s13 = (l, l, l), s23 = (r, l, l), s33 = (l, l, r), s43 = (r, l, r),
    s53 = (l, r, l), s63 = (r, r, l), s73 = (l, r, r), s83 = (r, r, r).

The normal form is

P1 plays L:
           s13      s23       s33      s43        s53      s63       s73      s83
  P2: a  2,0,1   −1,5,6    2,0,1   −1,5,6     2,0,1   −1,5,6    2,0,1   −1,5,6
      b  2,0,1   −1,5,6    2,0,1   [−1,5,6]   2,0,1   −1,5,6    2,0,1   [−1,5,6]

P1 plays R:
           s13      s23       s33      s43        s53      s63       s73      s83
  P2: a  3,1,2    3,1,2    3,1,2    3,1,2     [5,4,4]  [5,4,4]   [5,4,4]  [5,4,4]
      b  0,−1,7  0,−1,7   −2,2,0   −2,2,0     0,−1,7   0,−1,7   −2,2,0   −2,2,0

All the pure-strategy Nash equilibria are indicated by the bracketed cells.
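As a sanity check on the bracketed cells, the two payoff tables can be encoded and every pure-strategy profile scanned for a profitable unilateral deviation. This is a small sketch, not part of the original answer; player 3's strategy (a1, a2, a3) is written as a string such as "rrl" for s63, and the payoff rule simply reads off the tables (under L only a1 matters; under R followed by a only a2 matters; under R followed by b only a3 matters).

```python
from itertools import product

# Brute-force check of the pure-strategy NEs in the 3-player game above.
S1, S2 = ["L", "R"], ["a", "b"]
S3 = ["".join(t) for t in product("lr", repeat=3)]  # "lll", "llr", ..., "rrr"

def payoff(s1, s2, s3):
    """Payoff triple (P1, P2, P3), read off the two normal-form tables."""
    if s1 == "L":   # only P3's left-info-set action a1 matters
        return (2, 0, 1) if s3[0] == "l" else (-1, 5, 6)
    if s2 == "a":   # R then a: only P3's middle action a2 matters
        return (3, 1, 2) if s3[1] == "l" else (5, 4, 4)
    return (0, -1, 7) if s3[2] == "l" else (-2, 2, 0)  # R then b

def is_ne(s1, s2, s3):
    u = payoff(s1, s2, s3)
    return (all(payoff(d, s2, s3)[0] <= u[0] for d in S1)
            and all(payoff(s1, d, s3)[1] <= u[1] for d in S2)
            and all(payoff(s1, s2, d)[2] <= u[2] for d in S3))

nes = [prof for prof in product(S1, S2, S3) if is_ne(*prof)]
print(nes)  # six profiles, matching the bracketed cells; includes (R, a, "rrl") = (R, a, s63)
```

The scan returns exactly the six bracketed cells: (L, b, s43), (L, b, s83), and (R, a, s3) for every s3 whose middle action is r.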
To find all the Nash equilibria, we can check each cell one by one. A cell cannot be a Nash equilibrium if one of the players doesn't stick to it. In each cell, we can first check whether player 3 will stick to his strategy, which quickly eliminates many cells.

A sequentially rational NE must be an outcome of backward induction. Example 3.11 shows that backward induction leads to only one outcome: s1 = R, s2 = a, s3 = s63, which is one of the Nash equilibria.

Answer 3.4. In the proper subgame with the normal form

                           Firm I
                           Small, σ2    Large, 1 − σ2
  Firm E: Small, σ1        −6, −6       [−1, 1]
          Large, 1 − σ1    [1, −1]      −3, −3

the equilibrium σ2 is to make firm E indifferent between his two strategies:

    −6σ2 − 1(1 − σ2) = σ2 − 3(1 − σ2),

implying σ2* = 2/9. Since the game is symmetric, we also have σ1* = 2/9. Then, the expected payoff is

    πI* = −6σ2* − 1(1 − σ2*) = −19/9.

We also have πE* = −19/9. The game is reduced to:

[Figure: reduced game in which firm E chooses Out, giving payoffs (2, 0) (firm I gets 2, firm E gets 0), or In, giving (−19/9, −19/9).]

Then, firm E will choose 'out'. Thus, the SPNE is

    σE* = (out, choose small niche with probability 2/9),  σI* = choose small niche with probability 2/9.

Answer 3.5. Firm I has one information set HI containing two nodes. Based on this information, firm I has two strategies: s11 = Fight, s21 = Accom.
Firm E has two information sets H1 and H2, where H1 contains the initial node. Denote firm E's strategies as s2 = (a1, a2), where a1 is an action at H1 and a2 is an action at H2. We can then find the normal form:

                      Firm E
                      (out, fight)   (out, accom.)   (in, fight)   (in, accom.)
  Firm I: fight       [2, 0]         [2, 0]          −1, −3        −1, −2
          accom.       2, 0           2, 0           −2, 1         [1, 3]

We can easily find the pure-strategy Nash equilibria, as indicated by the bracketed cells. Among these three NEs, there is one SPNE, which is

    sI = accom.,  sE = (in, accom.),

and there is one BE, which is

    sI = fight,  sE = (out, accom.).

This example indicates that

1. BE and SPNE don't imply each other.

2. BE eliminates two NEs, one of which is the SPNE. SPNE also eliminates two NEs, one of which is the BE.

Answer 3.6.

(a) There are two information sets for P2. Let (a1, a2) be a typical P2 strategy, where a1 is an action taken at the left information set and a2 is an action taken at the right information set. The normal form of the game is

             P2
             (D, D)    (D, U)    (U, D)    (U, U)
  P1: B      4, 2      [4, 2]    1, 1      1, 1
      T      5, 1      2, 2      5, 1      [2, 2]

There are two pure-strategy NEs: σ* = [B, (D, U)] and σ* = [T, (U, U)]. The first one is the SPNE.
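The two NEs in part (a) can be confirmed with a short best-response scan of the 2 × 4 table. This is a sketch for verification only; cells are (P1 payoff, P2 payoff), and P2's strategy is the pair (a1, a2).

```python
from itertools import product

# Enumerate pure-strategy NEs of the 2x4 normal form in part (a).
payoff = {
    ("B", ("D", "D")): (4, 2), ("B", ("D", "U")): (4, 2),
    ("B", ("U", "D")): (1, 1), ("B", ("U", "U")): (1, 1),
    ("T", ("D", "D")): (5, 1), ("T", ("D", "U")): (2, 2),
    ("T", ("U", "D")): (5, 1), ("T", ("U", "U")): (2, 2),
}
S1 = ["B", "T"]
S2 = list(product("DU", repeat=2))

# A profile is an NE iff neither player has a profitable unilateral deviation.
nes = [(s1, s2) for s1 in S1 for s2 in S2
       if payoff[(s1, s2)][0] >= max(payoff[(t, s2)][0] for t in S1)
       and payoff[(s1, s2)][1] >= max(payoff[(s1, t)][1] for t in S2)]
print(nes)  # [('B', ('D', 'U')), ('T', ('U', 'U'))]
```

The scan returns exactly the two profiles identified above, [B, (D, U)] and [T, (U, U)].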
(b) The game tree is:

[Figure: P1 chooses T or B; P2 then chooses U or D without observing P1's move, so P2's two decision nodes lie in one information set H.]

The normal form is

          P2
          D        U
  P1: B   4, 2     1, 1
      T   5, 1     [2, 2]

There is a pure-strategy NE: σ* = (T, U). Since playing T is a strictly dominant strategy for P1, this NE is the only mixed-strategy NE.

(c) The game tree is

[Figure 3.6: the game tree of (b) with implementation error: P1 intends B with probability γ1 and T with probability γ2, and each intended action is implemented correctly with probability p; the two nodes of P2's information set H are reached with probabilities pγ1 + (1 − p)γ2 and pγ2 + (1 − p)γ1.]

In this game tree, the beliefs are

    μ1 = pγ1 + (1 − p)γ2,    μ2 = pγ2 + (1 − p)γ1,

which are derived from Bayes' rule by allowing the possibility of an error in implementation, where γi ≥ 0 and γ1 + γ2 = 1. We have μ1 + μ2 = 1. We can also have the
following game tree, where P2's beliefs are also derived from Bayes' rule. Since the two game trees are equivalent, we will use Figure 3.6 only.

[Figure: an equivalent tree in which Nature moves after P1's choice, implementing the intended action with probability p and the opposite action with probability 1 − p; P2's information set then contains four nodes, reached with probabilities pγ1, (1 − p)γ1, pγ2, and (1 − p)γ2.]

We now solve for BEs in the game tree of Figure 3.6, by backward induction. We find

    D ≻ U  ⇔  2μ1 + μ2 > μ1 + 2μ2  ⇔  (1 − 2p)γ2 > (1 − 2p)γ1.    (7)

Then, first, if D ≻ U, P1 will choose T, i.e., γ1 = 0 and γ2 = 1. To be consistent with (7), we need p < 1/2. Thus, we have a BE when p < 1/2: γ1* = 0 and δ1* = 1.

Second, if U ≻ D, P1 again chooses T, i.e., γ1 = 0 and γ2 = 1. To be consistent with (7), we need p > 1/2. Thus, we have another BE when p > 1/2: γ1* = 0 and δ1* = 0.

Third, if D ∼ U, (7) implies (1 − 2p)γ2 = (1 − 2p)γ1. If p ≠ 1/2, we have γ1 = γ2, i.e., γi = 1/2. P1 compares the expected payoffs of the two choices: πB = 4δ1 + δ2 and πT = 5δ1 + 2δ2. Since πT > πB, P1 chooses T, i.e., γ1 = 0 and γ2 = 1, which is inconsistent with γi = 1/2. If p = 1/2, we still have πB = 4δ1 + δ2 and πT = 5δ1 + 2δ2; since πT > πB, P1 chooses T, i.e., γ1 = 0 and γ2 = 1. Thus, we have another BE when p = 1/2: γ1* = 0 and δ1* can be any value in [0, 1].

In summary, we have three BEs:

    Error      BE
    p < 1/2    P1 plays T, P2 plays D
    p > 1/2    P1 plays T, P2 plays U
    p = 1/2    P1 plays T, P2 plays any strategy (pure or mixed)
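The threshold logic behind these cases can be sanity-checked numerically. The following sketch (not part of the original answer) evaluates P2's beliefs and best reply for error rates on either side of 1/2, writing γ1 (the probability that P1 intends B) as g1; it also relies on πT − πB = δ1 + δ2 = 1 > 0, so P1 always intends T.

```python
# Check of the belief/threshold logic in Answer 3.6(c).
# Beliefs as in Figure 3.6: mu1 = p*g1 + (1-p)*g2, mu2 = p*g2 + (1-p)*g1.
def p2_prefers_D(p, g1):
    g2 = 1.0 - g1
    mu1 = p * g1 + (1 - p) * g2
    mu2 = p * g2 + (1 - p) * g1
    # D beats U iff 2*mu1 + mu2 > mu1 + 2*mu2, i.e. (1-2p)*g2 > (1-2p)*g1
    return 2 * mu1 + mu2 > mu1 + 2 * mu2

# With g1 = 0 (P1 intends T), implementation errors put weight p on the
# T-node and 1-p on the B-node, so P2's best reply flips at p = 1/2:
print(p2_prefers_D(0.3, g1=0.0))  # True  -> P2 plays D when p < 1/2
print(p2_prefers_D(0.7, g1=0.0))  # False -> P2 plays U when p > 1/2
```

At p = 1/2 the two expected payoffs coincide for any beliefs, so P2 is indifferent, matching the third row of the summary table.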