Game playing
Chapter 6

Outline
♦ Games
♦ Perfect play
   – minimax decisions
   – α–β pruning
♦ Resource limits and approximate evaluation
♦ Games of chance
♦ Games of imperfect information

Games vs. search problems
“Unpredictable” opponent ⇒ solution is a strategy specifying a move for every possible opponent reply
Time limits ⇒ unlikely to find goal, must approximate
Plan of attack:
• Computer considers possible lines of play (Babbage, 1846)
• Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
• Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
• First chess program (Turing, 1951)
• Machine learning to improve evaluation accuracy (Samuel, 1952–57)
• Pruning to allow deeper search (McCarthy, 1956)

Types of games
                          deterministic                  chance
perfect information       chess, checkers, go, othello   backgammon, monopoly
imperfect information     battleships, blind tictactoe   bridge, poker, scrabble, nuclear war

Game tree (2-player, deterministic, turns)
[Figure: partial game tree for tic-tac-toe. MAX (X) moves first, MIN (O) replies, and play alternates down to TERMINAL states with utilities −1, 0, +1 from MAX’s point of view.]

Minimax
Perfect play for deterministic, perfect-information games
Idea: choose move to position with highest minimax value
   = best achievable payoff against best play
E.g., 2-ply game:
[Figure: MAX at the root chooses among moves a1, a2, a3 leading to MIN nodes with leaf utilities (3, 12, 8), (2, 4, 6), and (14, 5, 2). The MIN nodes have minimax values 3, 2, and 2, so the root has minimax value 3 and MAX’s best move is a1.]
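To make the 2-ply example concrete, here is a minimal Python sketch (not from the slides) that computes its minimax value; a tree is represented as either a terminal utility or a list of subtrees:

   def minimax_value(tree, maximizing):
       # Terminal node: the tree is just its utility.
       if isinstance(tree, (int, float)):
           return tree
       values = [minimax_value(subtree, not maximizing) for subtree in tree]
       return max(values) if maximizing else min(values)

   # The three MIN nodes of the 2-ply example: (3, 12, 8), (2, 4, 6), (14, 5, 2)
   tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
   print(minimax_value(tree, True))   # prints 3, matching the slide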
Minimax algorithm

function Minimax-Decision(state) returns an action
   inputs: state, current state in game
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← −∞
   for a, s in Successors(state) do v ← Max(v, Min-Value(s))
   return v

function Min-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← ∞
   for a, s in Successors(state) do v ← Min(v, Max-Value(s))
   return v

Properties of minimax
Complete?? Yes, if the tree is finite (chess has specific rules for this). NB a finite strategy can exist even in an infinite tree!
Optimal?? Yes, against an optimal opponent. Otherwise??
Time complexity?? O(b^m)
Space complexity?? O(bm) (depth-first exploration)
For chess, b ≈ 35, m ≈ 100 for “reasonable” games ⇒ exact solution completely infeasible
But do we need to explore every path?
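The pseudocode transcribes directly into Python. In the sketch below, the game interface, actions(state), result(state, a), terminal_test(state), and utility(state), is an assumed naming for illustration, not something the slides define:

   import math

   def minimax_decision(state, game):
       # Return the action in Actions(state) maximizing Min-Value(Result(a, state)).
       return max(game.actions(state),
                  key=lambda a: min_value(game.result(state, a), game))

   def max_value(state, game):
       if game.terminal_test(state):
           return game.utility(state)
       v = -math.inf
       for a in game.actions(state):
           v = max(v, min_value(game.result(state, a), game))
       return v

   def min_value(state, game):
       if game.terminal_test(state):
           return game.utility(state)
       v = math.inf
       for a in game.actions(state):
           v = min(v, max_value(game.result(state, a), game))
       return v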
α–β pruning example
[Figure: the 2-ply tree from the minimax example, evaluated left to right. The first MIN node’s leaves 3, 12, 8 give it value 3, so the root’s value is at least 3. At the second MIN node the first leaf is 2, so that node’s value is at most 2 < 3 and its remaining leaves are pruned. The third MIN node’s leaves 14, 5, 2 must all be examined, giving value 2; the root’s minimax value is 3.]

Why is it called α–β?
[Figure: alternating MAX and MIN nodes along the current path, with a subtree of value V below.]
α is the best value (to MAX) found so far off the current path
If V is worse than α, MAX will avoid it ⇒ prune that branch
Define β similarly for MIN
The α–β algorithm

function Alpha-Beta-Decision(state) returns an action
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state, α, β) returns a utility value
   inputs: state, current state in game
           α, the value of the best alternative for max along the path to state
           β, the value of the best alternative for min along the path to state
   if Terminal-Test(state) then return Utility(state)
   v ← −∞
   for a, s in Successors(state) do
      v ← Max(v, Min-Value(s, α, β))
      if v ≥ β then return v
      α ← Max(α, v)
   return v

function Min-Value(state, α, β) returns a utility value
   same as Max-Value but with roles of α, β reversed

Properties of α–β
Pruning does not affect the final result
Good move ordering improves effectiveness of pruning
With “perfect ordering,” time complexity = O(b^(m/2)) ⇒ doubles solvable depth
A simple example of the value of reasoning about which computations are relevant (a form of metareasoning)
Unfortunately, 35^50 is still impossible!

Resource limits
Standard approach:
• Use Cutoff-Test instead of Terminal-Test
   e.g., depth limit (perhaps add quiescence search)
• Use Eval instead of Utility
   i.e., evaluation function that estimates desirability of position
Suppose we have 100 seconds and explore 10^4 nodes/second
⇒ 10^6 nodes per move ≈ 35^(8/2)
⇒ α–β reaches depth 8 ⇒ pretty good chess program

Evaluation functions
[Figure: two chess positions, one labelled “White slightly better” (Black to move) and one labelled “Black winning” (White to move).]
For chess, typically a linear weighted sum of features
   Eval(s) = w1 f1(s) + w2 f2(s) + . . . + wn fn(s)
e.g., w1 = 9 with f1(s) = (number of white queens) − (number of black queens), etc.

Digression: Exact values don’t matter
[Figure: a MAX root over two MIN nodes. With leaves (1, 4) and (2, 2) the MIN values are 1 and 2 and the root value is 2; after the monotonic transformation 1→1, 2→20, 4→400 the leaves are (1, 400) and (20, 20), the MIN values are 1 and 20, and the root value is 20, so the same move is chosen.]
Behaviour is preserved under any monotonic transformation of Eval
Only the order matters: payoff in deterministic games acts as an ordinal utility function

Deterministic games in practice
Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. Used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions.
Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses very sophisticated evaluation, and undisclosed methods for extending some lines of search up to 40 ply.
Othello: human champions refuse to compete against computers, who are too good.
Go: human champions refuse to compete against computers, who are too bad. In go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.
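Combining the α–β pseudocode with the Cutoff-Test and Eval ideas from the resource-limits slide gives a depth-limited searcher. This is a sketch under the same assumed game interface as before, with a hypothetical game.eval(state) standing in for an evaluation function such as the weighted feature sum above:

   import math

   def alpha_beta_decision(state, game, depth_limit):
       return max(game.actions(state),
                  key=lambda a: min_value(game.result(state, a), game,
                                          -math.inf, math.inf, depth_limit - 1))

   def max_value(state, game, alpha, beta, depth):
       if cutoff_test(state, game, depth):
           return evaluate(state, game)
       v = -math.inf
       for a in game.actions(state):
           v = max(v, min_value(game.result(state, a), game, alpha, beta, depth - 1))
           if v >= beta:          # MIN has a better alternative elsewhere: prune
               return v
           alpha = max(alpha, v)
       return v

   def min_value(state, game, alpha, beta, depth):
       # Same as max_value with the roles of alpha and beta reversed.
       if cutoff_test(state, game, depth):
           return evaluate(state, game)
       v = math.inf
       for a in game.actions(state):
           v = min(v, max_value(game.result(state, a), game, alpha, beta, depth - 1))
           if v <= alpha:         # MAX has a better alternative elsewhere: prune
               return v
           beta = min(beta, v)
       return v

   def cutoff_test(state, game, depth):
       # Cutoff-Test: stop at the depth limit or at a true terminal state.
       return depth <= 0 or game.terminal_test(state)

   def evaluate(state, game):
       # Eval estimates desirability of non-terminal positions.
       return game.utility(state) if game.terminal_test(state) else game.eval(state)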
Nondeterministic games: backgammon
[Figure: a backgammon board with points numbered 1–24 plus the bar positions 0 and 25.]

Nondeterministic games in general
In nondeterministic games, chance is introduced by dice, card-shuffling
Simplified example with coin-flipping:
[Figure: MAX at the root chooses between two CHANCE nodes (coin flips, probability 0.5 per branch) whose children are MIN nodes with values 2, 4, 0, −2. The chance nodes have expected values 0.5·2 + 0.5·4 = 3 and 0.5·0 + 0.5·(−2) = −1, so MAX chooses the left move.]

Algorithm for nondeterministic games
Expectiminimax gives perfect play
Just like Minimax, except we must also handle chance nodes:
   . . .
   if state is a Max node then
      return the highest ExpectiMinimax-Value of Successors(state)
   if state is a Min node then
      return the lowest ExpectiMinimax-Value of Successors(state)
   if state is a chance node then
      return average of ExpectiMinimax-Value of Successors(state)
   . . .

Nondeterministic games in practice
Dice rolls increase b: 21 possible rolls with 2 dice
Backgammon ≈ 20 legal moves (can be 6,000 with a 1-1 roll)
depth 4: 20 × (21 × 20)^3 ≈ 1.5 × 10^9 nodes
As depth increases, probability of reaching a given node shrinks
⇒ value of lookahead is diminished
α–β pruning is much less effective
TD-Gammon uses depth-2 search + very good Eval ≈ world-champion level

Digression: Exact values DO matter
[Figure: a MAX root over two DICE nodes with branch probabilities .9 and .1. With leaf values 2, 3, 1, 4 the dice nodes are worth .9·2 + .1·3 = 2.1 and .9·1 + .1·4 = 1.3, so MAX moves left; after the order-preserving transformation to 20, 30, 1, 400 they are worth 21 and 40.9, and MAX moves right.]
Behaviour is preserved only by positive linear transformation of Eval
Hence Eval should be proportional to the expected payoff

Games of imperfect information
E.g., card games, where opponent’s initial cards are unknown
Typically we can calculate a probability for each possible deal
Seems just like having one big dice roll at the beginning of the game
∗ Idea: compute the minimax value of each action in each deal, then choose the action with highest expected value over all deals
∗ Special case: if an action is optimal for all deals, it’s optimal.
∗ GIB, current best bridge program, approximates this idea by
   1) generating 100 deals consistent with bidding information
   2) picking the action that wins most tricks on average
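The expectiminimax rule is short enough to sketch directly. The node representation below (tuples tagged 'max', 'min', or 'chance') is assumed for illustration; the coin-flip example uses the MIN-node values from the figure as terminals for brevity:

   def expectiminimax(node):
       # A node is a number (terminal utility) or (kind, children), where kind
       # is 'max', 'min', or 'chance'; chance children are (probability, subtree)
       # pairs.
       if isinstance(node, (int, float)):
           return node
       kind, children = node
       if kind == 'max':
           return max(expectiminimax(c) for c in children)
       if kind == 'min':
           return min(expectiminimax(c) for c in children)
       # Chance node: probability-weighted average of successor values.
       return sum(p * expectiminimax(c) for p, c in children)

   # Coin-flip example: MAX chooses between chance nodes worth
   # 0.5*2 + 0.5*4 = 3 and 0.5*0 + 0.5*(-2) = -1.
   root = ('max', [('chance', [(0.5, 2), (0.5, 4)]),
                   ('chance', [(0.5, 0), (0.5, -2)])])
   print(expectiminimax(root))   # prints 3.0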
Example
Four-card bridge/whist/hearts hand, MAX to play first
[Figure: three game trees for the same hand. With one particular deal of the hidden cards, perfect play is worth 0 to MAX; with the alternative deal, perfect play is again worth 0; but when MAX does not know which deal holds, the best achievable value is −0.5. Averaging the deals’ minimax values therefore overstates what MAX can achieve.]

Commonsense example
Road A leads to a small heap of gold pieces
Road B leads to a fork:
   take the left fork and you’ll find a mound of jewels;
   take the right fork and you’ll be run over by a bus.

Road A leads to a small heap of gold pieces
Road B leads to a fork:
   take the left fork and you’ll be run over by a bus;
   take the right fork and you’ll find a mound of jewels.

Road A leads to a small heap of gold pieces
Road B leads to a fork:
   guess correctly and you’ll find a mound of jewels;
   guess incorrectly and you’ll be run over by a bus.
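The GIB-style procedure from the imperfect-information slide can be sketched as below; the helpers legal_actions, sample_consistent_deal, and minimax_value_in_deal are hypothetical names. The examples above show why its valuations can mislead: it prices each action as if the hidden cards will be known when the time comes to act.

   def average_over_deals(info_state, n_deals=100):
       # Sample deals consistent with what is known (e.g., bidding information),
       # score each action by its minimax value in each sampled deal, and pick
       # the action with the best average score.
       actions = legal_actions(info_state)              # hypothetical helper
       totals = {a: 0.0 for a in actions}
       for _ in range(n_deals):
           deal = sample_consistent_deal(info_state)    # hypothetical helper
           for a in actions:
               totals[a] += minimax_value_in_deal(deal, a)  # hypothetical helper
       return max(totals, key=lambda a: totals[a] / n_deals)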
Proper analysis
∗ Intuition that the value of an action is the average of its values in all actual states is WRONG
With partial observability, the value of an action depends on the information state or belief state the agent is in
Can generate and search a tree of information states
Leads to rational behaviors such as
♦ Acting to obtain information
♦ Signalling to one’s partner
♦ Acting randomly to minimize information disclosure

Summary
Games are fun to work on! (and dangerous)
They illustrate several important points about AI
♦ perfection is unattainable ⇒ must approximate
♦ good idea to think about what to think about
♦ uncertainty constrains the assignment of values to states
♦ optimal decisions depend on information state, not real state
Games are to AI as grand prix racing is to automobile design