
Structure and Interpretation of Computer Programs: lecture4webhand



6.001 Structure and Interpretation of Computer Programs. Copyright © 2004 by Massachusetts Institute of Technology.

6.001 Notes: Section 4.1

Slide 4.1.1
In this lecture, we are going to take a careful look at the kinds of procedures we can build. We will first go back to look very carefully at the substitution model, to see how the rules for evaluation help us determine the evolution of a process controlled by a procedure. We will then see how different kinds of process evolution can occur, with different demands in terms of computational resources. And we will see how to characterize those differences formally in terms of orders of growth of a process. Finally, we will explore examples of procedures from different classes of growth, helping you to begin to relate how choices in procedure design may affect performance of the actual use of the procedure.

Slide 4.1.2
Now that we have seen two different implementations of the same idea, with different behaviors, we need a way of more formally characterizing why that difference occurs. To do that, we are going to go back to our substitution model. Now we are going to think of our model as a set of rewrite rules, that is, as a set of rules that formally specify, given an expression of a particular form, how to rewrite it in a new form.

Slide 4.1.3
So elementary expressions in this viewpoint are just "left alone", that is, numbers, names of built-in procedures, and lambda expressions are left as is.
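As a minimal illustration (not from the slides themselves), each of these elementary expressions rewrites to itself:

    7                       ; a number is left alone
    +                       ; the name of a built-in procedure is left alone
    (lambda (x) (* x x))    ; a lambda expression is left as is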


Slide 4.1.4
If our expression is a name that we created ourselves, we will rewrite in place of it the value that was associated with that name during the definition.

Slide 4.1.5
For the special form if, we use the rule we just saw. We evaluate the predicate clause, and based on its value, we either rewrite the if expression by its consequent or its alternative.

Slide 4.1.6
And finally: combinations. We use the rule that we first evaluate the operator expression to get the procedure, and we evaluate the operands to get the set of arguments. If we have a primitive procedure, we are just going to "do the right thing". Otherwise, we are going to replace this entire expression with the body of the compound procedure, with the arguments substituted for their associated parameters.

Slide 4.1.7
Given that model, we can more formally capture the evolution of a process. We can talk about the order of growth of a process as these rewrite rules evolve. This measures how much of a particular resource a process is going to take as these rules evolve, where typically we measure either space or time as the resource. More formally, let n be a parameter that measures the size of the problem of interest. In the case of factorial, this would be the size of the input argument. We let R(n) denote the amount of resources we will need to solve a problem of this size, where, as noted, the resources are usually space and time. We are interested in characterizing R(n) for large values of n, that is, in the asymptotic limit. We say that R(n) has order of growth Θ(f(n)) if we can find a function f(n) that is roughly of the same order as the needed resource; the more formal definition, from the slide, is that there are constants k1 and k2 such that k1·f(n) <= R(n) <= k2·f(n) for large n.
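As a small worked example of these rules (illustrative, not from the slides), consider rewriting a combination whose operator is a lambda expression:

    ((lambda (x) (* x x)) (+ 1 2))   ; evaluate the operand: (+ 1 2) => 3
    ((lambda (x) (* x x)) 3)         ; substitute 3 for x in the body
    (* 3 3)                          ; apply the primitive *
    9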


Slide 4.1.8
In more common terms, this says that when measuring the amount of space we need, we want to find a function that measures the number of deferred operations, as a function of the size of the problem, up to a constant factor. For measuring the amount of time, we want to find a function that measures the number of basic or primitive steps that the rewrite rules go through, again as a function of the size of the problem.

Slide 4.1.9
So let's use our rewrite rule idea, together with this notion of measuring the amount of resource we need as an order of growth in the size of the problem. Here is our recursive factorial procedure ...

Slide 4.1.10
... and here is a partial trace of fact using those rewrite rules. We start with (fact 4). That reduces to evaluating an if expression. Since the predicate is not true, this reduces to evaluating the alternative statement, which is a combination of a multiplication and another evaluation of fact. Notice how (fact 4) reduces to a deferred operation and an evaluation of (fact 3), and this further reduces to another deferred operation and a subsequent call to fact. Notice the shape of this process: it grows out with a set of deferred operations until it reaches a base case, and then it contracts back in, completing each deferred operation. Using this, we can see that the amount of space grows linearly with the size of the argument. That is, with each increase in the argument size, I add a constant amount of space requirement. In terms of the number of operations, we can see that we basically need twice as many steps as the size of the problem, one to expand out, and one to reduce back down. This is also a linear growth, since the constant 2 does not change with problem size.
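Here are the procedure and the partial trace, reconstructed from the slide (the exact layout on the slide may differ):

    (define fact
      (lambda (n)
        (if (= n 1)
            1
            (* n (fact (- n 1))))))

    (fact 4)
    (if (= 4 1) 1 (* 4 (fact (- 4 1))))
    (* 4 (fact 3))               ; one deferred multiplication
    (* 4 (* 3 (fact 2)))         ; expansion: deferred operations accumulate
    (* 4 (* 3 (* 2 (fact 1))))
    (* 4 (* 3 (* 2 1)))          ; base case reached
    (* 4 (* 3 2))                ; contraction: deferred operations complete
    (* 4 6)
    24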


Slide 4.1.11
And let's compare that to our iterative factorial. Here is our code again, and here is a partial trace of the rewrite evolution of that process. Here we see the different shape of the process, and our order of growth analysis captures that difference. In particular, there are no deferred operations, and you can see that the maximum amount of space I need at any stage, that is, the maximum amount of space used by any rewrite, is independent of the size of the problem. We say that it is constant, as it does not grow with n. In terms of time, there is basically one operation for each increment in the argument, so this is again a linear order of growth.

Slide 4.1.12
So we can formally capture this, as shown. Fact has linear growth in space, written Θ(n), because the maximum amount of space needed by any rewrite stage is a linear multiple of the size of the argument. It also has linear growth in time, because as we saw it takes a number of basic steps that is also a linear multiple of the size of the argument.

Slide 4.1.13
On the other hand, iterative-fact has no deferred operations, and is constant in space, written Θ(1), while as we argued, the time order of growth is linear. Notice that the fact that the recursive version took 2n steps and the iterative version took n steps doesn't matter in terms of order of growth. Both are said to be linear. Thus we see that we can formally characterize the difference between these two processes, which have different shapes of evolution.
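The iterative version and its trace, reconstructed from the slide (formatting approximate):

    (define ifact-helper
      (lambda (product count n)
        (if (> count n)
            product
            (ifact-helper (* product count) (+ count 1) n))))

    (define ifact
      (lambda (n)
        (ifact-helper 1 1 n)))

    (ifact 4)
    (ifact-helper 1 1 4)
    (ifact-helper 1 2 4)    ; (* 1 1) and (+ 1 1)
    (ifact-helper 2 3 4)
    (ifact-helper 6 4 4)
    (ifact-helper 24 5 4)   ; count now exceeds n, so return product
    24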


Slide 4.1.14
So why have we done this? The primary goal is to allow you to start to recognize different kinds of behaviors, and the associated procedure forms that give rise to those behaviors. This will start to allow you to work backwards, in designing a procedure, by enabling you to visualize the consequences of performing a computation in different ways.

6.001 Notes: Section 4.2

Slide 4.2.1
Having seen two different kinds of processes, one linear, and one constant, we want to fill out our repertoire of processes, by looking at other kinds of processes. The next one is an example of an exponential process, and deals with a classic function called Fibonacci. Its definition is that if its argument is 0, its value is 0; if its argument is 1, its value is 1; and for all other positive integer arguments, its value is the sum of its values for the two preceding arguments. We would like to write a procedure to compute Fibonacci, and in particular see how it gives rise to a different kind of behavior.

Slide 4.2.2
To solve this problem, let's use our tool of wishful thinking. Here, that wishful thinking says, let's assume that given an argument n, we know how to solve Fibonacci for any smaller sized argument. Using that idea, we can then work out a solution to the problem. With this in hand, it is clear that the solution to the general problem is just to solve two smaller sized problems, then just add the results together. Note that in this case we are using wishful thinking twice, not once, as in our previous examples. Here is a procedure that captures this idea (shown below). First, we introduce a new expression, called a cond expression. Cond uses the following rules of evaluation. The cond consists of a set of clauses, each of which has within it a predicate clause and one or more subsequent expressions. Cond proceeds by first evaluating the predicate of the first clause, in this case (= n 0). If it is true, then we evaluate in turn each of the other expressions in this clause, returning the value of the last one as the value of the whole cond. In the case of the first clause of this cond, that is the expression 0. If the predicate of the first clause is false, we move to the next clause of the cond and repeat the process. This continues for however many clauses are contained in the cond until either a true predicate is reached, or we reach a clause with the special keyword else. In this latter case, that predicate is treated as true, and the subsequent expressions are evaluated. Given that form for cond, we can see how our fibonacci procedure captures the idea we expressed. We keep recursively solving smaller sized problems until we get to a base case. Notice that here we have two base cases, not one. Also notice that while this is a recursive procedure, it is different from our earlier ones. Here, there are two recursive calls to the procedure in the body, rather than just one. Our question is whether this leads to a different kind of behavior, or a different order of growth.
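The procedure, reconstructed from the slide:

    (define fib
      (lambda (n)
        (cond ((= n 0) 0)
              ((= n 1) 1)
              (else (+ (fib (- n 1))
                       (fib (- n 2)))))))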


Slide 4.2.3
This does give rise to a different kind of behavior, which we can easily see with the illustrated diagram. To solve a problem of size 4, notice that we have to solve two problems, one of size 3 and one of size 2, and each of these requires solving two smaller problems, and so on. This gives rise to a kind of tree of things we have to do, and each recursive call gives rise to two subproblems of smaller size. This leads to a different order of growth.

Slide 4.2.4
To measure the order of growth, let's let t(n) denote the number of time steps we need to solve a problem of size n. From our tree, we see that to do this, we need to solve a problem of size n-1, and a problem of size n-2. We could actually work out a detailed analysis of this relationship, but for our purposes, we can approximate this as roughly the same as solving two problems of size n-2. Expanding a step further, this is roughly the same as solving 4 problems of size n-4, and roughly the same as solving 8 problems of size n-6. A little math shows that in general this reduces to 2 to the power of n/2 steps. This is an example of an exponential order of growth, and this is very different from what we saw earlier. To convince yourself of this, assume that each step takes one second, and see how much time it would take for an exponential process as compared to a linear one, as n gets large. In terms of space, our tree shows us that we have basically one deferred operation per step, or in other words, the maximum depth of that tree is linear in the size of the problem, and the maximum depth exactly captures the maximum number of deferred operations.
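In symbols, the approximation just described (a rough estimate in the lecture's spirit, not an exact solution of the recurrence):

    t(n) = t(n-1) + t(n-2)
         ≈ 2 t(n-2)         ; treat both subproblems as size n-2
         ≈ 4 t(n-4)
         ≈ 8 t(n-6)
         ≈ 2^(n/2) t(0)     ; so time is Θ(2^(n/2)), while space
                            ; (maximum tree depth) is Θ(n)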


6.001 Notes: Section 4.3

Slide 4.3.1
Let's take another quick look at how we can create procedures with different orders of growth to compute the same function. As a specific example, suppose we want to compute exponentials, such as a raised to the b power, but to do so only using the simpler operations of multiplication and addition. How might we use the tools we have been developing to accomplish this?

Slide 4.3.2
So, recall the stages we used to solve problems like this: we will use some wishful thinking to assume that solutions to simpler versions of the problem exist; we will then decompose the problem into a simpler version of the same problem, plus some other simple operations, and we will use this to construct a solution to the more general problem; and finally, we will determine the smallest sized subproblem into which we want to do decomposition. Let's look at using these tools on the problem of exponentiation.

Slide 4.3.3
Wishful thinking is our tool for using induction. It says, let's assume that some procedure my-expt exists, so that we can use it, but that it only solves smaller versions of the same problem.

Slide 4.3.4
Given that assumption, we can then turn to the tricky part, which is determining how to decompose the problem into a simpler version of the same problem. Here is one method: notice that a to the power b mathematically is just b products of a. But this we can also group as a times b-1 products of a, and that latter we recognize as a smaller version of the same exponentiation problem. Thus, we can rewrite a to the bth power as a times a to the b-1st power. In this way we reduce the problem to a simpler version of the same problem, together with some simple operations, in this case, an additional multiplication.
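In equation form, the decomposition just described:

    a^b = a * a * ... * a    (b factors of a)
        = a * a^(b-1)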


Slide 4.3.5
Given that idea, here is the start of our procedure for my-expt. Notice how it recursively uses my-expt to solve the subproblem, then multiplies the result by a to get the full solution.

Slide 4.3.6
Of course, as stated, that procedure will fail, as it will recurse indefinitely. To stop unwinding into simpler versions of the same problem, we need to find a smallest size subproblem. Here, that is easy. Anything to the 0th power is just 1!

Slide 4.3.7
And thus we can add our base case to our procedure, creating the same kind of procedure we saw earlier for factorial. Note how the base case will stop unwinding the computation into simpler cases.
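The finished procedure, reconstructed from the slides (the recursive case appears on the slide; the exact form of the base-case test is an assumption):

    (define my-expt
      (lambda (a b)
        (if (= b 0)
            1                           ; base case: a^0 = 1
            (* a (my-expt a (- b 1))))))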


Slide 4.3.8
You could run the substitution model on this procedure to both verify that it works, and to determine its growth pattern. However, the form is very familiar, and we can thus deduce by comparison to factorial that this procedure has linear growth in both space and time. The time is easy to see: there is one additional step for each increment in the size of the argument. And for space, we see that there is one deferred operation for each recursive call of the procedure, so we will stack up a linear number of such deferred operations until we get down to the base case, at which point we can start completing the deferred multiplications and gathering up the answer.

Slide 4.3.9
As we saw earlier, we expect there are other ways of creating procedures to solve the same problem. With factorial, we used the idea of a table of state variables, with update rules for changing the values of those variables. We should be able to do the same thing here.

Slide 4.3.10
So recall the idea of our table. We set up one column for each piece of information that we will need to complete a step in the computation, and we use one row for each such step. The idea here is pretty straightforward. We know that a to the bth power is just b successive multiplications by a. So we can just do the multiplication, keep track of the product accumulated so far, and how many multiplications we have left. Thus after one step, we will have a product of a and b-1 things left to do.
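The table the next few slides build up, reconstructed from the handout (the initial product of 1 is an assumption, matching the base case a^0 = 1):

    product   counter
       1         b
       a        b-1
      a^2       b-2
      a^3       b-3
      ...       ...
      a^b        0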


Slide 4.3.11
After the next step, we will multiply our current product by a and keep track of that new result, plus the fact that we have one less thing to do.

Slide 4.3.12
And this process we can continue until ...

Slide 4.3.13
... we reach a point where we have no further multiplies to do.

Slide 4.3.14
Now, what are the stages of this computation? To get the next value for the product, we take the current value of the product and the value of a, multiply them together, and keep the result. That updates one of the state variables. To get the next value of counter, we simply subtract 1, since we have done one more of the multiplications. That updates the other state variable.
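This excerpt ends before the lecture shows its own iterative procedure, but here is a sketch of what these update rules suggest, with hypothetical names following the earlier ifact-helper pattern:

    (define my-expt-helper
      (lambda (product counter a)
        (if (= counter 0)
            product
            (my-expt-helper (* product a) (- counter 1) a))))

    (define my-expt-iter
      (lambda (a b)
        (my-expt-helper 1 b a)))

    ; (my-expt-iter 2 5) => 32. No deferred operations: each rewrite is a
    ; single call with updated state variables, so space is constant and
    ; time is linear in b.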
