Quasi-Newton methods like dfpmin work well with the approximate line
minimization done by lnsrch. The routines powell (§10.5) and frprmn (§10.6),
however, need more accurate line minimization, which is carried out by the
routine linmin.

Advanced Implementations of Variable Metric Methods

Although rare, it can conceivably happen that roundoff errors cause the matrix
H_i to become nearly singular or non-positive-definite. This can be serious,
because the supposed search directions might then not lead downhill, and
because nearly singular H_i's tend to give subsequent H_i's that are also
nearly singular.

There is a simple fix for this rare problem, the same as was mentioned in
§10.4: In case of any doubt, you should restart the algorithm at the claimed
minimum point, and see if it goes anywhere. Simple, but not very elegant.
Modern implementations of variable metric methods deal with the problem in a
more sophisticated way.

Instead of building up an approximation to A^{-1}, it is possible to build up
an approximation of A itself. Then, instead of calculating the left-hand side
of (10.7.4) directly, one solves the set of linear equations

    A · (x − x_i) = −∇f(x_i)                                        (10.7.11)

At first glance this seems like a bad idea, since solving (10.7.11) is a
process of order N^3, and anyway, how does this help the roundoff problem? The
trick is not to store A but rather a triangular decomposition of A, its
Cholesky decomposition (cf. §2.9). The updating formula used for the Cholesky
decomposition of A is of order N^2 and can be arranged to guarantee that the
matrix remains positive definite and nonsingular, even in the presence of
finite roundoff. This method is due to Gill and Murray [1,2].
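To make the linear algebra concrete, here is a minimal, self-contained sketch
(not the Gill–Murray update itself, and not code from this book) of taking a
step of the form (10.7.11) once A is held as a Cholesky factor L with
A = L·L^T: one forward and one back substitution, each of order N^2. The
function names chol_factor and chol_solve, the flat row-major storage, and the
small test system are illustrative assumptions only.

#include <math.h>
#include <stdio.h>

/* In-place Cholesky factorization of the symmetric positive-definite n-by-n
   matrix a (row-major, entry (i,j) stored at a[i*n+j]); on exit the lower
   triangle holds L with A = L L^T.  Returns 0 on success, -1 if A is not
   positive definite (the situation the restart trick above guards against). */
int chol_factor(double *a, int n)
{
    for (int j = 0; j < n; j++) {
        double s = a[j*n+j];
        for (int k = 0; k < j; k++) s -= a[j*n+k]*a[j*n+k];
        if (s <= 0.0) return -1;
        a[j*n+j] = sqrt(s);
        for (int i = j+1; i < n; i++) {
            s = a[i*n+j];
            for (int k = 0; k < j; k++) s -= a[i*n+k]*a[j*n+k];
            a[i*n+j] = s/a[j*n+j];
        }
    }
    return 0;
}

/* Given the factor L, solve A x = b (that is, L L^T x = b) by forward and back
   substitution; with b = -grad f(x_i) this is the order-N^2 part of taking the
   quasi-Newton step of equation (10.7.11). */
void chol_solve(const double *a, int n, const double *b, double *x)
{
    for (int i = 0; i < n; i++) {            /* forward: L y = b   */
        double s = b[i];
        for (int k = 0; k < i; k++) s -= a[i*n+k]*x[k];
        x[i] = s/a[i*n+i];
    }
    for (int i = n-1; i >= 0; i--) {         /* backward: L^T x = y */
        double s = x[i];
        for (int k = i+1; k < n; k++) s -= a[k*n+i]*x[k];
        x[i] = s/a[i*n+i];
    }
}

int main(void)
{
    double A[4] = { 4.0, 1.0,                /* a 2-by-2 positive-definite test */
                    1.0, 3.0 };
    double minus_grad[2] = { 1.0, 2.0 }, step[2];
    if (chol_factor(A, 2) == 0) {
        chol_solve(A, 2, minus_grad, step);
        printf("step = (%g, %g)\n", step[0], step[1]);   /* (1/11, 7/11) */
    }
    return 0;
}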
CITED REFERENCES AND FURTHER READING:

Dennis, J.E., and Schnabel, R.B. 1983, Numerical Methods for Unconstrained
    Optimization and Nonlinear Equations (Englewood Cliffs, NJ: Prentice-Hall). [1]
Jacobs, D.A.H. (ed.) 1977, The State of the Art in Numerical Analysis (London:
    Academic Press), Chapter III.1, §§3–6 (by K.W. Brodlie). [2]
Polak, E. 1971, Computational Methods in Optimization (New York: Academic
    Press), pp. 56ff. [3]
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition
    (Washington: Mathematical Association of America), pp. 467–468.

10.8 Linear Programming and the Simplex Method

The subject of linear programming, sometimes called linear optimization,
concerns itself with the following problem: For N independent variables
x_1, ..., x_N, maximize the function

    z = a_{01}x_1 + a_{02}x_2 + ··· + a_{0N}x_N                      (10.8.1)

subject to the primary constraints

    x_1 ≥ 0,  x_2 ≥ 0,  ...,  x_N ≥ 0                                (10.8.2)

and simultaneously subject to M = m_1 + m_2 + m_3 additional constraints, m_1
of them of the form

    a_{i1}x_1 + a_{i2}x_2 + ··· + a_{iN}x_N ≤ b_i   (b_i ≥ 0)
                                              i = 1, ..., m_1        (10.8.3)

m_2 of them of the form

    a_{j1}x_1 + a_{j2}x_2 + ··· + a_{jN}x_N ≥ b_j ≥ 0
                                  j = m_1 + 1, ..., m_1 + m_2        (10.8.4)

and m_3 of them of the form

    a_{k1}x_1 + a_{k2}x_2 + ··· + a_{kN}x_N = b_k ≥ 0
                      k = m_1 + m_2 + 1, ..., m_1 + m_2 + m_3        (10.8.5)

The various a_{ij}'s can have either sign, or be zero. The fact that the b's
must all be nonnegative (as indicated by the final inequality in the above
three equations) is a matter of convention only, since you can multiply any
contrary inequality by −1. There is no particular significance in the number
of constraints M being less than, equal to, or greater than the number of
unknowns N.

A set of values x_1 ... x_N that satisfies the constraints (10.8.2)–(10.8.5)
is called a feasible vector. The function that we are trying to maximize is
called the objective function. The feasible vector that maximizes the
objective function is called the optimal feasible vector. An optimal feasible
vector can fail to exist for two distinct reasons: (i) there are no feasible
vectors, i.e., the given constraints are incompatible, or (ii) there is no
maximum, i.e., there is a direction in N-space where one or more of the
variables can be taken to infinity while still satisfying the constraints,
giving an unbounded value for the objective function.

As you see, the subject of linear programming is surrounded by notational and
terminological thickets. Both of these thorny defenses are lovingly cultivated
by a coterie of stern acolytes who have devoted themselves to the field.
Actually, the basic ideas of linear programming are quite simple. Avoiding the
shrubbery, we want to teach you the basics by means of a couple of specific
examples; it should then be quite obvious how to generalize.

Why is linear programming so important? (i) Because "nonnegativity" is the
usual constraint on any variable x_i that represents the tangible amount of
some physical commodity, like guns, butter, dollars, units of vitamin E, food
calories, kilowatt-hours, mass, etc. Hence equation (10.8.2). (ii) Because one
is often interested in additive (linear) limitations or bounds imposed by man
or nature: minimum nutritional requirement, maximum affordable cost, maximum
on available labor or capital, minimum tolerable level of voter approval, etc.
Hence equations (10.8.3)–(10.8.5). (iii) Because the function that one wants
to optimize may be linear, or else may at least be approximated by a linear
function, since that is the problem that linear programming can solve. Hence
equation (10.8.1). For a short, semipopular survey of linear programming
applications, see Bland [1].
Here is a specific example of a problem in linear programming, which has
N = 4, m_1 = 2, m_2 = m_3 = 1, hence M = 4:

    Maximize  z = x_1 + x_2 + 3x_3 − (1/2)x_4                        (10.8.6)
with all the x's nonnegative and also with

    x_1 + 2x_3 ≤ 740
    2x_2 − 7x_4 ≤ 0
    x_2 − x_3 + 2x_4 ≥ 1/2                                           (10.8.7)
    x_1 + x_2 + x_3 + x_4 = 9

The answer turns out to be (to 2 decimals) x_1 = 0, x_2 = 3.33, x_3 = 4.73,
x_4 = 0.95. In the rest of this section we will learn how this answer is
obtained. Figure 10.8.1 summarizes some of the terminology thus far.

[Figure 10.8.1. Basic concepts of linear programming. The case of only two
independent variables, x_1, x_2, is shown. The linear function z, to be
maximized, is represented by its contour lines. Primary constraints require
x_1 and x_2 to be positive. Additional constraints may restrict the solution
to regions (inequality constraints) or to surfaces of lower dimensionality
(equality constraints). Feasible vectors satisfy all constraints. Feasible
basic vectors also lie on the boundary of the allowed region. The simplex
method steps among feasible basic vectors until the optimal feasible vector
is found.]
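As a quick sanity check on the quoted answer, the following sketch (an
illustration only, not part of any solution method) simply substitutes the
rounded values into the objective (10.8.6) and the constraints (10.8.2) and
(10.8.7); the tolerance TOL is an arbitrary allowance for the two-decimal
rounding.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double TOL = 0.02;                  /* absorbs the 2-decimal rounding */
    double x1 = 0.0, x2 = 3.33, x3 = 4.73, x4 = 0.95;

    double z = x1 + x2 + 3.0*x3 - 0.5*x4;     /* objective (10.8.6) */

    int feasible =
           x1 >= -TOL && x2 >= -TOL && x3 >= -TOL && x4 >= -TOL  /* (10.8.2) */
        && x1 + 2.0*x3 <= 740.0 + TOL                   /* first of (10.8.7)  */
        && 2.0*x2 - 7.0*x4 <= 0.0 + TOL                 /* second of (10.8.7) */
        && x2 - x3 + 2.0*x4 >= 0.5 - TOL                /* third of (10.8.7)  */
        && fabs(x1 + x2 + x3 + x4 - 9.0) <= TOL;        /* fourth of (10.8.7) */

    printf("z = %.4f, feasible: %s\n", z, feasible ? "yes" : "no");
    return 0;
}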
Fundamental Theorem of Linear Optimization

Imagine that we start with a full N-dimensional space of candidate vectors.
Then (in mind's eye, at least) we carve away the regions that are eliminated
in turn by each imposed constraint. Since the constraints are linear, every
boundary introduced by this process is a plane, or rather hyperplane. Equality
constraints of the form (10.8.5) force the feasible region onto hyperplanes of
smaller dimension, while inequalities simply divide the then-feasible region
into allowed and nonallowed pieces.

When all the constraints are imposed, either we are left with some feasible
region or else there are no feasible vectors. Since the feasible region is
bounded by hyperplanes, it is geometrically a kind of convex polyhedron or
simplex (cf. §10.4). If there is a feasible region, can the optimal feasible
vector be somewhere in its interior, away from the boundaries? No, because the
objective function is linear. This means that it always has a nonzero vector
gradient. This, in turn, means that we could always increase the objective
function by running up the gradient until we hit a boundary wall.

The boundary of any geometrical region has one less dimension than its
interior. Therefore, we can now run up the gradient projected into the
boundary wall until we reach an edge of that wall. We can then run up that
edge, and so on, down through whatever number of dimensions, until we finally
arrive at a point, a vertex of the original simplex. Since this point has all
N of its coordinates defined, it must be the solution of N simultaneous
equalities drawn from the original set of equalities and inequalities
(10.8.2)–(10.8.5).

Points that are feasible vectors and that satisfy N of the original
constraints as equalities are termed feasible basic vectors. If N > M, then a
feasible basic vector has at least N − M of its components equal to zero,
since at least that many of the constraints (10.8.2) will be needed to make up
the total of N. Put the other way, at most M components of a feasible basic
vector are nonzero. In the example (10.8.6)–(10.8.7), you can check that the
solution as given satisfies as equalities the last three constraints of
(10.8.7) and the constraint x_1 ≥ 0, for the required total of 4.

Put together the two preceding paragraphs and you have the Fundamental Theorem
of Linear Optimization: If an optimal feasible vector exists, then there is a
feasible basic vector that is optimal. (Didn't we warn you about the
terminological thicket?)

The importance of the fundamental theorem is that it reduces the optimization
problem to a "combinatorial" problem, that of determining which N constraints
(out of the M + N constraints in 10.8.2–10.8.5) should be satisfied by the
optimal feasible vector. We have only to keep trying different combinations,
and computing the objective function for each trial, until we find the best.
Doing this blindly would take halfway to forever.
The simplex method, first published by Dantzig in 1948 (see [2]), is a way of
organizing the procedure so that (i) a series of combinations is tried for
which the objective function increases at each step, and (ii) the optimal
feasible vector is reached after a number of iterations that is almost always
no larger than of order M or N, whichever is larger. An interesting
mathematical sidelight is that this second property, although known
empirically ever since the simplex method was devised, was not proved to be
true until the 1982 work of Stephen Smale. (For a contemporary account,
see [3].)

Simplex Method for a Restricted Normal Form

A linear programming problem is said to be in normal form if it has no
constraints in the form (10.8.3) or (10.8.4), but rather only equality
constraints of the form (10.8.5) and nonnegativity constraints of the form
(10.8.2).
For our purposes it will be useful to consider an even more restricted set of
cases, with this additional property: Each equality constraint of the form
(10.8.5) must have at least one variable that has a positive coefficient and
that appears uniquely in that one constraint only. We can then choose one such
variable in each constraint equation, and solve that constraint equation for
it. The variables thus chosen are called left-hand variables or basic
variables, and there are exactly M (= m_3) of them. The remaining N − M
variables are called right-hand variables or nonbasic variables. Obviously
this restricted normal form can be achieved only in the case M ≤ N, so that is
the case that we will consider.

You may be thinking that our restricted normal form is so specialized that it
is unlikely to include the linear programming problem that you wish to solve.
Not at all! We will presently show how any linear programming problem can be
transformed into restricted normal form. Therefore bear with us and learn how
to apply the simplex method to a restricted normal form.

Here is an example of a problem in restricted normal form:

    Maximize  z = 2x_2 − 4x_3                                        (10.8.8)

with x_1, x_2, x_3, and x_4 all nonnegative and also with

    x_1 = 2 − 6x_2 + x_3
    x_4 = 8 + 3x_2 − 4x_3                                            (10.8.9)

This example has N = 4, M = 2; the left-hand variables are x_1 and x_4; the
right-hand variables are x_2 and x_3. The objective function (10.8.8) is
written so as to depend only on right-hand variables; note, however, that this
is not an actual restriction on objective functions in restricted normal form,
since any left-hand variables appearing in the objective function could be
eliminated algebraically by use of (10.8.9) or its analogs.

For any problem in restricted normal form, we can instantly read off a
feasible basic vector (although not necessarily the optimal feasible basic
vector). Simply set all right-hand variables equal to zero, and equation
(10.8.9) then gives the values of the left-hand variables for which the
constraints are satisfied. The idea of the simplex method is to proceed by a
series of exchanges. In each exchange, a right-hand variable and a left-hand
variable change places. At each stage we maintain a problem in restricted
normal form that is equivalent to the original problem.

It is notationally convenient to record the information content of equations
(10.8.8) and (10.8.9) in a so-called tableau, as follows:

                   x_2    x_3
      z      0      2     −4
      x_1    2     −6      1
      x_4    8      3     −4                                        (10.8.10)

You should study (10.8.10) to be sure that you understand where each entry
comes from, and how to translate back and forth between the tableau and
equation formats of a problem in restricted normal form.
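The following small sketch shows one way this read-off might be done in code:
hold tableau (10.8.10) as an ordinary C array, set the right-hand variables to
zero, and take the left-hand variables from the constant column. The layout
(row 0 the z-row, column 0 the constant column) and the names tab, left, and
right are illustrative assumptions, not the data structure of the routine
given later in this section.

#include <stdio.h>

int main(void)
{
    double tab[3][3] = {
        /*           const    x2     x3  */
        /* z   */ {   0.0,    2.0,  -4.0 },
        /* x1  */ {   2.0,   -6.0,   1.0 },
        /* x4  */ {   8.0,    3.0,  -4.0 }
    };
    const char *left[]  = { "z", "x1", "x4" };    /* row labels    */
    const char *right[] = { "x2", "x3" };         /* column labels */

    /* Feasible basic vector: every right-hand variable is zero, and each
       left-hand variable equals its constant-column entry. */
    for (int j = 0; j < 2; j++)
        printf("%s = 0\n", right[j]);
    for (int i = 1; i < 3; i++)
        printf("%s = %g\n", left[i], tab[i][0]);
    printf("objective z = %g\n", tab[0][0]);
    return 0;
}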
The first step in the simplex method is to examine the top row of the tableau,
which we will call the "z-row." Look at the entries in columns labeled by
right-hand variables (we will call these "right-columns"). We want to imagine
in turn the effect of increasing each right-hand variable from its present
value of zero, while leaving all the other right-hand variables at zero. Will
the objective function increase or decrease? The answer is given by the sign
of the entry in the z-row. Since we want to increase the objective function,
only right-columns having positive z-row entries are of interest. In (10.8.10)
there is only one such column, whose z-row entry is 2.

The second step is to examine the column entries below each z-row entry that
was selected by step one. We want to ask how much we can increase the
right-hand variable before one of the left-hand variables is driven negative,
which is not allowed. If the tableau element at the intersection of the
right-hand column and the left-hand variable's row is positive, then it poses
no restriction: the corresponding left-hand variable will just be driven more
and more positive. If all the entries in any right-hand column are positive,
then there is no bound on the objective function and (having said so) we are
done with the problem.

If one or more entries below a positive z-row entry are negative, then we have
to figure out which such entry first limits the increase of that column's
right-hand variable. Evidently the limiting increase is given by dividing the
element in the right-hand column (which is called the pivot element) into the
element in the "constant column" (leftmost column) of the pivot element's row.
A value that is small in magnitude is most restrictive. The increase in the
objective function for this choice of pivot element is then that value
multiplied by the z-row entry of that column. We repeat this procedure on all
possible right-hand columns to find the pivot element with the largest such
increase. That completes our "choice of a pivot element."

In the above example, the only positive z-row entry is 2. There is only one
negative entry below it, namely −6, so this is the pivot element. Its
constant-column entry is 2. This pivot will therefore allow x_2 to be
increased by 2 ÷ |−6| = 1/3, which results in an increase of the objective
function by an amount (2 × 2) ÷ |−6| = 2/3.
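Here is a hedged sketch of these first two steps applied to tableau (10.8.10).
It follows the prose above literally (scan the z-row for positive entries,
find the most restrictive negative entry below each, and keep the column with
the largest objective gain); the zero-based indexing and the names are
assumptions of this fragment and differ from the bookkeeping of the simplx
routine given later.

#include <stdio.h>

int main(void)
{
    enum { ROWS = 3, COLS = 3 };
    double tab[ROWS][COLS] = {
        { 0.0,  2.0, -4.0 },          /* z-row of (10.8.10) */
        { 2.0, -6.0,  1.0 },          /* x1-row             */
        { 8.0,  3.0, -4.0 }           /* x4-row             */
    };
    int piv_row = 0, piv_col = 0;
    double best_gain = 0.0;

    for (int j = 1; j < COLS; j++) {
        if (tab[0][j] <= 0.0) continue;      /* step one: positive z-row entry   */
        int row = 0;
        double limit = 0.0;
        for (int i = 1; i < ROWS; i++) {     /* step two: most restrictive row   */
            if (tab[i][j] >= 0.0) continue;  /* positive entries: no restriction */
            double allowed = tab[i][0] / (-tab[i][j]);
            if (row == 0 || allowed < limit) { row = i; limit = allowed; }
        }
        if (row == 0) {                      /* nothing limits this column       */
            printf("objective unbounded along column %d\n", j);
            return 0;
        }
        double gain = limit * tab[0][j];     /* objective increase for this column */
        if (gain > best_gain) { best_gain = gain; piv_row = row; piv_col = j; }
    }
    printf("pivot element tab[%d][%d] = %g, objective gain = %g\n",
           piv_row, piv_col, tab[piv_row][piv_col], best_gain);
    return 0;
}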
The third step is to do the increase of the selected right-hand variable, thus
making it a left-hand variable; and simultaneously to modify the left-hand
variables, reducing the pivot-row element to zero and thus making it a
right-hand variable.

For our above example let's do this first by hand: We begin by solving the
pivot-row equation for the new left-hand variable x_2 in favor of the old one
x_1, namely

    x_1 = 2 − 6x_2 + x_3   →   x_2 = 1/3 − (1/6)x_1 + (1/6)x_3      (10.8.11)

We then substitute this into the old z-row,

    z = 2x_2 − 4x_3 = 2[1/3 − (1/6)x_1 + (1/6)x_3] − 4x_3
      = 2/3 − (1/3)x_1 − (11/3)x_3                                   (10.8.12)

and into all other left-variable rows, in this case only x_4,

    x_4 = 8 + 3[1/3 − (1/6)x_1 + (1/6)x_3] − 4x_3
        = 9 − (1/2)x_1 − (7/2)x_3                                    (10.8.13)
Equations (10.8.11)–(10.8.13) form the new tableau

                   x_1     x_3
      z     2/3    −1/3   −11/3
      x_2   1/3    −1/6     1/6
      x_4    9     −1/2    −7/2                                     (10.8.14)

The fourth step is to go back and repeat the first step, looking for another
possible increase of the objective function. We do this as many times as
possible, that is, until all the right-hand entries in the z-row are negative,
signaling that no further increase is possible. In the present example, this
already occurs in (10.8.14), so we are done.

The answer can now be read from the constant column of the final tableau. In
(10.8.14) we see that the objective function is maximized to a value of 2/3
for the solution vector x_2 = 1/3, x_4 = 9, x_1 = x_3 = 0.

Now look back over the procedure that led from (10.8.10) to (10.8.14). You
will find that it could be summarized entirely in tableau format as a series
of prescribed elementary matrix operations:

  • Locate the pivot element and save it.
  • Save the whole pivot column.
  • Replace each row, except the pivot row, by that linear combination of
    itself and the pivot row which makes its pivot-column entry zero.
  • Divide the pivot row by the negative of the pivot.
  • Replace the pivot element by the reciprocal of its saved value.
  • Replace the rest of the pivot column by its saved values divided by the
    saved pivot element.

This is the sequence of operations actually performed by a linear programming
routine, such as the one that we will presently give.
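The six operations can be collected into a tiny stand-alone function. The
sketch below (with an assumed fixed 3-by-3 tableau, zero-based indexing, and
names of my choosing, and none of the pivot-selection or bookkeeping of the
simplx routine given later) applies them to (10.8.10) with the pivot chosen
earlier and prints the entries of (10.8.14).

#include <stdio.h>

#define ROWS 3
#define COLS 3

/* One exchange of a right-hand and a left-hand variable, performed exactly as
   the six bulleted operations above; p and q index the pivot row and column. */
void exchange(double tab[ROWS][COLS], int p, int q)
{
    double piv = tab[p][q];                      /* locate the pivot and save it */
    double col[ROWS];
    for (int i = 0; i < ROWS; i++) col[i] = tab[i][q];  /* save the pivot column */

    for (int i = 0; i < ROWS; i++) {             /* zero the other pivot-column   */
        if (i == p) continue;                    /*   entries by row combinations */
        double factor = col[i] / piv;
        for (int j = 0; j < COLS; j++) tab[i][j] -= factor * tab[p][j];
    }
    for (int j = 0; j < COLS; j++) tab[p][j] /= -piv;  /* pivot row / (-pivot)   */
    tab[p][q] = 1.0 / piv;                       /* pivot element <- reciprocal  */
    for (int i = 0; i < ROWS; i++)               /* rest of pivot column <-      */
        if (i != p) tab[i][q] = col[i] / piv;    /*   saved values / saved pivot */
}

int main(void)
{
    double tab[ROWS][COLS] = {                   /* tableau (10.8.10)            */
        { 0.0,  2.0, -4.0 },
        { 2.0, -6.0,  1.0 },
        { 8.0,  3.0, -4.0 }
    };
    exchange(tab, 1, 1);                         /* x2 and x1 change places      */
    for (int i = 0; i < ROWS; i++)               /* expect the rows of (10.8.14) */
        printf("%10.4f %10.4f %10.4f\n", tab[i][0], tab[i][1], tab[i][2]);
    return 0;
}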
You should now be able to solve almost any linear programming problem that
starts in restricted normal form. The only special case that might stump you
is if an entry in the constant column turns out to be zero at some stage, so
that a left-hand variable is zero at the same time as all the right-hand
variables are zero. This is called a degenerate feasible vector. To proceed,
you may need to exchange the degenerate left-hand variable for one of the
right-hand variables, perhaps even making several such exchanges.

Writing the General Problem in Restricted Normal Form

Here is a pleasant surprise. There exist a couple of clever tricks that render
trivial the task of translating a general linear programming problem into
restricted normal form!

First, we need to get rid of the inequalities of the form (10.8.3) or
(10.8.4), for example, the first three constraints in (10.8.7). We do this by
adding to the problem so-called slack variables which, when their
nonnegativity is required, convert the inequalities to equalities. We will
denote slack variables as y_i. There will be m_1 + m_2 of them. Once they are
introduced, you treat them on an equal footing with the original variables
x_i; then, at the very end, you simply ignore them.

For example, introducing slack variables leaves (10.8.6) unchanged but turns
(10.8.7) into

    x_1 + 2x_3 + y_1 = 740
    2x_2 − 7x_4 + y_2 = 0
    x_2 − x_3 + 2x_4 − y_3 = 1/2                                    (10.8.15)
    x_1 + x_2 + x_3 + x_4 = 9

(Notice how the sign of the coefficient of the slack variable is determined by
which sense of inequality it is replacing.)

Second, we need to insure that there is a set of M left-hand variables, so
that we can set up a starting tableau in restricted normal form. (In other
words, we need to find a "feasible basic starting vector.") The trick is again
to invent new variables! There are M of these, and they are called artificial
variables; we denote them by z_i. You put exactly one artificial variable into
each constraint equation on the following model for the example (10.8.15):

    z_1 = 740 − x_1 − 2x_3 − y_1
    z_2 = −2x_2 + 7x_4 − y_2
    z_3 = 1/2 − x_2 + x_3 − 2x_4 + y_3                              (10.8.16)
    z_4 = 9 − x_1 − x_2 − x_3 − x_4

Our example is now in restricted normal form.

Now you may object that (10.8.16) is not the same problem as (10.8.15) or
(10.8.7) unless all the z_i's are zero. Right you are! There is some subtlety
here! We must proceed to solve our problem in two phases. First phase: We
replace our objective function (10.8.6) by a so-called auxiliary objective
function

    z' ≡ −z_1 − z_2 − z_3 − z_4
       = −(749 1/2 − 2x_1 − 4x_2 − 2x_3 + 4x_4 − y_1 − y_2 + y_3)   (10.8.17)

(where the last equality follows from using 10.8.16). We now perform the
simplex method on the auxiliary objective function (10.8.17) with the
constraints (10.8.16). Obviously the auxiliary objective function will be
maximized for nonnegative z_i's if all the z_i's are zero. We therefore expect
the simplex method in this first phase to produce a set of left-hand variables
drawn from the x_i's and y_i's only, with all the z_i's being right-hand
variables. Aha! We then cross out the z_i's, leaving a problem involving only
x_i's and y_i's in restricted normal form. In other words, the first phase
produces an initial feasible basic vector. Second phase: Solve the problem
produced by the first phase, using the original objective function, not the
auxiliary.

And what if the first phase doesn't produce zero values for all the z_i's?
That signals that there is no initial feasible basic vector, i.e., that the
constraints given to us are inconsistent among themselves. Report that fact,
and you are done.

Here is how to translate into tableau format the information needed for both
the first and second phases of the overall method. As before, the underlying
problem to be solved is as posed in equations (10.8.6)–(10.8.7).
                 x_1   x_2   x_3   x_4 |  y_1   y_2   y_3
    z        0     1     1     3  −1/2 |    0     0     0
    z_1    740    −1     0    −2     0 |   −1     0     0
    z_2      0     0    −2     0     7 |    0    −1     0
    z_3    1/2     0    −1     1    −2 |    0     0     1
    z_4      9    −1    −1    −1    −1 |    0     0     0
    z'  −749 1/2   2     4     2    −4 |    1     1    −1          (10.8.18)

This is not as daunting as it may, at first sight, appear. The table entries
inside the box of double lines are no more than the coefficients of the
original problem (10.8.6)–(10.8.7) organized into a tabular form. In fact,
these entries, along with the values of N, M, m_1, m_2, and m_3, are the only
input that is needed by the simplex method routine below. The columns under
the slack variables y_i simply record whether each of the M constraints is of
the form ≤, ≥, or =; this is redundant information with the values m_1, m_2,
m_3, as long as we are sure to enter the rows of the tableau in the correct
respective order. The coefficients of the auxiliary objective function (bottom
row) are just the negatives of the column sums of the rows above, so these are
easily calculated automatically.

The output from a simplex routine will be (i) a flag telling whether a finite
solution, no solution, or an unbounded solution was found, and (ii) an updated
tableau. The output tableau that derives from (10.8.18), given to two
significant figures, is

                x_1    y_2    y_3   ···
    z     17.03  −.95   −.05  −1.05  ···
    x_2    3.33  −.35   −.15    .35  ···
    x_3    4.73  −.55    .05   −.45  ···
    x_4     .95  −.10    .10    .10  ···
    y_1  730.55   .10   −.10    .90  ···                           (10.8.19)

A little counting of the x_i's and y_i's will convince you that there are
M + 1 rows (including the z-row) in both the input and the output tableaux,
but that only N + 1 − m_3 columns of the output tableau (including the
constant column) contain any useful information, the other columns belonging
to now-discarded artificial variables. In the output, the first numerical
column contains the solution vector, along with the maximum value of the
objective function. Where a slack variable (y_i) appears on the left, the
corresponding value is the amount by which its inequality is safely satisfied.
Variables that are not left-hand variables in the output tableau have zero
values. Slack variables with zero values represent constraints that are
satisfied as equalities.
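As an illustration of the remark that the auxiliary row can be calculated
automatically, the following sketch fills in the z'-row of the tableau as
displayed in (10.8.18), slack-variable columns included, by taking the
negative column sums of the constraint rows. The array name, the zero-based
indexing, and the printing are assumptions of this fragment; the simplx
routine below stores only the columns to the left of the double line, in
unit-offset form.

#include <stdio.h>

#define M 4                              /* number of constraints        */
#define N 7                              /* x1..x4 plus y1..y3           */

int main(void)
{
    /* Rows 0..M are the z-row and the constraint rows of (10.8.18); column 0
       is the constant column.  Row M+1 (the auxiliary z'-row) is computed. */
    double a[M+2][N+1] = {
        /*     const   x1    x2    x3    x4    y1    y2    y3 */
        {       0.0,  1.0,  1.0,  3.0, -0.5,  0.0,  0.0,  0.0 },   /* z  */
        {     740.0, -1.0,  0.0, -2.0,  0.0, -1.0,  0.0,  0.0 },   /* z1 */
        {       0.0,  0.0, -2.0,  0.0,  7.0,  0.0, -1.0,  0.0 },   /* z2 */
        {       0.5,  0.0, -1.0,  1.0, -2.0,  0.0,  0.0,  1.0 },   /* z3 */
        {       9.0, -1.0, -1.0, -1.0, -1.0,  0.0,  0.0,  0.0 },   /* z4 */
        {       0.0 }                                               /* z' */
    };

    for (int j = 0; j <= N; j++) {       /* auxiliary row = -(column sums) */
        double sum = 0.0;
        for (int i = 1; i <= M; i++) sum += a[i][j];
        a[M+1][j] = -sum;
    }

    for (int j = 0; j <= N; j++) printf("%9.2f", a[M+1][j]);
    printf("\n");                        /* -749.50 2 4 2 -4 1 1 -1 */
    return 0;
}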
Routine Implementing the Simplex Method

The following routine is based algorithmically on the implementation of
Kuenzi, Tzschach, and Zehnder [4]. Aside from input values of M, N, m_1, m_2,
m_3, the principal input to the routine is a two-dimensional array a
containing the portion of the tableau (10.8.18) that is contained between the
double lines. This input occupies the M + 1 rows and N + 1 columns of
a[1..m+1][1..n+1]. Note, however, that reference is made internally to row
M + 2 of a (used for the auxiliary objective function, just as in 10.8.18).
Therefore the variable declared as float **a, must point to allocated memory
allowing references in the subrange

    a[i][k],   i = 1 ... m+2,   k = 1 ... n+1                       (10.8.20)

You will suffer endless agonies if you fail to understand this simple point.
Also do not neglect to order the rows of a in the same order as equations
(10.8.1), (10.8.3), (10.8.4), and (10.8.5), that is, objective function,
≤-constraints, ≥-constraints, =-constraints.

On output, the tableau a is indexed by two returned arrays of integers.
iposv[j] contains, for j = 1 ... M, the number i whose original variable x_i
is now represented by row j+1 of a. These are thus the left-hand variables in
the solution. (The first row of a is of course the z-row.) A value i > N
indicates that the variable is a y_i rather than an x_i, x_{N+j} ≡ y_j.
Likewise, izrov[j] contains, for j = 1 ... N, the number i whose original
variable x_i is now a right-hand variable, represented by column j+1 of a.
These variables are all zero in the solution. The meaning of i > N is the same
as above, except that i > N + m_1 + m_2 denotes an artificial or slack
variable which was used only internally and should now be entirely ignored.

The flag icase is set to zero if a finite solution is found, +1 if the
objective function is unbounded, −1 if no solution satisfies the given
constraints.

The routine treats the case of degenerate feasible vectors, so don't worry
about them. You may also wish to admire the fact that the routine does not
require storage for the columns of the tableau (10.8.18) that are to the right
of the double line; it keeps track of slack variables by more efficient
bookkeeping.

Please note that, as given, the routine is only "semi-sophisticated" in its
tests for convergence. While the routine properly implements tests for
inequality with zero as tests against some small parameter EPS, it does not
adjust this parameter to reflect the scale of the input data. This is adequate
for many problems, where the input data do not differ from unity by too many
orders of magnitude. If, however, you encounter endless cycling, then you
should modify EPS in the routines simplx and simp2. Permuting your variables
can also help. Finally, consult [5].
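Before the listing itself, here is a hedged sketch of how the routine might be
driven for the example problem, with the part of tableau (10.8.18) inside the
double lines copied into a unit-offset NR matrix of size (m+2) by (n+1) as
required by (10.8.20). It assumes the nr.h prototype header and the allocators
matrix, ivector, free_matrix, and free_ivector from the book's nrutil files,
and that the simplx listing below is compiled and linked in; the array name
data and the minimal printing are choices made only for this illustration.

#include <stdio.h>
#include "nr.h"                     /* prototype for simplx (assumed)      */
#include "nrutil.h"                 /* matrix, ivector, free_... (assumed) */

int main(void)
{
    int m = 4, n = 4, m1 = 2, m2 = 1, m3 = 1, icase;
    static float data[5][5] = {     /* rows z, z1..z4 of (10.8.18): constant
                                       column plus the x1..x4 coefficients */
        {   0.0f,  1.0f,  1.0f,  3.0f, -0.5f },
        { 740.0f, -1.0f,  0.0f, -2.0f,  0.0f },
        {   0.0f,  0.0f, -2.0f,  0.0f,  7.0f },
        {   0.5f,  0.0f, -1.0f,  1.0f, -2.0f },
        {   9.0f, -1.0f, -1.0f, -1.0f, -1.0f }
    };
    float **a = matrix(1, m+2, 1, n+1);     /* row m+2 is used internally  */
    int *izrov = ivector(1, n), *iposv = ivector(1, m);

    for (int i = 1; i <= m+1; i++)          /* copy to unit-offset storage */
        for (int k = 1; k <= n+1; k++)
            a[i][k] = data[i-1][k-1];

    simplx(a, m, n, m1, m2, m3, &icase, izrov, iposv);

    if (icase == 0)                         /* finite solution found           */
        for (int j = 1; j <= m; j++)        /* left-hand variables and values  */
            printf("variable %d = %g\n", iposv[j], a[j+1][1]);
    else
        printf("icase = %d (unbounded or infeasible)\n", icase);

    free_ivector(iposv, 1, m);
    free_ivector(izrov, 1, n);
    free_matrix(a, 1, m+2, 1, n+1);
    return 0;
}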
#include "nrutil.h" #define EPS 1.0e-6 Here EPS is the absolute precision, which should be adjusted to the scale of your variables. #define FREEALL free_ivector(l3,1,m);free_ivector(l1,1,n+1); void simplx(float **a, int m, int n, int m1, int m2, int m3, int *icase, int izrov[], int iposv[]) Simplex method for linear programming. Input parameters a, m, n, mp, np, m1, m2, and m3, and output parameters a, icase, izrov, and iposv are described above. { void simp1(float **a, int mm, int ll[], int nll, int iabf, int *kp