16.888 / ESD.77J Multidisciplinary System Design Optimization (MSDO)
Numerical Optimization I
Lecture 6, 23 February 2004
Karen Willcox
© Massachusetts Institute of Technology - Prof. de Weck and Prof. Willcox, Engineering Systems Division and Dept. of Aeronautics and Astronautics
Today's Topics
• Existence & Uniqueness of an Optimum Solution
• Kuhn-Tucker Conditions
• Convex Spaces
• Unconstrained Problems
• Linear Programming
Disclaimer!
• This is not a classic optimization class ...
• The aim is not to teach you the details of optimization algorithms, but rather to expose you to different methods.
• We will utilize optimization techniques – the goal is to understand enough to be able to utilize them wisely.
• If you plan to use optimization extensively in your research, you should take an optimization class, e.g. 15.093.
Learning Objectives
After the next two lectures, you should:
• be familiar with what gradient-based optimization techniques are available
• understand the basics of how each technique works
• be able to choose which optimization technique is appropriate for your problem
• understand what to look for when the algorithm terminates
• understand why the algorithm might fail
How to Choose an Algorithm?
• Number of design variables
• Type of design variables (real/integer, continuous/discrete)
• Linear/nonlinear
• Equality/inequality constraints
• Discontinuous feasible spaces
• Initial solution feasible/infeasible
• Simulation code runtime
Technique Overview
UNCONSTRAINED:
• Steepest Descent
• Conjugate Gradient
• Quasi-Newton
• Newton
CONSTRAINED:
• Simplex – linear
• SLP – linear
• SQP – nonlinear, expensive, common in engineering applications
• Exterior Penalty – nonlinear, discontinuous design spaces
• Interior Penalty – nonlinear
• Generalized Reduced Gradient – nonlinear
• Method of Feasible Directions – nonlinear
• Mixed Integer Programming
Standard Problem Definition

    min  J(x)
    s.t. g_j(x) ≤ 0,          j = 1, ..., m1
         h_k(x) = 0,          k = 1, ..., m2
         x_i^l ≤ x_i ≤ x_i^u, i = 1, ..., n

• For now, we consider a single objective function, J(x).
• There are n design variables, and a total of m constraints (m = m1 + m2).
• The bounds are known as side constraints.
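The standard form above can be encoded directly. The following is a minimal sketch (not from the lecture; the objective, constraints, and the helper `is_feasible` are my own illustrative choices) showing how a candidate point is tested against the inequality, equality, and side constraints.

```python
# Hypothetical example: encode the standard form and check feasibility
# of a candidate point x. J, gs, hs, and bounds are made up for illustration.
def J(x):
    return (x[0] - 1)**2 + (x[1] - 2)**2

gs = [lambda x: x[0] + x[1] - 3]      # inequality constraints, g_j(x) <= 0
hs = [lambda x: x[0] - x[1]]          # equality constraints,  h_k(x) = 0
bounds = [(0.0, 10.0), (0.0, 10.0)]   # side constraints, x_l <= x_i <= x_u

def is_feasible(x, tol=1e-8):
    ok_g = all(g(x) <= tol for g in gs)
    ok_h = all(abs(h(x)) <= tol for h in hs)
    ok_b = all(lo - tol <= xi <= hi + tol
               for xi, (lo, hi) in zip(x, bounds))
    return ok_g and ok_h and ok_b

print(is_feasible([1.5, 1.5]))  # True: satisfies all three constraint types
print(is_feasible([4.0, 4.0]))  # False: violates x1 + x2 <= 3
```

Note the sign convention: a point is feasible when every g_j evaluates to a non-positive number and every h_k evaluates to zero (within tolerance).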
Linear vs. Nonlinear
The objective function is a linear function of the design variables if each design variable appears only to the first power with constant coefficients multiplying it.

    J(x) = x1 + 2x2 − 3.4x3       is linear in x = [x1 x2 x3]^T
    J(x) = x1x2 + 2x2 − 3.4x3     is nonlinear in x
    J(x) = cos(x1) + 2x2 − 3.4x3  is nonlinear in x
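Linearity can be checked numerically: a linear function satisfies superposition, J(a·x + b·y) = a·J(x) + b·J(y), while the two nonlinear variants above do not. A small sketch (my own illustration; the test points and coefficients are arbitrary):

```python
# Numerically test superposition for the three objectives from the slide.
import math

def J_lin(x):  # linear: each variable to the first power, constant coefficients
    return x[0] + 2*x[1] - 3.4*x[2]

def J_nl1(x):  # nonlinear: product term x1*x2
    return x[0]*x[1] + 2*x[1] - 3.4*x[2]

def J_nl2(x):  # nonlinear: transcendental term cos(x1)
    return math.cos(x[0]) + 2*x[1] - 3.4*x[2]

def superposes(J, x, y, a=2.0, b=-1.5):
    lhs = J([a*xi + b*yi for xi, yi in zip(x, y)])
    rhs = a*J(x) + b*J(y)
    return abs(lhs - rhs) < 1e-9

x, y = [1.0, 2.0, 3.0], [-1.0, 0.5, 4.0]
print(superposes(J_lin, x, y))   # True
print(superposes(J_nl1, x, y))   # False
print(superposes(J_nl2, x, y))   # False
```

A single pair of test points can only disprove linearity, not prove it, but it is a quick sanity check when classifying a black-box objective.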
Linear vs. Nonlinear
A constraint is a linear function of the design variables if each design variable appears only to the first power with constant coefficients multiplying it.

    6x1 − x2 + 2x3 ≤ 10       is linear in x
    6x1 − x2^2 + 2x3 ≤ 10     is nonlinear in x
    6x1 − sin(x2) + 2x3 ≤ 10  is nonlinear in x
Iterative Optimization Procedures
Most optimization algorithms are iterative:

    x^q = x^(q−1) + α^q S^q

where q = iteration number
      S = vector search direction
      α = scalar distance (step length)
and the initial solution x^0 is given.

The algorithm determines the search direction S according to some criteria. Gradient-based algorithms use gradient information to decide where to move.
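The generic update above can be sketched concretely. The following is a minimal illustration (my own, not from the lecture) using steepest descent, where the search direction is S^q = −∇J(x^(q−1)) and, for simplicity, the step length α is held fixed rather than found by a line search; the quadratic test function is an assumption for demonstration.

```python
# Steepest-descent sketch of the generic update x^q = x^(q-1) + alpha^q * S^q.
def grad_J(x):
    # Gradient of the (made-up) test function J(x) = (x1 - 3)^2 + (x2 + 1)^2,
    # whose unconstrained minimum is at [3, -1].
    return [2*(x[0] - 3), 2*(x[1] + 1)]

x = [0.0, 0.0]   # initial solution x^0 (given)
alpha = 0.1      # scalar step length (fixed here; usually chosen by line search)

for q in range(100):
    S = [-g for g in grad_J(x)]               # search direction: -grad J
    x = [xi + alpha*si for xi, si in zip(x, S)]  # the iterative update

print([round(xi, 4) for xi in x])  # approaches the minimum [3, -1]
```

With a fixed step, convergence depends on α: too large and the iterates diverge, too small and progress is slow, which is why practical gradient-based algorithms pair the search direction with a one-dimensional line search for α.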