References
1. Marler, R. T., and Arora, J. S. (2004), "Survey of Multi-objective Optimization Methods for Engineering", Structural and Multidisciplinary Optimization, Vol. 26, No. 6, pp. 369-395.
…
Gradient-Based Multi-Objective Optimization Technology

Vladimir Sevastyanov¹
eArtius, Inc., Irvine, CA 92614, US

EXTENDED ABSTRACT

Multi-Gradient Analysis (MGA), and two multi-objective optimization methods based on MGA are presented: the Multi-Gradient Explorer (MGE) and Multi-Gradient Pathfinder (MGP) methods. The Dynamically Dimensioned Response Surface Method (DDRSM) for dynamic reduction of task dimension and fast estimation of gradients is also disclosed. MGE and MGP are based on MGA's ability to analyze gradients and determine the area of simultaneous improvement (ASI) for all objective functions. MGE starts from a given initial point, and approaches the Pareto frontier sequentially by stepping into the ASI until a Pareto optimal point is obtained. MGP starts from a Pareto optimal point, and steps along the Pareto surface in the direction that allows for improvement of a subset of the objective functions with higher priority. DDRSM works for optimization tasks with virtually any number (up to thousands) of design variables, and requires just 5-7 model evaluations per Pareto optimal point for the MGE and MGP algorithms regardless of task dimension. Both algorithms are designed to optimize computationally expensive models, and are able to optimize models with dozens, hundreds, and even thousands of design variables.

1. Introduction

There are two groups of multi-objective optimization methods: scalarization and non-scalarization methods [1]. Scalarization methods use a global criterion to combine multiple objective functions into a utility function, and require solving a sequence of single-objective problems. The absence of numerical methods designed specifically for multi-objective optimization caused the invention of such artificial scalarization techniques.
The existing weighted sum approaches that are widely used for design optimization do not work well with non-convex Pareto surfaces. A uniform distribution of Pareto optimal points cannot be guaranteed even if the weights are varied consistently and continuously. Hence, the Pareto set will be incomplete and inaccurate [1].

The Genetic Algorithm (GA) is one of the major techniques based on non-scalarization. It combines the use of random numbers and heuristic strategies inspired by evolutionary biology. GAs are computationally extremely intensive and resource-consuming, and do not provide adequate accuracy [1].

In order to overcome the limitations of GAs and scalarization techniques, a new gradient-based technique has been invented at eArtius, Inc. (patented). The technique uses multi-gradient analysis (MGA), and allowed the development of the Multi-Gradient Explorer (MGE) algorithm of multi-objective optimization.

Further research was inspired by two fundamental issues typical for traditional multi-objective optimization approaches, and by the steeply increasing computational effort necessary for performing optimization: (a) the necessity to search for optimal solutions in the entire design space while Pareto optimal points can only be found on the Pareto frontier, and (b) the necessity to cover the entire Pareto frontier by a large number of found Pareto optimal designs while the user needs just a few trade-offs in his area of interest on the Pareto frontier. These two issues caused the use of brute-force methods, such as parallelization of algorithms, in most of the prior-art multi-objective optimization technologies.

However, even brute-force methods cannot resolve the fundamental problems related to the famous "curse of dimensionality" phenomenon. According to [2], adding extra dimensions to the design space requires an exponential increase in the number of Pareto optimal points to maintain the same quality of approximation for the Pareto frontier.
The new Multi-Gradient Pathfinder (MGP) algorithm has been invented at eArtius (patent pending). MGP uses the Pareto frontier as a search space, and performs directed optimization on the Pareto frontier in the area of interest

¹ Chief Executive Officer
American Institute of Aeronautics and Astronautics
determined by the user, which increases algorithm efficiency by orders of magnitude, and gives the user more control over the optimization process.

Another important area for improvements in optimization technology is related to response surface methods, which are commonly used in engineering design to minimize the expense of running computationally expensive analyses and simulations. All known approximation techniques, including Response Surface Methodology, Kriging models, etc., are limited to 40-60 design variables [3] because of the same "curse of dimensionality" phenomenon. According to [2], adding extra dimensions to the design space requires an exponential increase in the number of sample points necessary to build an adequate global surrogate model.

A new response surface method named the Dynamically Dimensioned Response Surface Method (DDRSM) has been invented at eArtius (patent pending), which successfully avoids the "curse of dimensionality" limitations, and efficiently works with up to thousands of design variables without increasing the number of sample points.

The new eArtius design optimization technology comprises the optimization algorithms MGE, MGP, HMGE, and HMGP and the response surface method DDRSM, and is implemented in the eArtius design optimization tool Pareto Explorer.

2. Multi-Gradient Analysis

Any traditional gradient-based optimization method comprises sequential steps from an initial point to an optimal point. Each step improves the current point with respect to the objective function. The most important element of such an algorithm is determining the direction of the next step. Traditional gradient-based algorithms use the fact that the gradient of the objective function indicates the direction of the steepest ascent of the objective function. But what if several objective functions need to be optimized? In this case we need to find a point improving all objective functions simultaneously.
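The idea can be sketched numerically. The sketch below is not eArtius's patented procedure (which is not disclosed in this abstract); it uses one standard construction of a common-descent direction, the sum of normalized negative gradients, with finite-difference gradients and two illustrative quadratic objectives that are not from the paper's benchmarks:

```python
import numpy as np

# Two illustrative objectives to be minimized (not from the paper's benchmarks).
def f1(x): return (x[0] - 1.0)**2 + x[1]**2
def f2(x): return (x[0] + 1.0)**2 + x[1]**2

def grad(f, x, h=1e-6):
    """Central finite-difference gradient."""
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def asi_direction(x, objectives):
    """A direction inside the ASI: the sum of normalized negative gradients."""
    d = -sum(grad(f, x) / np.linalg.norm(grad(f, x)) for f in objectives)
    return d / np.linalg.norm(d)

x0 = np.array([0.0, 1.0])
d = asi_direction(x0, [f1, f2])
x1 = x0 + 0.3 * d                            # small step into the ASI
assert f1(x1) < f1(x0) and f2(x1) < f2(x0)   # both objectives improved
```

For two objectives, any direction whose dot product with each gradient is negative lies in the ASI, and the normalized-sum direction satisfies this whenever the two gradients are not exactly opposed.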
The following diagrams (see FIG.1) illustrate graphically how MGA determines the area of simultaneous improvement for all objective functions. It is illustrated for the simplest multi-objective optimization task, with two independent variables and two objective functions that need to be maximized.

FIG. 1A illustrates how the gradient G1 and the line L1 (G1 is perpendicular to L1) split the sub-region into the area of increased values A1 and the area of decreased values for the first objective function; FIG. 1B similarly illustrates splitting the sub-region for the second objective function; FIG. 1C illustrates that the area of simultaneous increase (ASI) of both objective functions F1 and F2 is equal to the intersection of the areas A1 and A2: A1∩A2.

The main problem of Multi-Gradient Analysis is to find a point X ∈ ASI, which guarantees that the point X0 will be improved by the point X with respect to all objective functions. MGA is illustrated with two objective functions on FIG.1, but it works in the same way with any reasonable number of objective functions and an unlimited number of design variables.

The MGA pseudo-code:
1 Begin
2 Input initial point X*.
3 Evaluate criteria gradients at X*.
4 Determine ASI for all criteria.
5 Determine the direction of simultaneous improvement for all objectives for the next step.
6 Determine the length of the step.
7 Perform the step, and evaluate the new point X' belonging to ASI.
8 If X' dominates X* then report improved point X' and go to 10.
9 If X' does not dominate X* then report X* as a Pareto optimal point.
10 End

MGA can be implemented in a number of different ways. Some of them are discussed in [4]. In fact, the same technique is widely used for constrained gradient-based optimization with a single objective function [5]. However, the technique was never used for multi-objective optimization.

Since the MGA technique results in an improved point, it can be used as an element in any multi-objective optimization algorithm. The following two sections discuss two MGA-based multi-objective optimization algorithms.

3. Multi-Gradient Explorer Algorithm

MGE uses a conventional approach for optimization practice. It starts from an initial point, and iterates toward the Pareto frontier until a Pareto optimal point is found. Then it takes another initial point, iterates again, and so on.

The MGE pseudo-code:
1 Begin
2 Generate the required number of initial points X1,…,XN.
3 i=1.
4 Declare the current point: Xc=Xi.
5 Apply MGA to Xc to find a point X' in ASI.
6 If X' dominates Xc then Xc=X' and go to 5.
7 If X' does not dominate Xc then declare Xc as a Pareto optimal point; i=i+1 and go to 4.
8 Report all the solutions found.
9 End

MGE algorithm can be used in two modes: (a) improvement of a given initial point, and (b) approximation of the entire Pareto frontier.

In mode (a) MGE usually performs about 4-7 steps, and finds several Pareto optimal points improving a given initial design (see FIG.2.) Assuming that the DDRSM response surface method is used for estimating gradients, it usually takes just about 15-30 model evaluations to approach the Pareto frontier regardless of task dimension. Thus, MGE is the best choice for computationally expensive simulation models when covering the entire Pareto frontier is prohibitively expensive.

In mode (b) MGE sequentially starts from randomly distributed initial points.
Since the initial points are uniformly distributed in the design space, it is expected that the Pareto optimal points found in multiple iterations will cover the entire Pareto frontier (see FIG.3.)

Minimize f1 = x1² + (x2 − 1)²
Minimize f2 = x1² + (x2 + 1)² + 1     (1)
Minimize f3 = (x1 − 1)² + x2² + 2
−2 ≤ x1, x2 ≤ 2

Table 1 and FIG.2 illustrate MGE algorithm in the mode of improvement of a given initial point.

Table 1 Improvement of a given design by MGE optimization algorithm
                        Evaluation #    f1      f2      f3
Initial Point                1         12.26   5.394   14.05
Pareto Optimal Point         9          3.65   1.38     2.84

As follows from Table 1, the initial point has been significantly improved with respect to all objective functions. The target Pareto optimal point was found after 9 model evaluations. After that, MGE spent 26 additional model evaluations estimating gradients via the DDRSM method, and tried to improve the point #9. MGE was stopped because
further improvement of the point #9 was not possible, and the point was declared Pareto optimal. Next, all evaluated points were compared against each other with respect to all objectives, and all dominated points were declared transitional points. The rest of the points were declared Pareto optimal (see FIG.2.) The majority of the evaluated points from #10 to #35 happened to be Pareto optimal in this optimization run. Thus, the user has 15 Pareto optimal points out of 35 model evaluations.

FIG.2 shows the results of improvement of a given point by MGE algorithm. MGE has started from the initial point (the orange triangle marker on the diagrams), and performed a few steps towards the Pareto frontier; MGE has found 15 Pareto optimal points at the price of 35 model evaluations.

The following FIG.3 illustrates the ability of MGE algorithm to cover the entire Pareto frontier. In this scenario MGE sequentially starts from randomly distributed initial points, and iterates towards the Pareto frontier based on the MGA technique.

FIG. 3 shows Pareto optimal points found by MGE algorithm for the benchmark (1). MGE sequentially started optimization from randomly distributed initial points, and covered the entire Pareto frontier evenly.

FIG.3 shows that MGE algorithm approximates the entire Pareto frontier, and covers it evenly. MGE is computationally efficient: it has spent 2420 model evaluations, and found 1156 Pareto optimal points, i.e. 2420/1156 = 2.1 model evaluations per Pareto optimal point.

In addition to the unconstrained multi-objective optimization technique explained in this paper, and illustrated by the two previous benchmark problems, MGE algorithm has means for constrained multi-objective optimization. The following simple benchmark (2) formulates the well-known two-bar truss constrained optimization problem,
and illustrates the constrained optimization aspect of MGE algorithm:

Minimize Deflection = (P·d) / (2·A·E·sin(t)·cos(t)²)
Minimize Weight = (2·d·A·g) / sin(t)
subject to: Stress = P / (2·A·cos(t)) < 40     (2)
t = degree·asin(1)/90
d = 1000;  E = 2.1·10⁴;  g = 6·10⁻⁶
A ∈ [20; 50];  degree ∈ [45; 65]

FIG.4 shows the constrained optimization results found by MGE optimization algorithm for the benchmark (2).

FIG. 4 shows all points evaluated by MGE optimization algorithm. The diagrams illustrate both the objective space (left) and the design space (right.) There are three categories of points on the diagrams: Pareto optimal, feasible, and transitional. MGE sequentially started optimization from randomly distributed initial points, and covered the entire Pareto frontier evenly. MGE has spent 400 model evaluations; it has found 100 Pareto optimal points and 278 feasible points.

MGE uses a technique similar to the Modified Method of Feasible Directions (MMFD) [5] for constrained optimization. Since MMFD was designed for constrained single-objective optimization, it could not be used as-is in MGE algorithm, and it has been adjusted to the needs of multi-objective optimization.

The current implementation of MGE algorithm uses the previously mentioned MMFD-like constrained optimization approach for tasks with a relatively small number of constraints, and automatically shifts to the Hybrid Multi-Gradient Explorer (HMGE) optimization algorithm for tasks with a larger number of constraints. MGE algorithm employs the hybrid HMGE code only in the infeasible area, and shifts back to the pure gradient-based MGA technique as soon as a feasible point has been found.

HMGE algorithm has demonstrated high efficiency and reliability on the most challenging real-life constrained optimization tasks. It finds feasible areas faster and more reliably than pure gradient-based techniques.
Thus, the combination of MGE and HMGE is a powerful design optimization tool for real-life tasks with up to thousands of design variables, and up to hundreds of constraints.

It is recommended to use MGE algorithm for multi-objective optimization of computationally expensive simulation models when covering the entire Pareto frontier is prohibitively expensive. MGE allows improvement of a given design with respect to several objectives (see this scenario on FIG.2), and usually delivers several Pareto optimal points after 10-30 model evaluations.
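Mode (a) of MGE can be sketched on benchmark (1). The step-length control here (halve the step until the new point dominates the current one) and the finite-difference gradients are simplifications assumed for the sketch; the paper's implementation uses DDRSM and its own step logic:

```python
import numpy as np

# Benchmark (1): three objectives, two variables, -2 <= x1, x2 <= 2.
def f1(x): return x[0]**2 + (x[1] - 1.0)**2
def f2(x): return x[0]**2 + (x[1] + 1.0)**2 + 1.0
def f3(x): return (x[0] - 1.0)**2 + x[1]**2 + 2.0
OBJECTIVES = [f1, f2, f3]

def grad(f, x, h=1e-6):
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def dominates(a, b, objs):
    fa = [f(a) for f in objs]; fb = [f(b) for f in objs]
    return all(u <= v for u, v in zip(fa, fb)) and any(u < v for u, v in zip(fa, fb))

def mge_improve(x, objs, step=0.25, tol=1e-4, max_iter=500):
    """Mode (a): step into the ASI until no dominating point can be found."""
    for _ in range(max_iter):
        if step <= tol:
            break                       # no further improvement: Pareto optimal
        d = -sum(grad(f, x) / (np.linalg.norm(grad(f, x)) + 1e-12) for f in objs)
        n = np.linalg.norm(d)
        if n < 1e-12:
            break                       # gradients cancel out: Pareto optimal
        x_new = np.clip(x + step * d / n, -2.0, 2.0)
        if dominates(x_new, x, objs):
            x = x_new                   # improved point: continue from it
        else:
            step *= 0.5                 # shorten the step and retry
    return x

x_init = np.array([1.8, 1.9])
x_star = mge_improve(x_init, OBJECTIVES)
assert dominates(x_star, x_init, OBJECTIVES)  # the initial design was improved
```

Since every accepted step dominates the previous point and domination is transitive, the returned point dominates the initial design whenever at least one step succeeds.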
4. Multi-Gradient Pathfinder Algorithm

Multi-Gradient Pathfinder (MGP) is the first multi-objective optimization algorithm which implements the idea of directed optimization on the Pareto frontier based on the user's preferences.

Directed optimization on the Pareto frontier means that a search algorithm steps along the Pareto frontier from a given initial Pareto optimal point towards a desired Pareto optimal point. The search algorithm is supposed to stay on the Pareto frontier throughout the optimization process until the desired Pareto optimal point is reached. Then all (or most) of the evaluated points will also be Pareto optimal.

Moving along the Pareto frontier improves some objectives and compromises others. This consideration gives a clue as to how directed optimization needs to be organized to become beneficial for users. In fact, it is enough to formulate which objective functions are preferable, and need to be improved first. This formulates a goal for the directed search on the Pareto frontier.

In the case of L=2 objective functions, the Pareto frontier is a line in the objective space. Thus, MGP algorithm has just two directions to choose from: to improve the 1st or the 2nd objective function. In the case of L>2 objective functions, the Pareto frontier is a multi-dimensional surface, and the algorithm has an infinite number of directions to move from a given point along the surface. In any case, the user needs to determine a change in direction based on his preferences.
Based on the above considerations, the task of directed optimization on the Pareto frontier can be formulated in the following way:

Minimize  F(X) = [F1(X), F2(X), …, Fm(X)]ᵀ, X_PF ∈ X
Minimize+ P(X) = [P1(X), P2(X), …, Pn(X)]ᵀ, X_PF ∈ X     (3)
subject to: qj(X) ≤ 0;  j = 1, 2, …, k

where X_PF ∈ X is the subset of the design space X which belongs to the Pareto frontier; m is the number of non-preferable objective functions F(X), and n is the number of preferable objective functions P(X) that determine the direction of the move (directed search) on the Pareto frontier. L = m + n is the total number of objective functions. The Pareto frontier is determined by both sets of objectives F(X) and P(X).

The operator Minimize+ applied to P(X) means that it is required to find the best points on the Pareto frontier with respect to the preferable objectives P(X).

How MGP operates: first of all, the user needs to determine which objective(s) are preferable (more important) for him. In this way, the user indicates his area of interest on the Pareto frontier. MGP starts from a given Pareto optimal point and performs a required number of steps along the Pareto frontier in a direction of simultaneous improvement of the preferable objectives. On each step, MGP solves two tasks (see FIG.5, green and blue arrows):
• it improves the preferable objectives' values;
• it maintains a short distance from the current point to the Pareto frontier.

It is important to note that there are cases when a given initial point is not Pareto optimal. In this case MGP works exactly as MGE algorithm: it approaches the Pareto frontier first, and then starts stepping along the Pareto frontier in the direction determined by the preferable objectives.
FIG.5 illustrates the basic idea of MGP algorithm for the case when both objective functions F1 and F2 need to be minimized, and F2 is considered a preferable objective.

On the first half-step, MGP steps in a direction of improvement of the preferable objective (see the green arrows on FIG.5). On the second half-step, MGP steps in a direction of simultaneous improvement of ALL objectives (see the blue arrows), and in this way maintains a short distance to the Pareto frontier. Then MGP starts the next step from the newly found Pareto optimal point.

The main features of MGP algorithm are explained in the following pseudo-code:
1 Begin
2 Input initial Pareto optimal point X* and required number of steps N.
3 i=1.
4 Declare the current point: Xc=X*.
5 Evaluate gradients of all objective functions at Xc.
6 Determine ASI(1) for the preferable objectives.
7 Make a step in ASI(1) improving only the preferable objectives.
8 Determine ASI(2) for ALL objectives.
9 Make a step in ASI(2) improving ALL objectives; the resulting Pareto point is X**.
10 If i < N then declare the current point Xc=X**; i=i+1; go to 5.
11 Report all the solutions found.
12 End

The abbreviations ASI(1) and ASI(2) in the above pseudo-code stand for the Area of Simultaneous Improvement (ASI) of the preferable objectives and of all objectives correspondingly (see FIG.1A-1C).

The multi-objective task formulation (4) determines three objectives to be minimized. According to the optimization task formulation (3), two of them (f2 and f3) are preferable:

Minimize  f1 = x1² + (x2 − 1)²
Minimize+ f2 = x1² + (x2 + 1)² + 1     (4)
Minimize+ f3 = (x1 − 1)² + x2² + 2
−2 ≤ x1, x2 ≤ 2

The task formulation (4) corresponds to the blue markers on FIG.6.
FIG. 6 shows Pareto optimal points found by MGP algorithm for the benchmark task (4). MGP has started optimization from the same circled point twice: (a) with one preferable objective f3 (see the green points); (b) with two preferable objectives f2 and f3 (see the blue points). Transitional points (red and magenta) were evaluated to build local response surface models, and to estimate gradients.

All evaluated points (optimal and non-optimal) are visualized on FIG.6, and we can make a few observations confirming that MGP performs directed optimization on the Pareto frontier:
(a) MGP algorithm performs the search solely on the Pareto frontier, and only in the area of interest; only a few of the evaluated points are non-Pareto optimal.
(b) The direction of movement along the Pareto frontier depends on the selection of preferable objectives, as expected. The green trajectory clearly indicates improvement of f3, and the blue trajectory indicates simultaneous improvement of f2 and f3.
(c) MGP is extremely efficient. The majority of the evaluated points are Pareto optimal: 191 out of 238 for f3 as the preferable objective, and 281 out of 316 for the two preferable objectives f2 and f3.

The benchmark (5) and FIG.7 illustrate that in the case of two objective functions, MGP is able to start from one end of the Pareto frontier, and cover it completely to the other end. The benchmark problem (5) has been chosen because it has a simple classical Pareto front, and allows one to visualize MGP behavior in both the objective space and the design space.

Minimize+ f1 = x1² + x2
Minimize  f2 = x2² + x1     (5)
x1, x2 ∈ [−10; 10]

The operator Minimize+ in the task formulation (5) means that the objective f1 is preferable, and MGP needs to step along the Pareto frontier in the direction which improves the objective f1. The following FIG.7 illustrates the solution of the directed multi-objective optimization task (5) found by MGP algorithm.

FIG. 7 Pareto optimal and transitional points found by MGP algorithm for the benchmark (5).
MGP starts from the initial point, and sequentially steps along the Pareto frontier until the end of the Pareto frontier is reached. MGP has found 225 Pareto optimal points out of 273 model evaluations.
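The two half-steps of one MGP iteration can be sketched on benchmark (5). The step sizes, the backtracking rule, and the closed-form description of the Pareto set (x1·x2 = 1/4 with x1, x2 < 0, obtained from the stationarity condition λ·∇f1 + (1−λ)·∇f2 = 0) are assumptions of this sketch, not details from the paper:

```python
import numpy as np

# Benchmark (5): f1 is the preferable objective (Minimize+).
def f1(x): return x[0]**2 + x[1]
def f2(x): return x[1]**2 + x[0]

def grad(f, x, h=1e-6):
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def unit(v):
    return v / (np.linalg.norm(v) + 1e-12)

def step_into_asi(x, objs, s):
    """Half-step 2: backtrack along the common-descent direction until
    every objective improves (returns x unchanged if that is impossible)."""
    d = -unit(sum(unit(grad(f, x)) for f in objs))
    t = s
    while t > 1e-9:
        x_new = x + t * d
        if all(f(x_new) < f(x) for f in objs):
            return x_new
        t *= 0.5
    return x

def mgp_step(x, preferable, all_objs, s=0.2):
    """One MGP step: improve the preferable objective, then return to the front."""
    x = x - s * unit(grad(preferable, x))   # half-step 1: descend along f1 only
    return step_into_asi(x, all_objs, s)    # half-step 2: re-approach the front

# For this benchmark the interior Pareto set satisfies x1 * x2 = 1/4 (x1, x2 < 0).
x0 = np.array([-0.5, -0.5])                 # a Pareto optimal starting point
x1_pt = mgp_step(x0, f1, [f1, f2])
assert f1(x1_pt) < f1(x0)                   # the preferable objective improved
assert abs(x1_pt[0] * x1_pt[1] - 0.25) < 0.05   # still close to the Pareto set
```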
The diagrams on FIG.7 illustrate all the points evaluated by MGP algorithm. All yellow markers are obscured by green markers on the diagrams. This means that the transitional points are located very close to Pareto optimal points, and the majority of the points evaluated by MGP algorithm are Pareto optimal (225 of 273). MGP algorithm does not have to iterate towards the Pareto frontier repeatedly. Instead, it literally steps along the Pareto frontier. In fact, MGP has spent some model evaluations to estimate gradients by the finite difference method, and was able to stay on the Pareto frontier on each step throughout the optimization process. Straight parts of the Pareto frontier have not required evaluating transitional points at all: every new point evaluated while MGP was stepping along straight fragments of the Pareto frontier was a Pareto optimal point. This can be recognized by the absence of large yellow markers behind smaller green markers on a few parts of the Pareto front. However, stepping along the convex part of the Pareto frontier required more transitional points to be evaluated in order to maintain a short distance to the Pareto frontier (see FIG.7.)

The benchmark problem (6) and FIG.8 illustrate the ability of MGP algorithm to step along the Pareto frontier with a step size determined by the user, and the ability to find disjoint parts of the Pareto frontier:

Minimize  F1 = 1 + (A1 − B1)² + (A2 − B2)²
Minimize+ F2 = 1 + (x1 + 3)² + (x2 + 1)²
A1 = 0.5·sin(1) − 2·cos(1) + sin(2) − 1.5·cos(2)
A2 = 1.5·sin(1) − cos(1) + 2·sin(2) − 0.5·cos(2)     (6)
B1 = 0.5·sin(x1) − 2·cos(x1) + sin(x2) − 1.5·cos(x2)
B2 = 1.5·sin(x1) − cos(x1) + 2·sin(x2) − 0.5·cos(x2)
x1, x2 ∈ [−π, π]

FIG. 8 shows all evaluated points (Pareto optimal and transitional) found by MGP algorithm for the benchmark (6) with different values of the step size S, which determines the distance between points on the Pareto frontier.
MGP starts from the initial point, and steps along the Pareto frontier in the direction improving the preferable objective F2. The results on FIG.8A correspond to S=0.005, and the results on FIG.8B were found with S=0.015.
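Benchmark (6) can be written out directly. One assumption here: the squared terms are taken as (A1 − B1)² and (A2 − B2)², as in Poloni's classical test function, so that F1 attains its minimum value of 1 at the point where B coincides with A:

```python
import numpy as np

# Constants of benchmark (6); the minus signs inside the squares below follow
# Poloni's original test function and are an assumption of this sketch.
A1 = 0.5*np.sin(1) - 2*np.cos(1) + np.sin(2) - 1.5*np.cos(2)
A2 = 1.5*np.sin(1) - np.cos(1) + 2*np.sin(2) - 0.5*np.cos(2)

def benchmark6(x1, x2):
    B1 = 0.5*np.sin(x1) - 2*np.cos(x1) + np.sin(x2) - 1.5*np.cos(x2)
    B2 = 1.5*np.sin(x1) - np.cos(x1) + 2*np.sin(x2) - 0.5*np.cos(x2)
    F1 = 1 + (A1 - B1)**2 + (A2 - B2)**2
    F2 = 1 + (x1 + 3)**2 + (x2 + 1)**2
    return F1, F2

# At (x1, x2) = (1, 2), B coincides with A, so F1 reaches its minimum of 1,
# while F2 is minimized at (-3, -1); the trade-off between these two anchors
# produces the disjoint Pareto frontier discussed above.
F1, F2 = benchmark6(1.0, 2.0)
assert abs(F1 - 1.0) < 1e-12
```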
The diagrams on FIG.8A show 118 Pareto optimal points found at the price of 684 model evaluations, which corresponds to the step size S=0.005. The diagrams on FIG.8B show that with S=0.015, MGP covers the Pareto frontier with 55 Pareto optimal points, and spends just 351 model evaluations. In both cases the Pareto frontier is covered evenly and completely. The run with the smaller step size is almost two times more computationally expensive, but brings twice as many Pareto optimal points; in other words, it is twice as accurate. Thus, the user always has a choice: to save model evaluations by increasing the step size, or to increase the accuracy of the solution by decreasing the step size.

MGP algorithm has demonstrated a relatively low efficiency for the benchmark (6) compared with the benchmark (5) because it spent a significant number of model evaluations in transitions from one disjoint part of the Pareto frontier to another (see the yellow markers on FIG.8.)

In this study most of the benchmark problems are used to illustrate the unusual capabilities of MGE and MGP algorithms. Comparing optimization algorithms is not a key point of this paper. However, a few benchmarks will be used to compare MGP algorithm with three state-of-the-art multi-objective optimization algorithms developed by a leading company of the Process Integration and Design Optimization (PIDO) market: Pointer, NSGA-II, and AMGA. These commercial algorithms represent the highest level of optimization technology developed by the best companies and are currently available on the PIDO market.

For the algorithms AMGA, NSGA-II, Pointer, and MGP only the default parameter values have been used, to make sure that all algorithms are in equal conditions.
The following benchmark ZDT3 (7) has two objective functions and 30 design variables:

Minimize  F1 = x1
Minimize+ F2 = g·[1 − √(F1/g) − (F1/g)·sin(10π·F1)]     (7)
g = 1 + (9/(n−1))·Σ_{i=2..n} xi;  0 ≤ xi ≤ 1, i = 1,…,n;  n = 30

The benchmark (7) has dozens of local Pareto fronts, and this is a challenge for most multi-objective optimization algorithms. The following FIG.9 shows that an optimization search in the entire design space is not productive compared to the directed optimization on the Pareto frontier performed by MGP algorithm.

FIG. 9 Optimization results comparison for the algorithms MGP (eArtius), NSGA-II, AMGA, and Pointer. All optimization algorithms performed an equal number (523) of function evaluations. The graph displays the criteria space and two projections of the design space with all evaluated points for each optimization algorithm. MGP algorithm used DDRSM to estimate gradients.
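The ZDT3 objectives above can be evaluated directly; on the global Pareto frontier x2 = … = x30 = 0, so g = 1 and the sine term splits the frontier into disconnected pieces (and creates the many local fronts off it):

```python
import numpy as np

def zdt3(x):
    """Benchmark (7): ZDT3 with n = 30 design variables."""
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g) - (f1 / g) * np.sin(10.0 * np.pi * f1))
    return f1, f2

# On the global frontier x2..x30 = 0, hence g = 1 and
# f2 = 1 - sqrt(f1) - f1 * sin(10*pi*f1).
x = np.zeros(30)
x[0] = 0.25                       # sin(10*pi*0.25) = 1
f1, f2 = zdt3(x)
assert abs(f2 - 0.25) < 1e-9      # 1 - 0.5 - 0.25
```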
As can be seen on FIG.9, MGP algorithm (green and red markers) performs the search in the area of the global Pareto frontier, and it covered the Pareto frontier evenly and completely. The other algorithms perform searches in the entire design space, and have difficulties finding the global Pareto frontier. Only Pointer was able to find a few Pareto optimal points in the central part of the global Pareto frontier. AMGA and NSGA-II have not found a single Pareto optimal point after 523 model evaluations, and performed the majority of evaluations very far from the global Pareto frontier.

5. Comparison with Weighted Sum Method

The most common approach to gradient-based multi-objective optimization is the weighted sum method [1], which employs the utility function (8):

U = Σ_{i=1..k} wi·Fi(X)     (8)

where w is a vector of weights typically set by the user such that Σ_{i=1..k} wi = 1 and wi > 0.

If all of the weights are positive, the minimum of (8) is Pareto optimal [6]. In other words, minimizing the utility function (8) is sufficient for Pareto optimality. However, the formulation does not provide a necessary condition for Pareto optimality [7].

The biggest problem with the weighted sum approach is that it is impossible to obtain points on non-convex portions of the Pareto optimal set in the criterion space. Theoretical reasons for this deficiency have been described in [8, 9, 10]. Also, varying the weights consistently and continuously may not necessarily result in an even distribution of Pareto optimal points and a complete representation of the Pareto optimal set [8].

Let us consider a sample illustrating the above deficiencies. The following benchmark model (9) has a non-convex Pareto frontier:

Minimize  f1 = x1
Minimize+ f2 = 1 + x2² − x1 − 0.1·sin(3π·x1)     (9)
x1 ∈ [0; 1];  x2 ∈ [−2; 2]

Sequential Quadratic Programming (SQP) is one of the most popular gradient-based single-objective optimization algorithms.
An implementation of SQP was used to minimize the utility function (8) in order to find Pareto optimal points:

Minimize U = w1·f1 + w2·f2;        (10)
w1, w2 ∈ [0;1]; w1 + w2 = 1

The SQP algorithm performed a single-objective optimization of the utility function (10) 107 times, for 1667 model evaluations in total. Every optimization run was performed with an incremented value of w1 ∈ [0;1], with w2 = 1 − w1. Since the w1 values covered the interval [0;1] evenly and completely, it was expected that the diversity of the resulting Pareto optimal points would be high. However, the 107 Pareto optimal points covered only the relatively small left and right convex parts of the Pareto frontier, and just one of the Pareto optimal points is located on the middle part of the frontier (see blue markers on FIG.10A and 10B).
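The weight sweep described above is easy to reproduce qualitatively. In the sketch below a brute-force grid search stands in for SQP (an assumption for illustration; it is adequate for this 2-variable benchmark, where the minimizing x2 is always 0), and exact point counts differ from the SQP run in the paper:

```python
import math

def f1(x1, x2):
    return x1

def f2(x1, x2):
    return 1.0 + x2 ** 2 - x1 - 0.1 * math.sin(3.0 * math.pi * x1)

# Sweep w1 evenly over [0, 1] and minimize U = w1*f1 + w2*f2 for each weight.
grid = [i / 200.0 for i in range(201)]
pareto_x1 = []
for k in range(101):
    w1 = k / 100.0
    w2 = 1.0 - w1
    best = min(grid, key=lambda x1: w1 * f1(x1, 0.0) + w2 * f2(x1, 0.0))
    pareto_x1.append(best)

# Despite the even weight sweep, the minimizers cluster on the two convex
# ends of the frontier and skip its non-convex middle part.
middle = [x for x in pareto_x1 if 0.25 < x < 0.75]
```

The minimizer of (10) jumps between the two convex branches as w1 crosses 0.5, which is exactly the gap in coverage visible in FIG.10.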
FIG. 10A                                FIG. 10B

FIG. 10A compares Pareto optimal points found by the MGP and SQP optimization algorithms for the benchmark (9). MGP found 153 Pareto optimal points out of 271 model evaluations; SQP found 107 Pareto optimal points out of 1667 model evaluations. FIG. 10B compares Pareto optimal points found by the MGE and SQP optimization algorithms for the benchmark (9). MGE found 173 Pareto optimal points out of 700 model evaluations.

As can be seen from FIG.10A-10B, a non-convex Pareto frontier is a significant issue for the SQP algorithm, but it does not create any difficulties for the MGP and MGE algorithms. Both MGP and MGE covered the entire Pareto frontier evenly and completely.

The weighted sum method substitutes the single-objective optimization task (10) for the multi-objective optimization task (9). However, task (10) is not equivalent to task (9), and has a different set of optimal solutions, visualized on FIG.10: blue points on the diagrams represent a solution of task (10), and magenta points represent a solution of task (9).

The Multi-Gradient Analysis (MGA) technique employed by both the MGE and MGP algorithms resolves the issues created by the various scalarization techniques. MGA solves a multi-objective optimization task as is, without substituting for it a utility function such as U = w1·f1 + w2·f2, used in this sample. At the same time, MGA retains the benefits of gradient-based techniques, such as fast convergence and high accuracy.

MGA is also much simpler than scalarization techniques. MGA determines a direction of simultaneous improvement for all objective functions, and steps in this direction from any given point. In contrast to the weighted sum method, MGA does not require additional logic on top of an SQP algorithm for varying the weights in a utility function throughout the optimization process.
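The internals of the patented MGA technique are not spelled out in this paper. For intuition only, a closely related published construction, the two-objective multiple-gradient-descent (MGDA) rule, can be sketched; the function name is ours, and this is not claimed to be eArtius's exact method:

```python
def common_descent_direction(g1, g2):
    """Direction improving both objectives at once (when one exists):
    d = -(a*g1 + (1 - a)*g2), with a in [0, 1] chosen to minimize
    ||a*g1 + (1 - a)*g2||^2 (the two-objective MGDA rule). If the
    gradients directly oppose each other, d degenerates to the zero
    vector, signalling a locally Pareto optimal point."""
    d11 = sum(u * u for u in g1)
    d22 = sum(v * v for v in g2)
    d12 = sum(u * v for u, v in zip(g1, g2))
    denom = d11 - 2.0 * d12 + d22
    a = 0.5 if denom == 0.0 else min(1.0, max(0.0, (d22 - d12) / denom))
    return [-(a * u + (1.0 - a) * v) for u, v in zip(g1, g2)]

# Gradients pointing into different quadrants still admit a direction of
# simultaneous improvement:
d = common_descent_direction([1.0, 0.2], [-0.5, 1.0])
```

No weight schedule is needed: the combination coefficient a is computed from the gradients themselves at each point, which is the property the text attributes to MGA.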
It simply takes any given point and makes a step that improves the point with respect to all objectives. Thus, MGA can be used as a building block for any kind of multi-objective optimization algorithm. In particular, MGA has been used to design the two pure gradient-based optimization algorithms MGE and MGP discussed in this paper, and two hybrid optimization algorithms, HMGE and HMGP, based on GA and gradient techniques.

6. Dynamically Dimensioned Response Surface Method

Dynamically Dimensioned Response Surface Method (DDRSM) is a new method of estimating gradients which is equally efficient for low-dimensional and high-dimensional tasks. DDRSM (patent pending) requires just 5-7 model evaluations to estimate gradients, regardless of task dimension.

eArtius DDRSM vs. Traditional RSM

Table 2 lists the most important aspects of Response Surface Methods (RSM), and compares traditional RSM with eArtius DDRSM.
Table 2  Comparison of traditional response surface methods with the DDRSM approach

RSM Aspect                            | Traditional RSM                                         | eArtius DDRSM
Purpose                               | Optimize fast surrogate functions instead of computationally expensive simulation models | Quick gradient estimation for direct optimization of computationally expensive simulation models
Approximation type                    | Global approximation                                    | Local approximation
Domain                                | Entire design space                                     | A small sub-region
Use of surrogate functions            | Optimization in the entire design space                 | Gradient estimation at a single point
Accuracy requirements                 | High                                                    | Low
Sample points to build approximations | Grows exponentially with increasing task dimension      | 5-7 sample points regardless of task dimension
Time to build an approximation        | Minutes and hours                                       | Milliseconds
Task dimension limitations            | 30-50 design variables                                  | Up to 5,000 design variables
Sensitivity analysis                  | Required to reduce task dimension                       | Not required

As follows from Table 2, the most common use of response surface methods is to create global approximations based on DOE sample points, and then to optimize the resulting surrogate models. This approach requires maintaining a high level of accuracy of the approximating surrogate function over the entire design space, which in turn requires a large number of sample points.

In contrast, DDRSM builds local approximations in a small sub-region around a given point, and uses them to estimate gradients at that point. This reduces the accuracy requirements on the approximating models, because DDRSM does not have to maintain high accuracy over the entire design space.

There is a fundamental problem common to all response surface methods, known as the "curse of dimensionality" [2].
The curse of dimensionality is the problem caused by the exponential increase in volume associated with adding extra dimensions to a design space [2], which in turn requires an exponential increase in the number of sample points to maintain the same level of accuracy for response surface models.

For instance, suppose we use 2^5 = 32 sample points to build an RSM model for 5 design variables, and then decide to increase the number of design variables from 5 to 20. Now we need 2^20 = 1,048,576 sample points to maintain the same level of accuracy for the RSM model. In practice we use just 100-300 sample points to build such RSM models, and this degrades the quality of the optimization results found by optimizing them.

This is a strong limitation for all known response surface approaches. It forces engineers to artificially reduce the optimization task dimension by assigning constant values to most of the design variables.

DDRSM resolves the curse of dimensionality in the following way. DDRSM is based on the realistic assumption that most real-life design problems have a few significant design variables, while the rest of the design variables are not significant. Based on this assumption, DDRSM estimates the most significant projections of the gradients of all output variables on each optimization step.

To achieve this, DDRSM generates 5-7 sample points in the current sub-region, and uses the points to recognize the most significant design variables for each objective function. Then DDRSM builds local approximations for all output variables, which are used to estimate the gradients.

Since an approximation does not include non-significant variables, the estimated gradient only has projections corresponding to significant variables. All other projections of the gradient are equal to zero.
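The 2^5 to 2^20 arithmetic above can be made concrete in one line (assuming, as in that example, a two-level full-factorial design):

```python
# Sample points needed by a two-level full-factorial design: 2**n doubles
# with every added design variable, while DDRSM stays flat at 5-7 points.
samples_needed = {n: 2 ** n for n in (5, 10, 20, 30)}
```

At 30 design variables, the upper limit quoted for traditional RSM in Table 2, the full-factorial count already exceeds a billion points.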
Ignoring non-significant variables slightly reduces the gradient's accuracy, but allows gradients to be estimated at the cost of 5-7 evaluations for tasks of practically any dimension.

DDRSM recognizes the most significant design variables for each output variable (objective functions and constraints) separately. Thus, each output variable has its own list of significant variables to be included in its approximating function. Also, DDRSM recognizes significant variables anew on each optimization step, every time gradients need to be estimated. This is important because the topology of objective functions and constraints can differ in different parts of the design space, and specific topology details can be associated with specific design variables.

As follows from the above, DDRSM dynamically reduces the task dimension in each sub-region, and does so independently for each output variable by ignoring non-significant design variables. The same variable can be critically important for one of the objective functions in the current sub-region, and not significant for other objective functions and constraints. Later in the optimization process, in a different sub-region, the topology of an output variable can change, and DDRSM will create another list of significant design variables corresponding to the variable's topology in the current sub-region of the search space. Thus, dynamic use of DDRSM on each
optimization step makes it more adaptive to changes in function topology, and increases the accuracy of gradient estimation.

DDRSM combines elements of RSM and sensitivity analysis, so it makes sense to compare DDRSM to the traditional sensitivity analysis approach.

DDRSM vs. Traditional Sensitivity Analysis

The most popular sensitivity analysis tools are designed to be used before starting an optimization process. Thus, engineers are forced to determine a single static list of significant variables for all objective and constraint functions, based on their variation over the entire design space. After the sensitivity analysis is completed, all non-significant design variables are assigned a constant value and never change over the optimization process.

This approach gives satisfactory results for tasks with a small number of output variables, and has difficulties when the number of constraint and objective functions is large.

Generally speaking, each output variable has its own topology, its own level of non-linearity, and its own list of significant variables. The same design variable can be significant for some of the output variables, and non-significant for others. Also, the list of significant variables depends on the location of the current sub-region. Thus, it is difficult or even impossible to determine a list of design variables that are equally significant for dozens or hundreds of output variables. Traditional sensitivity analysis also requires too many sample points when the number of design variables is large, which reduces the usefulness of the approach for high-dimensional tasks.

DDRSM completely eliminates the issues described above because it performs sensitivity analysis for each output variable independently, every time gradients need to be estimated. Thus, DDRSM takes into account the specific details of each output variable in general, and its local topology in particular.
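DDRSM itself is patent pending and its internals are not published, so the following is only one plausible reading of the "sample, select significant variables, approximate, differentiate" cycle. All helper names, the scoring rule, the keep-3 cutoff, and the evaluation budget are our assumptions, not eArtius's method:

```python
import random

def estimate_sparse_gradient(f, x, radius=0.05, samples=7, keep=3):
    """DDRSM-flavoured sketch: probe a few random points in a sub-region
    around x, rank variables by how strongly their offsets co-vary with
    the output change, then finite-difference only the top `keep`
    variables. All other gradient projections are left at zero."""
    n = len(x)
    fx = f(x)
    probes = []
    for _ in range(samples):
        dx = [random.uniform(-radius, radius) for _ in range(n)]
        probes.append((dx, f([xi + di for xi, di in zip(x, dx)]) - fx))
    # Crude per-variable sensitivity score (a stand-in for fitting a
    # local least-squares model and reading off its coefficients).
    score = [abs(sum(dx[i] * df for dx, df in probes)) for i in range(n)]
    significant = sorted(range(n), key=lambda i: -score[i])[:keep]
    grad, h = [0.0] * n, 1e-6
    for i in significant:
        xp = list(x)
        xp[i] += h
        grad[i] = (f(xp) - fx) / h   # one extra evaluation per kept variable
    return grad

# 10 design variables, but only x1 actually matters:
random.seed(1)
grad = estimate_sparse_gradient(lambda x: 8.0 * x[0], [0.3] * 10)
```

Note that the evaluation count here (base point, probes, plus differences) is slightly above the 5-7 budget the paper quotes; the sketch is meant to illustrate the per-output, per-step variable selection, not to match the tuned implementation.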
Also, DDRSM is equally efficient with dozens and with thousands of design variables.

Implementation of DDRSM

The following MGA-DDRSM pseudo code shows the basic elements of an MGA optimization step when the DDRSM approach is used to estimate gradients:

1  Begin
2    Input initial point X*.
3    Create a sub-region with center at X*.
4    Generate and evaluate 5-7 sample points in the sub-region.
5    Determine the most significant design variables for each objective function.
6    Create an approximation for each objective function based only on the most significant design variables.
7    Use the approximations to estimate the criteria gradients at X*.
8    Determine the ASI for all criteria.
9    Determine the direction of the next step.
10   Determine the length of the step.
11   Perform the step, and evaluate the new point X' belonging to the ASI.
12   If X' dominates X*, then report X' as an improved point and go to 14.
13   If X' does not dominate X*, then declare X* a Pareto optimal point.
14 End

The following benchmark problem (11) is intended to demonstrate (a) the high efficiency of the DDRSM approach to gradient estimation compared with the finite difference method, and (b) the ability of DDRSM to recognize significant design variables.

The benchmark ZDT1 (11) has 30 design variables, two objectives, and a convex Pareto frontier. The global Pareto optimal front corresponds to x1 ∈ [0;1], xi = 0, i = 2,…,30. The optimization task formulation is as follows:

Minimize F1 = x1
Minimize F2 = g · (1 − sqrt(F1/g))        (11)
where g = 1 + (9/(n − 1)) · Σ(i=2..n) xi,  0 ≤ xi ≤ 1, i = 1,…,n;  n = 30
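As with ZDT3 earlier, benchmark (11) can be written down directly (the function name is ours); on the exact front g = 1 and F2 = 1 − sqrt(F1), which makes it easy to verify the exactness of any point claimed to be Pareto optimal:

```python
import math

def zdt1(x):
    """Objectives of the ZDT1 benchmark (11); expects len(x) == 30."""
    f1 = x[0]
    g = 1.0 + 9.0 / (len(x) - 1) * sum(x[1:])
    return f1, g * (1.0 - math.sqrt(f1 / g))

# On the global Pareto front (x2 .. x30 = 0), g = 1 and F2 = 1 - sqrt(F1):
f1, f2 = zdt1([0.25] + [0.0] * 29)
```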
FIG. 11 shows Pareto optimal points found by the MGP algorithm for the benchmark (11). The finite difference method was used to estimate gradients. 18 Pareto optimal points were found out of 522 model evaluations.

The MGP algorithm started from an initial Pareto optimal point (see FIG.11), and performed 17 steps along the Pareto frontier until it hit the end of the frontier. FIG.11 shows the ideally accurate global Pareto optimal points found by the MGP algorithm. The finite difference method was used in this optimization run to estimate gradients, and MGP had to spend 31 model evaluations to estimate gradients on each optimization step. MGP found 18 Pareto optimal points out of 522 model evaluations.

The distance between green and red markers along the x10 axis on FIG.11 (right diagram) indicates the spacing parameter value (0.0001) of the finite difference equation.

FIG. 12 shows Pareto optimal points found by the MGP algorithm for the benchmark (11). The DDRSM method was used to estimate gradients. 18 Pareto optimal points were found out of 38 model evaluations.

Again, the MGP algorithm started from an initial Pareto optimal point (see FIG.12), and performed 17 steps along the Pareto frontier until it hit the end of the frontier. FIG.12 shows the 18 Pareto optimal points found by MGP. This optimization run used the same algorithm parameters, but the DDRSM method instead of the finite difference method to estimate gradients. This time MGP spent just 38 model evaluations, and found the same Pareto optimal solutions with the same accuracy.

As can be seen on FIG.12, DDRSM generated a number of points randomly (see red markers on the left and right diagrams). These points were used to build local approximations for estimating gradients.

Clearly, both methods of gradient estimation allowed MGP to precisely determine the direction of improvement of the preferable objective F1 on each step, as well as the direction of simultaneous improvement for both objectives.
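The 31 evaluations per step quoted above are simply the n + 1 model calls of a forward-difference gradient for n = 30. A minimal sketch, with an evaluation counter and a cheap stand-in model added for illustration:

```python
def forward_difference_gradient(f, x, h=1e-4):
    """Forward-difference gradient: one base evaluation plus one per
    design variable, i.e. n + 1 model evaluations for n variables."""
    fx = f(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - fx) / h)
    return grad

calls = 0
def model(x):                      # a cheap stand-in for a simulation model
    global calls
    calls += 1
    return sum(v * v for v in x)

grad = forward_difference_gradient(model, [0.1] * 30)
# With n = 30 this costs 31 evaluations per gradient, the per-step price
# MGP pays in the finite-difference run, versus DDRSM's flat 5-7.
```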
As a result, the MGP algorithm was able to find, and step along, the global Pareto frontier on each optimization step. All Pareto optimal points match the conditions x1 ∈ [0;1], xi = 0, i = 2,…,30, which means that the optimal solutions are exact in both cases. However, DDRSM spent 522/38 ≈ 13.7 times fewer model evaluations to find the same 18 Pareto optimal points.

FIG.13 shows the conceptual advantage of the directed optimization on the Pareto frontier performed by the MGP algorithm compared with the traditional multi-objective optimization approach of the NSGA-II, AMGA, and Pointer optimization algorithms.

FIG. 13 shows Pareto optimal points found by four algorithms for the benchmark (11): MGP, NSGA-II, AMGA, and Pointer. MGP spent 38 model evaluations and found 18 Pareto optimal points. NSGA-II found 63 first-rank points out of 3500 model evaluations. AMGA and Pointer found 19 and 195 first-rank points, respectively, out of 5000 model evaluations.

As follows from FIG.13, the NSGA-II and Pointer optimization algorithms were able to approach the global Pareto frontier after 3500 and 5000 model evaluations respectively. However, these algorithms were not able to find precisely accurate Pareto optimal points, or to cover the entire Pareto frontier. The AMGA algorithm was not able even to approach the global Pareto frontier after 5000 model evaluations.

The following benchmark problem (12) is a challenging task because it has dozens of Pareto frontiers and five disjoint segments of the global Pareto frontier. The results of the MGP algorithm for this benchmark are compared with the results of state-of-the-art commercial multi-objective optimization algorithms developed by a leading design optimization company: Pointer, NSGA-II, and AMGA.

Since the benchmark (12) has just 10 design variables and 2 objectives, the entire design space and objective space can be visualized on just 6 scatter plots.
Thus, we can see the optimization search pattern for each algorithm, and compare directed optimization on the Pareto frontier with the traditional optimization approach.

Minimize F1 = x1
Minimize F2 = g · h        (12)
where
g = 1 + 10(n − 1) + (x2^2 + x3^2 + … + xn^2) − 10·[cos(4π·x2) + cos(4π·x3) + … + cos(4π·xn)], n = 10;
h = 1 − sqrt(F1/g) − (F1/g)·sin(10π·F1);
X ∈ [0;1]

The following FIG.14 illustrates the optimization results for the benchmark problem (12).
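The reconstructed formulas in (12) can be checked directly; in the sketch below (the function name is ours) the cosine terms cancel the 10(n − 1) offset on the line {x1 ∈ [0,1], x2 = … = x10 = 0}, giving g = 1 there:

```python
import math

def benchmark12(x):
    """Objectives of benchmark (12): n = 10 design variables in [0, 1]."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 10.0 * (n - 1) + sum(v * v - 10.0 * math.cos(4.0 * math.pi * v)
                                   for v in x[1:])
    h = 1.0 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10.0 * math.pi * f1)
    return f1, g * h

# On the global front, g = 1 and F2 = 1 - sqrt(F1) - F1*sin(10*pi*F1):
f1, f2 = benchmark12([0.1] + [0.0] * 9)
```

The sin(10πF1) term in h is what splits the global front into the five disjoint segments discussed above, while the cosine terms in g create the many local fronts.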
FIG. 14 shows all points evaluated by the MGP algorithm and by three other multi-objective optimization algorithms: Pointer, NSGA-II, and AMGA. MGP used DDRSM for gradient estimation; it spent 185 evaluations, and covered all five segments of the global Pareto frontier. Each alternative algorithm spent 2000 model evaluations with much worse results: NSGA-II was able to approach 3 of the 5 segments of the global Pareto frontier; AMGA and Pointer did not find a single Pareto optimal solution.

The global Pareto frontier for the benchmark (12) belongs to the straight line {x1 = 0…1, x2 = x3 = … = x10 = 0}. It was critical for the MGP algorithm to recognize that x1 is the most significant design variable. This was done by DDRSM, and x1 was included in each local approximation model used for gradient estimation. As a result, MGP stepped along the x1 axis from 0 to 1, and covered all five segments of the global Pareto frontier (see FIG.14). DDRSM also helped to recognize that all other design variables are equal to zero at Pareto optimal points; one can see the green points at the origin on the charts on FIG.14. Thus, in contrast with the other algorithms, MGP performed all model evaluations in a small area around the Pareto frontier in the design space (see green and red markers on FIG.14), which improved the accuracy and efficiency of the algorithm.

7. eArtius Design Optimization Tool

eArtius has developed a commercial product, Pareto Explorer, which is a multi-objective optimization and design environment combining a process integration platform with sophisticated optimization algorithms and powerful post-processing capabilities.
Pareto Explorer 2010 implements the optimization algorithms described above, and provides a complete set of functionality necessary for a design optimization tool:
• Intuitive and easy-to-use Graphical User Interface; an advanced IDE paradigm similar to Microsoft Developer Studio 2010 (see FIG.22);
• Interactive 2D/3D graphics based on OpenGL technology;
• Graphical visualization of the optimization process in real time;
• Process integration functionality;
• Statistical Analysis tools embedded in the system;
• Design of Experiments techniques;
• Response Surface Modeling;
• Pre- and post-processing of design information;
• Data import and export.

All the diagrams included in this paper were generated by Pareto Explorer 2010. The diagrams give an idea of the quality of data visualization, the ability to compare different datasets, and the flexible control over the diagrams' appearance.
FIG. 14 shows a screenshot of Pareto Explorer's main window.

In addition to the design optimization environment implemented in Pareto Explorer, eArtius provides all the described algorithms as plug-ins for the Noesis OPTIMUS, ESTECO modeFrontier, and Simulia Isight design optimization environments.

Additional information about eArtius products and design optimization technology can be found at www.eartius.com.

8. Conclusion

The novel gradient-based multi-objective optimization algorithms MGE and MGP have been developed at eArtius. Both algorithms utilize the ability of MGA analysis to find a direction of simultaneous improvement for all objective functions, and provide superior efficiency, with 2-5 evaluations per Pareto optimal point.

Both algorithms allow the user to decrease the volume of the search space by determining an area of interest, reducing in this way the number of necessary model evaluations by orders of magnitude.

The MGE algorithm can start from a given design, and takes just 15-30 model evaluations to find an improved design with respect to all objectives.

The MGP algorithm goes further: it uses the Pareto frontier as a search space, and performs directed optimization on the Pareto frontier in the user's area of interest, determined by a selection of preferred objectives. Avoiding a search of the entire design space, and searching only in the area of interest directly on the Pareto frontier, dramatically reduces the required number of model evaluations. MGP needs just 2-5 evaluations per step, and each step brings a few new Pareto optimal points.

Both the MGE and MGP algorithms are therefore well suited to multi-objective optimization of computationally expensive simulation models that take hours or even days of computational time to perform a single evaluation.

The new response surface method DDRSM has also been developed. DDRSM builds local approximations of the output variables on each optimization step, and estimates gradients consuming just 5-7 evaluations.
DDRSM dynamically recognizes the most significant design variables for each objective and constraint, and filters out non-significant variables. This overcomes the famous "curse of dimensionality" problem: the efficiency of the MGE and MGP algorithms does not depend on the number of design variables. eArtius optimization algorithms are equally efficient with low-dimensional and high-dimensional (up to 5,000 design variables) optimization tasks. DDRSM also eliminates the need for traditional response surface and sensitivity analysis methods, which simplifies the design optimization technology and saves time for engineers.
References

1. Marler, R. T., and Arora, J. S. (2004), "Survey of Multi-Objective Optimization Methods for Engineering", Structural and Multidisciplinary Optimization, 26, 6, 369-395.
2. Bellman, R. E. (1957), Dynamic Programming, Princeton University Press, Princeton, NJ.
3. Simpson, T. W., Booker, A. J., Ghosh, D., Giunta, A. A., Koch, P. N., and Yang, R.-J. (2004), "Approximation Methods in Multidisciplinary Analysis and Optimization: A Panel Discussion", Structural and Multidisciplinary Optimization, 27, 5, 302-313.
4. Sevastyanov, V., and Shaposhnikov, O., "Gradient-Based Methods for Multi-Objective Optimization", Patent Application Serial No. 11/116,503, filed April 28, 2005.
5. Vanderplaats, G. N. (2005), Numerical Optimization Techniques for Engineering Design: With Applications, Fourth Edition, Vanderplaats Research & Development, Inc.
6. Zadeh, L. A. (1963), "Optimality and Non-Scalar-Valued Performance Criteria", IEEE Transactions on Automatic Control, AC-8, 59-60.
7. Zionts, S. (1988), "Multiple Criteria Mathematical Programming: An Updated Overview and Several Approaches", in Mitra, G. (ed.), Mathematical Models for Decision Support, 135-167, Berlin: Springer-Verlag.
8. Das, I., and Dennis, J. E. (1997), "A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems", Structural Optimization, 14, 63-69.
9. Messac, A., Sukam, C. P., and Melachrinoudis, E. (2000), "Aggregate Objective Functions and Pareto Frontiers: Required Relationships and Practical Implications", Optimization and Engineering, 1, 171-188.
10. Messac, A., Sundararaj, G. J., Tappeta, R. V., and Renaud, J. E. (2000), "Ability of Objective Functions to Generate Points on Nonconvex Pareto Frontiers", AIAA Journal, 38, 1084-1091.
