Wind farm layout optimization (WFLO) is the process of optimizing the location of turbines in a wind farm site, with the possible objective of maximizing the energy production or minimizing the average cost of energy. Conventional WFLO methods not only limit themselves to prescribed site boundaries, they are also generally applicable only to designing small-to-medium scale wind farms (<100 turbines). Large-scale wind farms entail greater wake-induced turbine interactions, thereby increasing the computational complexity and expense by orders of magnitude. In this paper, we further advance the Unrestricted WFLO framework by designing the layout of large-scale wind farms with 500 turbines (where energy production is maximized). First, the high-dimensional layout optimization problem (involving 2N variables for an N-turbine wind farm) is reduced to a 6-variable problem through a novel mapping strategy, which allows for both global siting (overall land configuration) and local exploration (turbine micrositing). Second, a surrogate model is used to substitute the expensive analytical WF energy production model; the high computational expense of the latter is attributed to the factorial increase in the number of calls to the wake model for evaluating every candidate wind farm layout that involves a large number of turbines. The powerful Concurrent Surrogate Model Selection (COSMOS) framework is applied to identify the best surrogate model to represent the wind farm energy production as a function of the reduced variable vector. To achieve a reliable optimum solution, the surrogate-based optimization (SBO) is performed by implementing the Adaptive Model Refinement (AMR) technique within Particle Swarm Optimization (PSO). In AMR, both local exploitation and global exploration aspects are considered within a single optimization run of PSO, unlike other SBO methods that often either require multiple (potentially misleading) optimizations or are model-dependent. By using the AMR approach in conjunction with PSO and COSMOS, the computational cost of designing very large wind farms is reduced by a remarkable factor of 26, while the resulting layout remains within 0.05% of the WFLO performed using the original energy production model.
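The AMR/PSO machinery of the paper is not reproduced here, but the overall pattern (PSO searching a cheap surrogate of an expensive model, with occasional true-model evaluations used to refine the surrogate) can be sketched in a few lines of Python; every name and setting below is an illustrative assumption, not the authors' code:

```python
# Minimal sketch of surrogate-assisted PSO with periodic refinement.
# `expensive_model` is a placeholder for the costly energy production model.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_model(x):
    # stand-in for the expensive analytical model (a Rastrigin-like test function)
    return np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10, axis=-1)

dim, n_init, n_particles, iters = 6, 40, 30, 50
lb, ub = -5.0, 5.0

# initial sample set and surrogate
X = rng.uniform(lb, ub, (n_init, dim))
y = expensive_model(X)
surrogate = RBFInterpolator(X, y)

# standard PSO, evaluated on the surrogate instead of the true model
pos = rng.uniform(lb, ub, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), surrogate(pos)
g = pbest[np.argmin(pbest_f)]

for t in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(g - pos)
    pos = np.clip(pos + vel, lb, ub)
    f = surrogate(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[np.argmin(pbest_f)]
    # crude refinement step: periodically evaluate the true model at the
    # incumbent and rebuild the surrogate with the new sample
    if t % 10 == 0 and not np.any(np.all(np.isclose(X, g), axis=1)):
        X = np.vstack([X, g]); y = np.append(y, expensive_model(g))
        surrogate = RBFInterpolator(X, y)

print("best design found:", g, "true value:", expensive_model(g))
```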
This document describes a visually-informed decision-making platform (VIDMAP) for model-based design of wind farms. It aims to quantify and illustrate the criticality of information exchanged between different models in the wind farm layout optimization process. The platform consists of three main components: (1) uncertainty quantification to quantify variability in inputs and uncertainties introduced by upstream models, (2) sensitivity analysis to analyze sensitivity of downstream models, and (3) information visualization to visualize uncertainties and inter-model sensitivities. Sensitivity analysis is performed to quantify the sensitivity of an energy production model to first-level inputs and errors in upstream models like wind distribution, shear, turbine power response, and wake models.
One of the primary drawbacks plaguing wider acceptance of surrogate models is their generally low fidelity. This issue can in large part be attributed to the lack of automated model selection techniques, particularly ones that do not make limiting assumptions regarding the choice of model types and kernel types. A novel model selection technique was recently developed to perform optimal model search concurrently at three levels: (i) optimal model type (e.g., RBF), (ii) optimal kernel type (e.g., multiquadric), and (iii) optimal values of hyper-parameters (e.g., shape parameter) that are conventionally kept constant. The error measures to be minimized in this optimal model selection process are determined by the Predictive Estimation of Model Fidelity (PEMF) method, which has been shown to be significantly more accurate than typical cross-validation-based error metrics. In this paper, we make the following important advancements to the PEMF-based model selection framework, now called the Concurrent Surrogate Model Selection or COSMOS framework: (i) the optimization formulation is modified through binary coding to allow surrogates with differing numbers of candidate kernels and kernels with differing numbers of hyper-parameters (which was previously not allowed); (ii) a robustness criterion, based on the variance of errors, is added to the existing criteria for model selection; and (iii) a larger candidate pool of 16 surrogate-kernel combinations is considered for selection, possibly making COSMOS one of the most comprehensive surrogate model selection frameworks (in theory and implementation) currently available. The effectiveness of the COSMOS framework is demonstrated by successfully applying it to four benchmark problems (with 2-30 variables) and an airfoil design problem. The optimal model selection results illustrate how diverse models provide important tradeoffs for different problems.
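As a toy illustration of what concurrent selection over kernel-type and hyper-parameter candidates looks like, the sketch below enumerates a small candidate pool and scores each with a plain hold-out error; the hold-out error only stands in for PEMF (which the actual COSMOS framework uses), and every setting here is an assumption:

```python
# Hedged sketch of concurrent kernel/hyperparameter selection.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (60, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])          # toy response

# candidate pool: (kernel type, shape parameter); None = no hyperparameter
candidates = ([("thin_plate_spline", None), ("cubic", None)] +
              [("multiquadric", e) for e in (0.5, 1.0, 2.0)] +
              [("gaussian", e) for e in (0.5, 1.0, 2.0)])

def holdout_error(kernel, eps, n_test=15):
    idx = rng.permutation(len(X))
    tr, te = idx[n_test:], idx[:n_test]
    kwargs = {} if eps is None else {"epsilon": eps}
    model = RBFInterpolator(X[tr], y[tr], kernel=kernel, **kwargs)
    return np.median(np.abs(model(X[te]) - y[te]))

scores = [(holdout_error(k, e), k, e) for k, e in candidates]
best = min(scores, key=lambda s: s[0])
print("selected kernel/hyperparameter:", best[1], best[2],
      "median holdout error:", best[0])
```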
This document presents a bi-level framework for visualizing trade-offs in wind farm design between capacity factor and land use. The lower level uses multi-objective optimization to explore the trade-off for different nameplate capacities. The upper level fits curves to the Pareto solutions to parametrically represent the trade-off as a function of nameplate capacity. A numerical experiment applies the framework to a case study exploring capacity factor and land area per MW installed. The framework aims to streamline wind farm planning by quantifying key design trade-offs.
Approximation models (or surrogate models) provide an efficient substitute for expensive physical simulations and an efficient solution to the lack of physical models of system behavior. However, it is challenging to quantify the accuracy and reliability of such approximation models in a region of interest or the overall domain without additional system evaluations. Standard error measures, such as the mean squared error, the cross-validation error, and Akaike's information criterion, provide limited (often inadequate) information regarding the accuracy of the final surrogate. This paper introduces a novel and model-independent concept to quantify the level of errors in the function value estimated by the final surrogate in any given region of the design domain. This method is called the Regional Error Estimation of Surrogate (REES). Assuming the full set of available sample points to be fixed, intermediate surrogates are iteratively constructed over a sample set comprising all samples outside the region of interest and heuristic subsets of samples inside the region of interest (i.e., intermediate training points). The intermediate surrogate is tested over the remaining sample points inside the region of interest (i.e., intermediate test points). The fraction of sample points inside the region of interest that are used as intermediate training points is fixed at each iteration, with the total number of iterations being pre-specified. The estimated median and maximum relative errors within the region of interest for the heuristic subsets at each iteration are used to fit a distribution of the median and maximum error, respectively. The estimated statistical mode of the median and the maximum error, and the absolute maximum error, are then represented as functions of the density of intermediate training points, using regression models. The regression models are then used to predict the expected median and maximum regional errors when all the sample points are used as training points. Standard test functions and a wind farm power generation problem are used to illustrate the effectiveness and the utility of such a regional error quantification method.
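A minimal sketch of the REES-style loop on a toy function, using SciPy's RBF interpolator: the subset sizes, repeat counts, the use of the sample median as a stand-in for the fitted distribution's mode, and the power-law regression below are all illustrative choices, not the paper's exact settings:

```python
# Sketch: estimate surrogate error by regressing subset errors vs. density.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (80, 2))
y = X[:, 0]**2 + X[:, 1]**2 + 5.0      # toy "expensive" response (kept > 0)

fracs = [0.5, 0.6, 0.7, 0.8]           # training fractions per iteration
med_err, sizes = [], []
for frac in fracs:
    n_tr = int(frac * len(X))
    meds = []
    for _ in range(30):                # heuristic subsets at this density
        idx = rng.permutation(len(X))
        tr, te = idx[:n_tr], idx[n_tr:]
        s = RBFInterpolator(X[tr], y[tr])
        rel = np.abs((s(X[te]) - y[te]) / y[te])
        meds.append(np.median(rel))
    # crude stand-in for the mode of the fitted median-error distribution
    med_err.append(np.median(meds))
    sizes.append(n_tr)

# fit a power-law regression err ~ a * n^b and extrapolate to n = len(X)
b, log_a = np.polyfit(np.log(sizes), np.log(med_err), 1)
pred = np.exp(log_a) * len(X)**b
print("predicted median error with all samples as training points:", pred)
```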
The analysis of complex system behavior often demands expensive experiments or computational simulations. Surrogate modeling techniques are often used to provide a tractable and inexpensive approximation of such complex system behavior. Owing to the lack of knowledge regarding the suitability of particular surrogate modeling techniques, a model selection approach can help choose the best surrogate technique. Popular model selection approaches include: (i) split sample, (ii) cross-validation, (iii) bootstrapping, and (iv) Akaike's information criterion (AIC) (Queipo et al. 2005; Bozdogan et al. 2000). However, the effectiveness of these model selection methods is limited by the lack of accurate measures of local and global errors in surrogates.
This paper develops a novel and model-independent concept to quantify the local/global reliability of surrogates, to assist in model selection (in surrogate applications). This method is called the Generalized-Regional Error Estimation of Surrogate (G-REES). In this method, intermediate surrogates are iteratively constructed over heuristic subsets of the available sample points (i.e., intermediate training points), and tested over the remaining available sample points (i.e., intermediate test points). The fraction of sample points used as intermediate training points is fixed at each iteration, with the total number of iterations being pre-specified. The estimated median and maximum relative errors for the heuristic subsets at each iteration are used to fit a distribution of the median and maximum error, respectively. The statistical mode of the median and the maximum error distributions are then determined. These mode values are then represented as functions of the density of training points (at the corresponding iteration). Regression methods, called Variation of Error with Sample Density (VESD), are used for this purpose. The VESD models are then used to predict the expected median and maximum errors, when all the sample points are used as training points.
The effectiveness of the proposed model selection criterion is explored to find the best surrogate among candidates including: (i) Kriging, (ii) Radial Basis Functions (RBF), (iii) Extended Radial Basis Functions (ERBF), and (iv) Quadratic Response Surface (QRS), for standard test functions and a wind farm capacity factor function. The results are compared with the relative accuracy of the surrogates evaluated on additional test points, and also with the prediction sum of squares (PRESS) error given by leave-one-out cross-validation.
The application of G-REES to a standard test problem with two design variables (the Branin-Hoo function) shows that the proposed method predicts the median and the maximum value of the global error with a higher level of confidence compared to PRESS. It also shows that model selection based on the G-REES method is significantly more reliable than that currently performed using error measures such as PRESS.
This document presents an adaptive model switching technique for variable fidelity optimization using population-based algorithms. The technique aims to provide reliable high-fidelity optimum designs with reasonable computational expense by leveraging multiple models of varying fidelity. It switches models by comparing the error distribution of the current model to the distribution of recent fitness function improvements over the population. The method was tested on airfoil and cantilever beam design problems, showing substantially better balance of optimum quality and efficiency than purely low- or high-fidelity optimizations.
In spite of the recent developments in surrogate modeling techniques, the low fidelity of these models often limits their use in practical engineering design optimization. When surrogate models are used to represent the behavior of a complex system, it is challenging to simultaneously obtain high accuracy over the entire design space. When such surrogates are used for optimization, it becomes challenging to find the optimum/optima with certainty. Sequential sampling methods offer a powerful solution to this challenge by providing the surrogate with reasonable accuracy where and when needed. When surrogate-based design optimization (SBDO) is performed using sequential sampling, the typical SBDO process is repeated multiple times, where each time the surrogate is improved by the addition of new sample points. This paper presents a new adaptive approach to add infill points during SBDO, called Adaptive Sequential Sampling (ASS). In this approach, both local exploitation and global exploration aspects are considered for updating the surrogate during optimization, where multiple iterations of the SBDO process are performed to increase the quality of the optimal solution. This approach adaptively improves the accuracy of the surrogate in the region of the current global optimum as well as in the regions of higher relative errors. Based on the initial sample points and the fitted surrogate, the ASS method adds infill points at each iteration in the locations of: (i) the current optimum found based on the fitted surrogate; and (ii) the points generated using crossover between sample points that have relatively higher cross-validation errors. The Nelder-Mead simplex method is adopted as the optimization algorithm. The effectiveness of the proposed method is illustrated using a series of standard numerical test problems.
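A minimal sketch of that two-part infill strategy under assumed toy settings (SciPy's RBF interpolator stands in for the surrogate; the number of high-error parents and the crossover rule are illustrative):

```python
# Sketch: one infill at the surrogate optimum, plus crossover children of
# high leave-one-out-error samples.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, (25, 2))
y = (X**2).sum(axis=1)                 # toy objective values

surr = RBFInterpolator(X, y)

# (i) infill at the current surrogate optimum (Nelder-Mead, as in the paper)
res = minimize(lambda x: surr(x[None])[0], x0=X[np.argmin(y)],
               method="Nelder-Mead")
infill_opt = res.x

# (ii) leave-one-out CV errors, then crossover between the worst points
loo = []
for i in range(len(X)):
    m = RBFInterpolator(np.delete(X, i, 0), np.delete(y, i))
    loo.append(abs(m(X[i:i+1])[0] - y[i]))
worst = np.argsort(loo)[-4:]           # four highest-error samples
pairs = [(worst[0], worst[1]), (worst[2], worst[3])]
alpha = rng.random((len(pairs), 1))
children = np.array([alpha[k]*X[i] + (1 - alpha[k])*X[j]
                     for k, (i, j) in enumerate(pairs)]).reshape(-1, 2)

new_points = np.vstack([infill_opt, children])
print("infill points:\n", new_points)
```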
This paper advances the Domain Segmentation based on Uncertainty in the Surrogate (DSUS) framework, which is a novel approach to characterize the uncertainty in surrogates. The leave-one-out cross-validation technique is adopted in the DSUS framework to measure local errors of a surrogate. A method is proposed in this paper to evaluate the performance of the leave-one-out cross-validation errors as local error measures. This method evaluates local errors by comparing: (i) the leave-one-out cross-validation error with (ii) the actual local error estimated within a local hypercube for each training point. The comparison results show that the leave-one-out cross-validation strategy can capture the local errors of a surrogate. The DSUS framework is then applied to key aspects of wind resource assessment and wind farm cost modeling. The uncertainties in the wind farm cost and the wind power potential are successfully characterized, which provides designers/users more confidence when using these models.
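A short sketch of the comparison the paper proposes, run on a toy function (the hypercube half-width and sample counts are assumed values):

```python
# Sketch: leave-one-out error at each training point vs. the actual error
# measured inside a small hypercube around that point.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (40, 2))
f = lambda X: np.sin(4*X[:, 0]) + X[:, 1]**2   # toy true function
y = f(X)
full = RBFInterpolator(X, y)                   # surrogate on all points

half_width = 0.08                              # local hypercube half-width
for i in range(5):                             # first few points, for brevity
    # (i) leave-one-out cross-validation error at training point i
    m = RBFInterpolator(np.delete(X, i, 0), np.delete(y, i))
    loo = abs(m(X[i:i+1])[0] - y[i])
    # (ii) actual local error: sample the hypercube around point i
    local = np.clip(X[i] + rng.uniform(-half_width, half_width, (200, 2)), 0, 1)
    actual = np.mean(np.abs(full(local) - f(local)))
    print(f"point {i}: LOO error = {loo:.4f}, hypercube error = {actual:.4f}")
```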
This document presents a new 3-level approach to simultaneously select the best surrogate model type, kernel function, and hyper-parameters for approximation models. The approach uses Regional Error Estimation of Surrogates (REES) to evaluate error and select models. It compares a cascaded technique that performs sequential optimization versus a one-step technique. Numerical examples on benchmark problems show the one-step technique reduces maximum and median errors by at least 60% with lower computational cost compared to the cascaded approach. Future work involves applying the one-step method to more complex problems and developing an online platform for collaborative surrogate model selection.
A parsimonious SVM model selection criterion for classification of real-world ...
This paper proposes and optimizes a two-term cost function, consisting of a sparseness term and a generalized v-fold cross-validation term, using a new adaptive particle swarm optimization (APSO). APSO updates its parameters adaptively based on dynamic feedback from the success rate of each particle's personal best. Since the proposed cost function favors fewer support vectors, the complexity of the SVM model decreases while the accuracy remains in an acceptable range. Therefore, the testing time decreases, making SVM more applicable to practical applications on real data sets. A comparative study on data sets from the UCI database is performed between the proposed cost function and a conventional cost function to demonstrate the effectiveness of the proposed cost function.
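The two-term cost can be illustrated with scikit-learn's SVC; the APSO search itself is omitted (a small grid is scanned instead) and the weights w1, w2 are assumed values, so this is a sketch of the cost function only, not the paper's implementation:

```python
# Sketch of a sparseness + v-fold CV two-term SVM cost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
w1, w2 = 0.3, 0.7                      # sparseness vs. CV-error weights (assumed)

def cost(C, gamma, v=5):
    clf = SVC(C=C, gamma=gamma).fit(X, y)
    sparseness = clf.n_support_.sum() / len(X)    # fraction of support vectors
    cv_error = 1 - cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=v).mean()
    return w1 * sparseness + w2 * cv_error

for C in (0.1, 1, 10):
    for gamma in (0.01, 0.1):
        print(f"C={C}, gamma={gamma}, cost={cost(C, gamma):.3f}")
```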
Surrogate-based design is an effective approach for modeling computationally expensive system behavior. In such applications, it is often challenging to characterize the expected accuracy of the surrogate. In addition to global and local error measures, regional error measures can be used to understand and interpret the surrogate accuracy in the regions of interest. This paper develops the Regional Error Estimation of Surrogate (REES) method to quantify the level of error in any given subspace (or region) of the entire domain, when all the available training points have been invested to build the surrogate. In this approach, the accuracy of the surrogate in each subspace is estimated by modeling the variations of the mean and the maximum error in that subspace with increasing number of training points (in an iterative process). A regression model is used for this purpose. At each iteration, the intermediate surrogate is constructed using a subset of the entire training data, and tested over the remaining points. The evaluated errors at the intermediate test points at each iteration are used for training the regression model that represents the error variation with sample points. The effectiveness of the proposed method is illustrated using standard test problems. To this end, the predicted regional errors of the surrogate constructed using all the training points are compared with the regional errors estimated over a large set of test points.
This paper explores the effectiveness of the recently developed surrogate modeling method, the Adaptive Hybrid Functions (AHF), through its application to complex engineered systems design. The AHF is a hybrid surrogate modeling method that seeks to exploit the advantages of each component surrogate. In this paper, the AHF integrates three component surrogate models: (i) the Radial Basis Functions (RBF), (ii) the Extended Radial Basis Functions (E-RBF), and (iii) the Kriging model, by characterizing and evaluating the local measure of accuracy of each model. The AHF is applied to model complex engineering systems and an economic system, namely: (i) wind farm design; (ii) product family design (for universal electric motors); (iii) three-pane window design; and (iv) onshore wind farm cost estimation. We use three differing sampling techniques to investigate their influence on the quality of the resulting surrogates. These sampling techniques are (i) Latin Hypercube Sampling (LHS), (ii) Sobol's quasirandom sequence, and (iii) Hammersley Sequence Sampling (HSS). Cross-validation is used to evaluate the accuracy of the resulting surrogate models. As expected, the accuracy of the surrogate model was found to improve with increase in the sample size. We also observed that the Sobol and LHS sampling techniques performed better in the case of high-dimensional problems, whereas the HSS sampling technique performed better in the case of low-dimensional problems. Overall, the AHF method was observed to provide acceptable-to-high accuracy in representing complex design systems.
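For reference, SciPy's qmc module can generate two of the three sampling plans directly; Hammersley sampling is not in SciPy, so Halton is shown below as a near relative, purely for illustration (dimensions, sample count, and bounds are assumed values):

```python
# Sketch: generating and scaling space-filling designs with scipy.stats.qmc.
import numpy as np
from scipy.stats import qmc

dim, n = 4, 64
lhs = qmc.LatinHypercube(d=dim, seed=0).random(n)
sobol = qmc.Sobol(d=dim, seed=0).random(n)      # n a power of 2 is ideal
halton = qmc.Halton(d=dim, seed=0).random(n)    # stand-in for Hammersley

# scale unit-cube samples to design-variable bounds, e.g. [10, 20]^dim
lb, ub = [10]*dim, [20]*dim
design = qmc.scale(lhs, lb, ub)
print("first LHS design point:", design[0])
```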
This document discusses model-based calibration techniques used to develop an engine calibration to meet emissions standards. It focuses on using design of experiments methods and statistical modeling to optimize engine parameters. Specifically, it describes:
1) Using a two-stage regression approach to separate variables like spark timing into a "local model" and others into a "global model" to better characterize responses.
2) How screening experiments can help select appropriate variables and ranges for the design of experiments to avoid unstable operating points.
3) Techniques for modeling experimental data through "local models" of individual variables and a "global model" that combines local models to reproduce responses for any combination of variables.
This paper proposes a novel model management technique to be applied in population-based heuristic optimization. This technique adaptively selects different computational models (both physics-based and statistical models) to be used during optimization, with the overall goal of ending with high-fidelity solutions in a reasonable time period. For example, in optimizing an aircraft wing to obtain maximum lift-to-drag ratio, one can use low-fidelity models such as given by the vortex lattice method, or a high-fidelity finite volume model (that solves the full Navier-Stokes equations), or a surrogate model that substitutes the high-fidelity model. The information from models with different levels of fidelity is integrated into the heuristic optimization process using a novel model-switching metric. In this context, models could be surrogate models, low-fidelity physics-based analytical models, and medium-to-high fidelity computational models (based on grid density). The model switching technique replaces the current model with the next higher fidelity model when a stochastic switching criterion is met at a given iteration during the optimization process. The switching criterion is based on whether the uncertainty associated with the current model output dominates the latest improvement of the fitness function. In the case of the physics-based models, the uncertainty in their output is quantified through an inverse assessment process by comparing with high-fidelity model responses or experimental data (if available). To determine the fidelity of surrogate models, the Predictive Estimation of Model Fidelity (PEMF) method is applied. The effectiveness of the proposed method is demonstrated by applying it to airfoil optimization with the objective of maximizing the lift-to-drag ratio of the wing under different flow regimes. It was found that the tuned low-fidelity model dominates the optimization process in terms of computational time and function calls.
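A hedged sketch of what such a switching test might look like, with purely synthetic error and improvement numbers (the paper's actual criterion and distributions may well differ):

```python
# Sketch: switch fidelity when the model-error distribution dominates the
# latest fitness improvement.
import numpy as np

rng = np.random.default_rng(5)

# assumed: error samples for the current model (e.g., from PEMF or from
# comparison against high-fidelity data), and recent fitness improvements
model_error = rng.lognormal(mean=-2.0, sigma=0.5, size=1000)
recent_improvements = np.array([0.30, 0.12, 0.05, 0.02])   # shrinking gains

latest_gain = recent_improvements[-1]
p_dominates = np.mean(model_error > latest_gain)

threshold = 0.5                        # illustrative switching probability
if p_dominates > threshold:
    print(f"P(error > gain) = {p_dominates:.2f} -> switch to higher fidelity")
else:
    print(f"P(error > gain) = {p_dominates:.2f} -> keep current model")
```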
This document provides an overview of a study to select a site for a wind-solar hybrid power plant in Turkey using multi-criteria decision making methods. It first outlines the problem definition and literature review on methods like AHP, ANP, TOPSIS. It then proposes using AHP with benefits, opportunities, costs and risks (BOCR) criteria and the ideal matter element method for site selection. The document includes a flow chart of the AHP-BOCR method and shows an example of its application to the case study, including defining criteria and alternatives, collecting data, determining weights, and normalizing alternative performance values.
Performance improvement of a Rainfall Prediction Model using Particle Swarm O...
The performance of statistical time-series forecasting methods can be improved by precise selection of their parameters, and various techniques have been applied to improve the modeling accuracy of these models. Particle swarm optimization is one such technique, which can conveniently be used to determine the model parameters accurately. This robust optimization technique has already been applied to improve the performance of artificial neural networks for time series prediction. This study uses particle swarm optimization to determine the parameters of an exponential autoregressive model for time series prediction. The model is applied to annual rainfall prediction and shows fairly good performance in comparison to the statistical ARIMA model.
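A compact sketch of the idea, fitting an assumed ExpAR(1) form x_t = (a + b*exp(-c*x_{t-1}^2))*x_{t-1} to a synthetic series with a bare-bones PSO (all settings are illustrative; rainfall data is replaced by simulated data):

```python
# Sketch: PSO tuning the parameters (a, b, c) of an ExpAR(1) model.
import numpy as np

rng = np.random.default_rng(6)
true = (0.8, 0.3, 1.0)
x = np.zeros(120)
for t in range(1, 120):
    a, b, c = true
    x[t] = (a + b*np.exp(-c*x[t-1]**2))*x[t-1] + 0.05*rng.standard_normal()

def sse(p):
    a, b, c = p
    pred = (a + b*np.exp(-c*x[:-1]**2))*x[:-1]
    return np.sum((x[1:] - pred)**2)

# minimal PSO over (a, b, c)
n, iters = 20, 100
lo, hi = np.array([-2, -2, 0.01]), np.array([2, 2, 5])
pos = rng.uniform(lo, hi, (n, 3))
vel = np.zeros_like(pos)
pb, pbf = pos.copy(), np.array([sse(p) for p in pos])
g = pb[np.argmin(pbf)]
for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7*vel + 1.5*r1*(pb - pos) + 1.5*r2*(g - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([sse(p) for p in pos])
    imp = f < pbf
    pb[imp], pbf[imp] = pos[imp], f[imp]
    g = pb[np.argmin(pbf)]
print("estimated (a, b, c):", g, " true:", true)
```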
A REVIEW ON OPTIMIZATION OF LEAST SQUARES SUPPORT VECTOR MACHINE FOR TIME SER...
Support Vector Machines have emerged as an active research area in the machine learning community and are extensively used in various fields, including prediction, pattern recognition, and many more. The Least Squares Support Vector Machine (LSSVM), a variant of the Support Vector Machine, offers a better solution strategy. In order to utilize the LSSVM capability in data mining tasks such as prediction, there is a need to optimize its hyperparameters. This paper presents a review of techniques used to optimize these parameters, based on two main classes: Evolutionary Computation and Cross-Validation.
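As a small illustration of the cross-validation class of techniques, scikit-learn's KernelRidge (which shares the LSSVM's squared-loss formulation for regression) can be tuned by grid search; this is a stand-in under that assumption, not an LSSVM library:

```python
# Sketch: cross-validation-based hyperparameter tuning of an LSSVM-like model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + 0.1*rng.standard_normal(200)

grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],      # regularization strength
        "gamma": [0.1, 0.5, 1.0, 2.0]}         # RBF kernel width
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
```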
The document describes the Model Induced Metropolis-Hastings (MIMH) algorithm for efficiently sampling from high-performance regions of costly objective functions. MIMH performs Metropolis-Hastings random walks on a radial basis function network (RBFN) model of the objective function. After each walk, the endpoint is added to the RBFN training set to improve the model. Experiments show MIMH finds good solutions with significantly fewer objective function evaluations than other algorithms like Niching ES, and the number of evaluations can be reduced further by raising the acceptance probability exponent. MIMH provides an effective way to identify high-performance regions at low cost for initializing more greedy optimization methods.
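A hedged sketch of the MIMH loop on a toy objective (step size, walk length, and the acceptance exponent k are assumed values, and a plain RBF interpolator stands in for the RBFN):

```python
# Sketch: Metropolis-Hastings walks on an RBF surrogate, with walk
# endpoints evaluated on the true function and added to the training set.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(8)
f = lambda x: -np.sum((x - 0.3)**2)            # costly objective (maximize)

X = rng.uniform(0, 1, (10, 2))
y = np.array([f(p) for p in X])

k = 5.0                                        # acceptance-probability exponent
for walk in range(20):
    surr = RBFInterpolator(X, y)
    x = X[np.argmax(y)].copy()
    for _ in range(50):                        # MH random walk on the model
        cand = np.clip(x + 0.05*rng.standard_normal(2), 0, 1)
        # acceptance ratio on surrogate values; exponent k sharpens it
        ratio = np.exp(k*(surr(cand[None])[0] - surr(x[None])[0]))
        if rng.random() < min(1.0, ratio):
            x = cand
    # evaluate the walk endpoint on the true function, grow the training set
    if np.min(np.linalg.norm(X - x, axis=1)) > 1e-9:
        X = np.vstack([X, x]); y = np.append(y, f(x))

print("best point found:", X[np.argmax(y)], "value:", y.max())
```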
The document summarizes electricity load forecasting techniques for power system planning. It discusses using curve fitting algorithms to forecast electricity load based on analyzing past load data from 2012. Specifically, it proposes using a Fourier series curve fitting model to predict future load based on factors like temperature, humidity, and time of day or year. The document also briefly describes other common load forecasting techniques including multiple regression, exponential smoothing, and neural networks.
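A minimal sketch of Fourier-series curve fitting on a synthetic hourly load profile (the period and number of harmonics are illustrative choices, not taken from the document):

```python
# Sketch: least-squares fit of a truncated Fourier series to load data.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(24*7)                            # hourly samples, one week
load = 100 + 20*np.sin(2*np.pi*t/24) + 5*rng.standard_normal(t.size)

T, H = 24.0, 3                                 # period (h), no. of harmonics
# design matrix: [1, cos(2*pi*h*t/T), sin(2*pi*h*t/T)] for h = 1..H
cols = [np.ones_like(t, dtype=float)]
for h in range(1, H + 1):
    cols += [np.cos(2*np.pi*h*t/T), np.sin(2*np.pi*h*t/T)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, load, rcond=None)

forecast = A @ coef                            # fitted/predicted load
print("RMS fit error:", np.sqrt(np.mean((forecast - load)**2)))
```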
This document summarizes a research article that proposes using Hidden Semi-Markov Models (HSMMs) for predictive maintenance applications. Some key points:
- HSMMs allow modeling the duration a system spends in each state, which provides more accurate modeling than traditional HMMs for applications where state duration is important.
- The proposed HSMM models state duration with a parametric distribution rather than a non-parametric one, reducing the number of parameters needed. It also does not constrain the type of duration distribution or observation process used.
- The paper describes adapting learning, inference, and prediction algorithms for the proposed HSMM. It also proposes using the Akaike Information Criterion for automated model selection.
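A tiny numerical illustration of AIC-based selection, AIC = 2k - 2 ln L, with lower values preferred (the log-likelihoods below are assumed values for three hypothetical candidate duration distributions):

```python
# Sketch: pick the candidate distribution with the lowest AIC.
candidates = {                 # name: (number of parameters k, log-likelihood)
    "exponential": (1, -412.7),
    "gamma":       (2, -398.2),
    "gaussian":    (2, -405.9),
}
aic = {name: 2*k - 2*ll for name, (k, ll) in candidates.items()}
best = min(aic, key=aic.get)
print(aic, "-> selected:", best)
```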
Congestion Management in Power System by Optimal Location And Sizing of UPFC
The document presents a particle swarm optimization (PSO) algorithm to optimally place and size a unified power flow controller (UPFC) to alleviate congestion in a power system. The PSO algorithm is used to determine the optimal generator dispatch as well as the optimal location and size of a single UPFC. Simulations on a 5-bus test system show that the UPFC is effective at reducing congestion levels both before and after compensation by regulating voltage and controlling active and reactive power flows. The proposed approach minimizes total generation costs, voltage violations, and UPFC investment costs.
Locational marginal pricing framework in secured dispatch scheduling under co...
Design of GCSC Stabilizing Controller for Damping Low Frequency Oscillations
This paper presents a systematic procedure for modeling and simulation of a power system equipped with a FACTS-type Gate Controlled Series Compensator (GCSC) based stabilizer controller. A Single Machine Infinite Bus (SMIB) power system was investigated to evaluate the GCSC stabilizing controller for enhancing the overall dynamic system performance. A PSO algorithm is employed to compute the optimal parameters of the damping controller. Eigenvalue analysis of the system under various operating conditions and nonlinear time-domain simulation are employed to verify the effectiveness and robustness of the GCSC stabilizing controller in damping low-frequency oscillation (LFO) modes.
Comparative Study on the Performance of A Coherency-based Simple Dynamic Equi...
Earlier, a simple dynamic equivalent for a power system external area containing a group of coherent generators was proposed in the literature. This equivalent is based on a new concept of decomposition of generators and a two-level generator aggregation. With the knowledge of only the passive network model of the external area and the total inertia constant of all the generators in this area, the parameters of this equivalent are determinable from a set of measurement data taken solely at a set of boundary buses which separates this area from the rest of the system. The proposed equivalent, therefore, does not require any measurement data at the external area generators. This is an important feature of this equivalent. In this paper, the results of a comparative study on the performance of this dynamic equivalent aggregation with the new inertial aggregation in terms of accuracy are presented. The three test systems that were considered in this comparative investigation are the New England 39-bus 10-generator system, the IEEE 162-bus 17-generator system and the IEEE 145-bus 50-generator system.
Compensation of Data-Loss in Attitude Control of Spacecraft Systems
In this paper, a comprehensive comparison of two robust estimation techniques, namely compensated closed-loop Kalman filtering and open-loop Kalman filtering, is presented. A common problem of data loss in a real-time control system is investigated through these two schemes. The open-loop scheme for dealing with data loss suffers from several shortcomings. These shortcomings are overcome using the compensated scheme, where an accommodating observation signal is obtained through a linear prediction technique (a closed-loop setting) and is adopted at the a posteriori update step. The calculation and employment of the accommodating observation signal incurs computational complexity. For simulation purposes, a linear time-invariant spacecraft model is obtained from the nonlinear spacecraft attitude dynamics through linearization at nonzero equilibrium points, achieved off-line through the Levenberg-Marquardt iterative scheme. An attempt has been made to analyze the selected example from most perspectives in order to display the performance of the two techniques.
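A toy scalar sketch of the two schemes (the accommodation below simply substitutes the predicted observation for a lost measurement, a crude stand-in for the paper's linear-prediction scheme; all constants and the 30% loss rate are assumed):

```python
# Sketch: open-loop (skip update) vs. compensated (substitute predicted
# observation) Kalman filtering under random measurement loss.
import numpy as np

rng = np.random.default_rng(10)
a, c, q, r = 0.95, 1.0, 0.01, 0.1      # dynamics, observation, noise variances
x_true0, steps = 1.0, 100

def run(compensate):
    x, P, xt, err = 0.0, 1.0, x_true0, []
    for _ in range(steps):
        xt = a*xt + np.sqrt(q)*rng.standard_normal()       # true state
        z = c*xt + np.sqrt(r)*rng.standard_normal()        # measurement
        lost = rng.random() < 0.3                          # 30% data loss
        x, P = a*x, a*P*a + q                              # predict
        if not lost or compensate:
            zt = (c*x) if lost else z                      # accommodate if lost
            K = P*c/(c*P*c + r)
            x, P = x + K*(zt - c*x), (1 - K*c)*P           # a posteriori update
        err.append((x - xt)**2)
    return np.mean(err)

print("open-loop MSE:   ", run(compensate=False))
print("compensated MSE: ", run(compensate=True))
```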
IRJET- A Comparative Forecasting Analysis of ARIMA Model Vs Random Forest Alg...
This document presents a comparative analysis of forecasting energy demand using two different methods: an ARIMA time series model and a Random Forest machine learning algorithm. Both methods are applied to monthly and yearly timespan data from a small-scale industrial load dataset. The accuracy of forecasts from each method is compared. The document provides background on the importance of energy forecasting for power grid management. It also describes the ARIMA and Random Forest models in more detail for short-term and long-term load forecasting.
Level set methods are a popular way to solve the image segmentation problem. The solution contour is found by solving an optimization problem where a cost functional is minimized. This paper presents a modified level set approach to simultaneous tissue segmentation and bias correction of Magnetic Resonance Imaging (MRI) images with intensity inhomogeneity. A sliding window is used to transform the gradient intensity domain to another domain, where the distribution overlap between different tissues is significantly suppressed. Tissue segmentation and bias correction are simultaneously achieved via a multiphase level set evolution process. The proposed method is very robust to initialization and is directly compatible with any type of level set implementation. Experiments on images of various modalities demonstrated superior performance over state-of-the-art methods.
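For context, segmentation-oriented level set methods of this kind minimize an energy functional; a standard two-phase (Chan-Vese-type) form, shown here purely as a reference and not necessarily the exact functional of this paper, is

$$E(\phi) = \mu \int \delta(\phi)\,|\nabla\phi|\,d\mathbf{x} \;+\; \lambda_1 \int |I-c_1|^2\,H(\phi)\,d\mathbf{x} \;+\; \lambda_2 \int |I-c_2|^2\,\big(1-H(\phi)\big)\,d\mathbf{x},$$

where $H$ is the Heaviside function, $\delta$ its derivative, and $c_1, c_2$ the mean intensities inside and outside the contour. The corresponding gradient-flow evolution driving the contour is

$$\frac{\partial\phi}{\partial t} = \delta(\phi)\left[\mu\,\mathrm{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) - \lambda_1 (I-c_1)^2 + \lambda_2 (I-c_2)^2\right].$$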
Wind Farm Layout Optimization (WFLO) is a typical model-based complex system design process, where the popular use of low-medium fidelity models is one of the primary sources of uncertainties propagating into the estimated optimum cost of energy (COE). Therefore, the (currently lacking) understanding of the degree of uncertainty inherited and introduced by different models is absolutely critical (i) for making informed modeling decisions, and (ii) for being cognizant of the reliability of the obtained results. A framework called the Visually-Informed Decision-Making Platform (VIDMAP) was recently introduced to quantify and visualize the inter-model sensitivities and the model inherited/induced uncertainties in WFLO. Originally, VIDMAP quantified the uncertainties and sensitivities upstream of the energy production model. This paper advances VIDMAP to provide quantification/visualization of the uncertainties propagating through the entire optimization process, where optimization is performed to determine the micro-siting of 100 turbines with a minimum COE objective. Specifically, we determine (i) the sensitivity of the minimum COE to the top-level system model (energy production model), (ii) the uncertainty introduced by the heuristic optimization algorithm (PSO), and (iii) the net uncertainty in the minimum COE estimate. In VIDMAP, the eFAST method is used for sensitivity analysis, and the model uncertainties are quantified through a combination of Monte Carlo simulation and probabilistic modeling. Based on the estimated sensitivity and uncertainty measures, a color-coded model-block flowchart is then created using the MATLAB GUI.
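The Monte Carlo step of such an uncertainty pipeline can be sketched as follows (the input error distributions and the placeholder COE response are assumptions, standing in for the full optimization-in-the-loop evaluation):

```python
# Sketch: propagate assumed upstream-model errors through a placeholder
# COE model, then fit a probabilistic model to the output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 10000

# assumed input/model uncertainties (illustrative distributions only)
wind_speed_err = rng.normal(0.0, 0.3, n)     # m/s, wind-model error
power_curve_err = rng.normal(0.0, 0.02, n)   # fractional power error

def coe_model(ws_err, pc_err):
    # placeholder cost-of-energy response (toy relation, $/MWh)
    energy = 1.0 + 0.08*ws_err + pc_err
    return 50.0 / np.maximum(energy, 0.1)

coe = coe_model(wind_speed_err, power_curve_err)
# probabilistic model of the output uncertainty (lognormal assumed)
shape, loc, scale = stats.lognorm.fit(coe)
print(f"COE mean = {coe.mean():.2f}, std = {coe.std():.2f}")
print("fitted lognormal params:", shape, loc, scale)
```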
A collection of the PDF presentation published at the Clean Energy Council (CEC) Wind Forum 2016 in Melbourne. Key issues discussed were colocation of wind and solar, noise impacts, planning requirements across Australia and developments in the technology.
This document presents a new 3-level approach to simultaneously select the best surrogate model type, kernel function, and hyper-parameters for approximation models. The approach uses Regional Error Estimation of Surrogates (REES) to evaluate error and select models. It compares a cascaded technique that performs sequential optimization versus a one-step technique. Numerical examples on benchmark problems show the one-step technique reduces maximum and median errors by at least 60% with lower computational cost compared to the cascaded approach. Future work involves applying the one-step method to more complex problems and developing an online platform for collaborative surrogate model selection.
A parsimonious SVM model selection criterion for classification of real-world ...o_almasi
This paper proposes and optimizes a two-term cost function consisting of a sparseness term and a generalized v-fold cross-validation term by a new adaptive particle swarm optimization (APSO). APSO updates its parameters adaptively based on a dynamic feedback from the success rate of the each particle’s personal best. Since the proposed cost function is based on the choosing fewer numbers of support vectors, the complexity of SVM models decreased while the accuracy remains in an acceptable range. Therefore, the testing time decreases and makes SVM more applicable for practical applications in real data sets. A comparative study on data sets of UCI database is performed between the proposed cost function and conventional cost function to demonstrate the effectiveness of the proposed cost function.
Surrogate-based design is an effective approach for modeling computationally expensive system behavior. In such application, it is often challenging to characterize the expected accuracy of the surrogate. In addition to global and local error measures, regional error measures can be used to understand and interpret the surrogate accuracy in the regions of interest. This paper develops the Regional Error Estimation of Surrogate (REES) method to quantify the level of the error in any given subspace (or region) of the entire domain, when all the available training points have been invested to build the surrogate. In this approach, the accuracy of the surrogate in each subspace is estimated by modeling the variations of the mean and the maximum error in that subspace with increasing number of training points (in an iterative process). A regression model is used for this purpose. At each iteration, the intermediate surrogate is constructed using a subset of the entire training data, and tested over the remaining points. The evaluated errors at the intermediate test points at each iteration are used for training the regression model that represents the error variation with sample points. The effectiveness of the proposed method is illustrated using standard test problems. To this end, the predicted regional errors of the surrogate constructed using all the training points are compared with the regional errors estimated over a large set of test points.
This paper explores the effectiveness of the recently devel- oped surrogate modeling method, the Adaptive Hybrid Functions (AHF), through its application to complex engineered systems design. The AHF is a hybrid surrogate modeling method that seeks to exploit the advantages of each component surrogate. In this paper, the AHF integrates three component surrogate mod- els: (i) the Radial Basis Functions (RBF), (ii) the Extended Ra- dial Basis Functions (E-RBF), and (iii) the Kriging model, by characterizing and evaluating the local measure of accuracy of each model. The AHF is applied to model complex engineer- ing systems and an economic system, namely: (i) wind farm de- sign; (ii) product family design (for universal electric motors); (iii) three-pane window design; and (iv) onshore wind farm cost estimation. We use three differing sampling techniques to inves- tigate their influence on the quality of the resulting surrogates. These sampling techniques are (i) Latin Hypercube Sampling
∗Doctoral Student, Multidisciplinary Design and Optimization Laboratory, Department of Mechanical, Aerospace and Nuclear Engineering, ASME student member.
†Distinguished Professor and Department Chair. Department of Mechanical and Aerospace Engineering, ASME Lifetime Fellow. Corresponding author.
‡Associate Professor, Department of Mechanical Aerospace and Nuclear En- gineering, ASME member (LHS), (ii) Sobol’s quasirandom sequence, and (iii) Hammers- ley Sequence Sampling (HSS). Cross-validation is used to evalu- ate the accuracy of the resulting surrogate models. As expected, the accuracy of the surrogate model was found to improve with increase in the sample size. We also observed that, the Sobol’s and the LHS sampling techniques performed better in the case of high-dimensional problems, whereas the HSS sampling tech- nique performed better in the case of low-dimensional problems. Overall, the AHF method was observed to provide acceptable- to-high accuracy in representing complex design systems.
This document discusses model-based calibration techniques used to develop an engine calibration to meet emissions standards. It focuses on using design of experiments methods and statistical modeling to optimize engine parameters. Specifically, it describes:
1) Using a two-stage regression approach to separate variables like spark timing into a "local model" and others into a "global model" to better characterize responses.
2) How screening experiments can help select appropriate variables and ranges for the design of experiments to avoid unstable operating points.
3) Techniques for modeling experimental data through "local models" of individual variables and a "global model" that combines local models to reproduce responses for any combination of variables.
This paper proposes a novel model management technique to be applied in population- based heuristic optimization. This technique adaptively selects different computational models (both physics-based and statistical models) to be used during optimization, with the overall goal to end with high fidelity solutions in a reasonable time period. For example, in optimizing an aircraft wing to obtain maximum lift-to-drag ratio, one can use low-fidelity models such as given by the vortex lattice method, or a high-fidelity finite volume model (that solves the full Navier-Stokes equations), or a surrogate model that substitutes the high-fidelity model.The information from models with different levels of fidelity is inte- grated into the heuristic optimization process using a novel model-switching metric. In this context, models could be surrogate models, low-fidelity physics-based analytical mod- els, and medium-to-high fidelity computational models (based on grid density). The model switching technique replaces the current model with the next higher fidelity model, when a stochastic switching criterion is met at a given iteration during the optimization process. The switching criteria is based on whether the uncertainty associated with the current model output dominates the latest improvement of the fitness function. In the case of the physics-based models, the uncertainty in their output is quantified through an inverse assessment process by comparing with high-fidelity model responses or experimental data (if available). To determine the fidelity of surrogate models, the Predictive Estimation of Model Fidelity (PEMF) method is applied. The effectiveness of the proposed method is demonstrated by applying it to airfoil optimization with the objective to maximize the lift to drag ratio of the wing under different flow regimes. It was found that the tuned low fidelity model dominates the optimization process in terms of computational time and function calls.
This document provides an overview of a study to select a site for a wind-solar hybrid power plant in Turkey using multi-criteria decision making methods. It first outlines the problem definition and literature review on methods like AHP, ANP, TOPSIS. It then proposes using AHP with benefits, opportunities, costs and risks (BOCR) criteria and the ideal matter element method for site selection. The document includes a flow chart of the AHP-BOCR method and shows an example of its application to the case study, including defining criteria and alternatives, collecting data, determining weights, and normalizing alternative performance values.
Performance improvement of a Rainfall Prediction Model using Particle Swarm O...ijceronline
The performances of the statistical methods of time series forecast can be improved by precise selection of their parameters. Various techniques are being applied to improve the modeling accuracy of these models. Particle swarm optimization is one such technique which can be conveniently used to determine the model parameters accurately. This robust optimization technique has already been applied to improve the performance of artificial neural networks for time series prediction. This study uses particle swarm optimization technique to determine the parameters of an exponential autoregressive model for time series prediction. The model is applied for annual rainfall prediction and it shows a fairly good performance in comparison to the statistical ARIMA model
A REVIEW ON OPTIMIZATION OF LEAST SQUARES SUPPORT VECTOR MACHINE FOR TIME SER...ijaia
Support Vector Machine has appeared as an active study in machine learning community and extensively
used in various fields including in prediction, pattern recognition and many more. However, the Least
Squares Support Vector Machine which is a variant of Support Vector Machine offers better solution
strategy. In order to utilize the LSSVM capability in data mining task such as prediction, there is a need to
optimize its hyper parameters. This paper presents a review on techniques used to optimize the parameters
based on two main classes; Evolutionary Computation and Cross Validation.
The document describes the Model Induced Metropolis-Hastings (MIMH) algorithm for efficiently sampling from high-performance regions of costly objective functions. MIMH performs Metropolis-Hastings random walks on a radial basis function network (RBFN) model of the objective function. After each walk, the endpoint is added to the RBFN training set to improve the model. Experiments show MIMH finds good solutions with significantly fewer objective function evaluations than other algorithms like Niching ES, and the number of evaluations can be reduced further by raising the acceptance probability exponent. MIMH provides an effective way to identify high-performance regions at low cost for initializing more greedy optimization methods.
The document summarizes electricity load forecasting techniques for power system planning. It discusses using curve fitting algorithms to forecast electricity load based on analyzing past load data from 2012. Specifically, it proposes using a Fourier series curve fitting model to predict future load based on factors like temperature, humidity, and time of day or year. The document also briefly describes other common load forecasting techniques including multiple regression, exponential smoothing, and neural networks.
This document summarizes a research article that proposes using Hidden Semi-Markov Models (HSMMs) for predictive maintenance applications. Some key points:
- HSMMs allow modeling the duration a system spends in each state, which provides more accurate modeling than traditional HMMs for applications where state duration is important.
- The proposed HSMM models state duration with a parametric distribution rather than a non-parametric one, reducing the number of parameters needed. It also does not constrain the type of duration distribution or observation process used.
- The paper describes adapting learning, inference and prediction algorithms for the proposed HSMM. It also proposes using the Akaike Information Criterion for automated model selection.
Congestion Management in Power System by Optimal Location And Sizing of UPFC
The document presents a particle swarm optimization (PSO) algorithm to optimally place and size a unified power flow controller (UPFC) to alleviate congestion in a power system. The PSO algorithm is used to determine the optimal generator dispatch as well as the optimal location and size of a single UPFC. Simulations on a 5-bus test system show that the UPFC is effective at reducing congestion levels both before and after compensation by regulating voltage and controlling active and reactive power flows. The proposed approach minimizes total generation costs, voltage violations, and UPFC investment costs.
Locational marginal pricing framework in secured dispatch scheduling under co...
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Design of GCSC Stabilizing Controller for Damping Low Frequency Oscillations
This paper presents a systematic procedure for modeling and simulation of a power system equipped with a FACTS-type Gate Controlled Series Compensator (GCSC) based stabilizer controller. A Single Machine Infinite Bus (SMIB) power system was investigated to evaluate the GCSC stabilizing controller for enhancing the overall dynamic system performance. A PSO algorithm is employed to compute the optimal parameters of the damping controller. Eigenvalue analysis of the system under various operating conditions and nonlinear time-domain simulation are employed to verify the effectiveness and robustness of the GCSC stabilizing controller in damping low frequency oscillation (LFO) modes.
Comparative Study on the Performance of A Coherency-based Simple Dynamic Equi...
Earlier, a simple dynamic equivalent for a power system external area containing a group of coherent generators was proposed in the literature. This equivalent is based on a new concept of decomposition of generators and a two-level generator aggregation. With the knowledge of only the passive network model of the external area and the total inertia constant of all the generators in this area, the parameters of this equivalent are determinable from a set of measurement data taken solely at a set of boundary buses which separates this area from the rest of the system. The proposed equivalent, therefore, does not require any measurement data at the external area generators. This is an important feature of this equivalent. In this paper, the results of a comparative study on the performance of this dynamic equivalent aggregation with the new inertial aggregation in terms of accuracy are presented. The three test systems that were considered in this comparative investigation are the New England 39-bus 10-generator system, the IEEE 162-bus 17-generator system and the IEEE 145-bus 50-generator system.
Compensation of Data-Loss in Attitude Control of Spacecraft Systems
In this paper, a comprehensive comparison of two robust estimation techniques, namely compensated closed-loop Kalman filtering and open-loop Kalman filtering, is presented. The common problem of data loss in a real-time control system is investigated through these two schemes. The open-loop scheme, in dealing with data loss, suffers from several shortcomings. These shortcomings are overcome using the compensated scheme, where an accommodating observation signal is obtained through a linear prediction technique (a closed-loop setting) and is adopted at the a posteriori update step. The calculation and employment of the accommodating observation signal introduces computational complexity. For simulation purposes, a linear time-invariant spacecraft model is obtained from the nonlinear spacecraft attitude dynamics through linearization at nonzero equilibrium points, achieved off-line through the Levenberg-Marquardt iterative scheme. An attempt has been made to analyze the selected example from multiple perspectives in order to display the performance of the two techniques.
IRJET- A Comparative Forecasting Analysis of ARIMA Model Vs Random Forest Alg...
This document presents a comparative analysis of forecasting energy demand using two different methods: an ARIMA time series model and a Random Forest machine learning algorithm. Both methods are applied to monthly and yearly timespan data from a small-scale industrial load dataset. The accuracy of forecasts from each method is compared. The document provides background on the importance of energy forecasting for power grid management. It also describes the ARIMA and Random Forest models in more detail for short-term and long-term load forecasting.
Level set methods are a popular way to solve the image segmentation problem, where the solution contour is found by solving an optimization problem in which a cost functional is minimized. This paper presents a modified level set approach to simultaneous tissue segmentation and bias correction of Magnetic Resonance Imaging (MRI) images with intensity inhomogeneity. A sliding window is used to transform the gradient intensity domain to another domain, where the distribution overlap between different tissues is significantly suppressed. Tissue segmentation and bias correction are simultaneously achieved via a multiphase level set evolution process. The proposed method is very robust to initialization and is directly compatible with any type of level set implementation. Experiments on images of various modalities demonstrated superior performance over state-of-the-art methods.
Wind Farm Layout Optimization (WFLO) is a typical model-based complex system design process, where the popular use of low-medium fidelity models is one of the primary sources of uncertainties propagating into the estimated optimum cost of energy (COE). Therefore, the (currently lacking) understanding of the degree of uncertainty inherited and introduced by different models is absolutely critical (i) for making informed modeling decisions, and (ii) for being cognizant of the reliability of the obtained results. A framework called the Visually-Informed Decision-Making Platform (VIDMAP) was recently introduced to quantify and visualize the inter-model sensitivities and the model inherited/induced uncertainties in WFLO. Originally, VIDMAP quantified the uncertainties and sensitivities upstream of the energy production model. This paper advances VIDMAP to provide quantification/visualization of the uncertainties propagating through the entire optimization process, where optimization is performed to determine the micro-siting of 100 turbines with a minimum COE objective. Specifically, we determine (i) the sensitivity of the minimum COE to the top-level system model (energy production model), (ii) the uncertainty introduced by the heuristic optimization algorithm (PSO), and (iii) the net uncertainty in the minimum COE estimate. In VIDMAP, the eFAST method is used for sensitivity analysis, and the model uncertainties are quantified through a combination of Monte Carlo simulation and probabilistic modeling. Based on the estimated sensitivity and uncertainty measures, a color-coded model-block flowchart is then created using the MATLAB GUI.
A collection of the PDF presentation published at the Clean Energy Council (CEC) Wind Forum 2016 in Melbourne. Key issues discussed were colocation of wind and solar, noise impacts, planning requirements across Australia and developments in the technology.
The analysis of complex system behavior often demands expensive experiments or computational simulations. Surrogate modeling techniques are often used to provide a tractable and inexpensive approximation of such complex system behavior. Owing to the lack of any general guidelines regarding the suitability of different surrogate models for different applications, a model selection approach can be helpful to choose the best surrogate technique. This paper investigates the effectiveness of a recently developed method for surrogate error quantification, called the Regional Error Estimation of Surrogate (REES) method, to select the best surrogate model based on the level of accuracy. The REES method is developed based on the concept that the accuracy of approximation methods is related to the amount of available resources. In the REES method, intermediate surrogates are iteratively constructed over heuristic subsets of the available sample points (i.e., intermediate training points) and tested over the remaining available sample points (i.e., intermediate test points). The statistical modes of the median and the maximum error distributions are selected to represent the overall and maximum error at each iteration. The estimated modes of the median and maximum error distributions are then represented as functions of the number of intermediate training points using a regression model. The regression models are used to predict the overall and minimum accuracy of the final surrogate. These two error measures are then applied to select the best surrogate. The proposed model selection technique is applied to select the best surrogate among (i) Kriging, (ii) Radial Basis Functions (RBF), (iii) Extended Radial Basis Functions (E-RBF), and (iv) Quadratic Response Surface (QRS), for standard test functions and a wind farm power generation function. The REES-based model selection is compared with (i) model selection based on cross-validation errors and (ii) model selection based on error estimated on a large set of additional test points; the latter is assumed to provide the correct model selection. The REES-based model selection is found to be significantly more accurate than that based on cross-validation errors.
The performance expectations for commercial wind turbines, from a variety of geographical regions with differing wind regimes, present significant techno-commercial challenges to manufacturers. The determination of which commercial turbine types perform the best under differing wind regimes can provide unique insights into the complex demands of a concerned target market. In this paper, a comprehensive methodology is developed to explore the suitability of commercially available wind turbines (when operating as a group/array) to the various wind regimes occurring over a large target market. The three major steps of this methodology include: (i) characterizing the geographical variation of wind regimes in the target market, (ii) determining the best performing turbines (in terms of minimum COE accomplished) for different wind regimes, and (iii) developing a metric to investigate the performance-based expected market suitability of currently available turbine feature combinations. The best performing turbines for different wind regimes are determined using the Unrestricted Wind Farm Layout Optimization (UWFLO) method. Expectedly, the larger sized and higher rated-power turbines provide better performance at lower average wind speeds. However, for wind resources higher than class-4, the performances of lower-rated power turbines are fairly competitive, which could make them better choices for sites with complex terrain or remote locations. In addition, turbines with direct drive are observed to perform significantly better than turbines with more conventional gear-based drive-trains. The market considered in this paper is mainland USA, for which wind map information is obtained from NREL. Interestingly, it is found that overall higher rated-power turbines with relatively lower tower heights are most favored in the onshore US market.
Approximation models (or surrogate models) provide an efficient substitute to expensive physical simulations and an efficient solution to the lack of physical models of system behavior. However, it is challenging to quantify the accuracy and reliability of such approximation models in a region of interest or the overall domain without additional system evaluations. Standard error measures, such as the mean squared error, the cross-validation error, and the Akaike information criterion, provide limited (often inadequate) information regarding the accuracy of the final surrogate. This paper introduces a novel and model-independent concept to quantify the level of errors in the function value estimated by the final surrogate in any given region of the design domain. This method is called the Regional Error Estimation of Surrogate (REES). Assuming the full set of available sample points to be fixed, intermediate surrogates are iteratively constructed over a sample set comprising all samples outside the region of interest and heuristic subsets of samples inside the region of interest (i.e., intermediate training points). The intermediate surrogate is tested over the remaining sample points inside the region of interest (i.e., intermediate test points). The fraction of sample points inside the region of interest which are used as intermediate training points is fixed at each iteration, with the total number of iterations being pre-specified. The estimated median and maximum relative errors within the region of interest for the heuristic subsets at each iteration are used to fit a distribution of the median and maximum error, respectively. The estimated statistical mode of the median and the maximum error, and the absolute maximum error, are then represented as functions of the density of intermediate training points, using regression models. The regression models are then used to predict the expected median and maximum regional errors when all the sample points are used as training points. Standard test functions and a wind farm power generation problem are used to illustrate the effectiveness and the utility of such a regional error quantification method.
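Since the REES abstracts above all describe the same iterative mechanics (train intermediate surrogates on subsets, test on the held-out points, regress the error trend, extrapolate), a compact sketch may help. The vectorized `fit` callback, the median-of-medians stand-in for the statistical mode, and the power-law regression form are all assumptions for illustration, not the papers' exact formulation:

```python
import numpy as np

def rees_error_estimate(X, y, fit, fractions=(0.4, 0.55, 0.7, 0.85),
                        n_repeats=20, seed=0):
    """REES-style sketch: extrapolate the final surrogate's median error.

    `fit(Xtr, ytr)` must return a vectorized predictor. For each training
    fraction, intermediate surrogates are built on random subsets and tested
    on the remaining points; the error trend vs. training-set size is then
    regressed and extrapolated to the full sample size.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    sizes, med_errors = [], []
    for frac in fractions:
        k = int(frac * n)
        meds = []
        for _ in range(n_repeats):
            idx = rng.permutation(n)
            tr, te = idx[:k], idx[k:]
            predict = fit(X[tr], y[tr])
            meds.append(np.median(np.abs(predict(X[te]) - y[te])))
        sizes.append(k)
        med_errors.append(np.median(meds))  # stand-in for the modal value
    # Assumed power-law decay of error with training size; extrapolate to n
    slope, intercept = np.polyfit(np.log(sizes), np.log(med_errors), 1)
    return float(np.exp(intercept) * n ** slope)
```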
This paper advances the Domain Segmentation based on Uncertainty in the Surrogate (DSUS) framework, which is a novel approach to characterize the uncertainty in surrogates. The leave-one-out cross-validation technique is adopted in the DSUS framework to measure the local errors of a surrogate. A method is proposed in this paper to evaluate the performance of the leave-one-out cross-validation errors as local error measures. This method evaluates local errors by comparing (i) the leave-one-out cross-validation error with (ii) the actual local error estimated within a local hypercube for each training point. The comparison results show that the leave-one-out cross-validation strategy can capture the local errors of a surrogate. The DSUS framework is then applied to key aspects of wind resource assessment and wind farm cost modeling. The uncertainties in the wind farm cost and the wind power potential are successfully characterized, which provides designers/users more confidence when using these models.
Surrogate-based design is an effective approach for modeling computationally expensive system behavior. In such applications, it is often challenging to characterize the expected accuracy of the surrogate. In addition to global and local error measures, regional error measures can be used to understand and interpret the surrogate accuracy in the regions of interest. This paper develops the Regional Error Estimation of Surrogate (REES) method to quantify the level of the error in any given subspace (or region) of the entire domain, when all the available training points have been invested to build the surrogate. In this approach, the accuracy of the surrogate in each subspace is estimated by modeling the variations of the mean and the maximum error in that subspace with an increasing number of training points (in an iterative process). A regression model is used for this purpose. At each iteration, the intermediate surrogate is constructed using a subset of the entire training data and tested over the remaining points. The evaluated errors at the intermediate test points at each iteration are used for training the regression model that represents the error variation with sample points. The effectiveness of the proposed method is illustrated using standard test problems. To this end, the predicted regional errors of the surrogate constructed using all the training points are compared with the regional errors estimated over a large set of test points.
Wind resources vary significantly in strength from one location to another over a wide geographical region. The major turbine manufacturers offer a family/series of wind turbines to suit the market needs of different wind regimes. The current state of the art in wind farm design however does not provide quantitative guidelines regarding what turbine feature combinations are suitable for different wind regimes, when turbines are operating as a group in an optimized layout. This paper provides a unique exploration of the best tradeoffs between the cost and the capacity factor of wind farms (of specified nameplate capacity), provided by the currently available turbines for different wind classes. To this end, the best performing turbines for different wind resource strengths are identified by minimizing the cost of energy through wind farm layout optimization. Exploration of the "cost - capacity factor" tradeoffs is then performed for the wind resource strengths corresponding to the wind classes defined in the 7-class system. The best tradeoff turbines are determined by searching for the non-dominated set of turbines out of the pool of best performing turbines of different rated powers. The medium priced turbines are observed to provide the most attractive tradeoffs − 15% more capacity factor than the cheapest tradeoff turbines and only 5% less capacity factor than the most expensive tradeoff turbines. It was found that although the "cost - capacity factor" tradeoff curve expectedly shifted towards higher capacity factors with increasing wind class, the trend of the tradeoff curve remained practically similar. Further analysis showed that the "rated power - rotor diameter" combination and the "rotor diameter/hub height" ratios are very important considerations in the current selection and further evolution of turbine designs. We found that larger rotor diameters are not preferred for mid-range turbines with rated powers between 1.5 - 2.5 MW, and "rotor diameter/hub height" ratios greater than 1.1 are not preferred by any of the wind classes.
The performance of a wind farm is affected by several key factors that can be classified into two categories: the natural factors and the design factors. Hence, the planning of a wind farm requires a clear quantitative understanding of how the balance between the concerned objectives (e.g., socio-economic, engineering, and environmental objectives) is affected by these key factors. This understanding is lacking in the state of the art in wind farm design. The wind farm capacity factor is one of the primary performance criteria of a wind energy project. For a given land (or sea area) and wind resource, the maximum capacity factor of a particular number of wind turbines can be reached by optimally adjusting the layout of turbines. However, this layout adjustment is constrained owing to the limited land resource. This paper proposes a Bi-level Multi-objective Wind Farm Optimization (BMWFO) framework for planning effective wind energy projects. Two important performance objectives considered in this paper are: (i) wind farm Capacity Factor (CF) and (ii) Land Area per MW Installed (LAMI). Turbine locations, land area, and nameplate capacity are treated as design variables in this work. In the proposed framework, the Capacity Factor - Land Area per MW Installed (CF - LAMI) trade-off is parametrically represented as a function of the nameplate capacity. Such a helpful parameterization of trade-offs is unique in the wind energy literature. The farm output is computed using the wind farm power generation model adopted from the Unrestricted Wind Farm Layout Optimization (UWFLO) framework. The Smallest Bounding Rectangle (SBR) enclosing all turbines is used to calculate the actual land area occupied by the farm site. The wind farm layout optimization is performed in the lower level using the Mixed-Discrete Particle Swarm Optimization (MDPSO), while the CF - LAMI trade-off is parameterized in the upper level. In this work, the CF - LAMI trade-off is successfully quantified by nameplate capacity in the 20 MW to 100 MW range. The Pareto curves obtained from the proposed framework provide important insights into the trade-offs between the two performance objectives, which can significantly streamline the decision-making process in wind farm development.
This document presents the results of a sensitivity analysis of key factors that influence the power output of array-like and optimized wind farms. The analysis investigated how sensitive farm output is to factors such as incoming wind speed, ambient turbulence, land area per MW installed, land aspect ratio, and nameplate capacity. It found that for array-like farms, incoming wind speed has the dominant impact on power generation across different wake models. For optimized farms, incoming wind speed remains the dominant factor at lower wind speeds, but design factors like land area and aspect ratio influence output more when wind speeds are near rated levels. Overall, natural factors like wind speed have a greater contribution to farm output than design factors.
In developing complex engineering systems, model-based design approaches often face critical challenges due to pervasive uncertainties and high computational expense. These challenges could be alleviated to a significant extent through informed modeling decisions, such as model substitution, parameter estimation, localized re-sampling, or grid refinement. Informed modeling decisions therefore necessitate (currently lacking) design frameworks that effectively integrate design automation and human decision-making. In this paper, we seek to address this necessity in the context of designing wind farm layouts, by taking an information flow perspective of this typical model-based design process. Specifically, we develop a visual representation of the uncertainties inherited and generated by models and the inter-model sensitivities. This framework is called the Visually-Informed Decision-Making Platform (VIDMAP) for wind farm design. The eFAST method is used for sensitivity analysis, in order to determine both the first-order and the total-order indices. The uncertainties in the independent inputs are quantified based on their observed variance. The uncertainties generated by the upstream models are quantified through a Monte Carlo simulation followed by probabilistic modeling of (i) the error in the output of the models (if high-fidelity estimates are available), or (ii) the deviation in the outputs estimated by different alternatives/versions of the model. The GUI in VIDMAP is created using value-proportional colors for each model block and inter-model connector, to respectively represent the uncertainty in the model output and the impact (downstream) of the information being relayed by the connector. Wind farm layout optimization (WFLO) serves as an excellent platform to develop and explore VIDMAP, where WFLO is generally performed using low fidelity models, as high-fidelity models (e.g., LES) tend to be computationally prohibitive in this context. The final VIDMAP obtained sheds new light into the sensitivity of wind farm energy estimation on the different models and their associated uncertainties.
In spite of the recent developments in surrogate modeling techniques, the low fidelity of these models often limits their use in practical engineering design optimization. When surrogate models are used to represent the behavior of a complex system, it is challenging to simultaneously obtain high accuracy over the entire design space. When such surrogates are used for optimization, it becomes challenging to find the optimum/optima with certainty. Sequential sampling methods offer a powerful solution to this challenge by providing the surrogate with reasonable accuracy where and when needed. When surrogate-based design optimization (SBDO) is performed using sequential sampling, the typical SBDO process is repeated multiple times, where each time the surrogate is improved by the addition of new sample points. This paper presents a new adaptive approach to add infill points during SBDO, called Adaptive Sequential Sampling (ASS). In this approach, both local exploitation and global exploration aspects are considered for updating the surrogate during optimization, where multiple iterations of the SBDO process are performed to increase the quality of the optimal solution. This approach adaptively improves the accuracy of the surrogate in the region of the current global optimum as well as in the regions of higher relative errors. Based on the initial sample points and the fitted surrogate, the ASS method adds infill points at each iteration in the locations of: (i) the current optimum found based on the fitted surrogate; and (ii) the points generated using cross-over between sample points that have relatively higher cross-validation errors. The Nelder and Mead Simplex method is adopted as the optimization algorithm. The effectiveness of the proposed method is illustrated using a series of standard numerical test problems.
This document describes a dissertation that aims to improve 3D stereo reconstruction of human faces by combining it with a generic morphable face model. The dissertation first discusses background topics like facial landmark annotation, 3D morphable face models, texture representation, stereo reconstruction and face model deformation. It then describes the proposed scheme which involves steps like landmark annotation, pose estimation, shape fitting, texture extraction, stereo reconstruction from image pairs and deformation of the face model. The results show that fusing the stereo reconstruction with a single image reconstruction using a morphable model leads to a more accurate 3D face model compared to using either method alone. Finally, the deformed face model is visualized on a smartphone using a cardboard viewer.
This document presents an introduction to information systems. It defines key concepts such as computer systems, information systems, companies, systems analysis, systems design, and users. It explains the different computing models of a company, such as marketing and sales, manufacturing and production, purchasing, human resources, finance, and billing. Finally, it introduces additional concepts such as open systems, closed systems, and the acronym CTI.
This summary presents the relevant professional and educational background of Dardo Dagfal. Dagfal is an Argentine industrial and graphic designer with experience working in Argentina, Brazil, and as an independent consultant since 1987. He has completed several postgraduate degrees and courses in design, engineering, software, and marketing. He currently works as an independent designer and strategic design consultant.
Precision Marketing for when you need more Marketing Horsepower.
This deck is for companies who want to have success in their marketing programs across various channels (social, email, PPC, advertisement, content, and others), and provides strategic and program-level tactics for success.
Uso obras teatrales dramatización en idc by Eliud Gamez Sr.
This document argues against the use of puppets, theatrical plays, and clowns to teach the doctrine of Christ. It points out that the Word of God, not entertainment, is what converts people. The church should not provide social entertainment but preach the gospel. Furthermore, dramatization involves feigning and pretending to be someone else, which is prohibited by God and is similar to hypocrisy. The conclusion is that dramatization is not the correct way to preach the gospel.
This presentation was given during the February 2009 Coos Library Directors' meeting to convince them to look into adopting an open source integrated library system (ILS) such as Koha or Evergreen. The Coos County Library Service District is located on the southern Oregon coast.
The planning of a wind farm, which minimizes the project costs and maximizes the power generation capacity, presents significant challenges to today's wind energy industry. An optimal wind farm planning strategy that accounts for the key factors (that can be designed) influencing the net power generation offers a powerful solution to these daunting challenges. This paper explores the influences of (i) the number of turbines, (ii) the farm size, and (iii) the use of a combination of turbines with differing rotor diameters, on the optimal power generated by a wind farm. We use a recently developed method of arranging turbines in a wind farm (the Unrestricted Wind Farm Layout Optimization (UWFLO)) to maximize the farm efficiency. Response surface based cost models are used to estimate the cost of the wind farm as a function of the turbine rotor diameters and the number of turbines. Optimization is performed using a Particle Swarm Optimization (PSO) algorithm. A robust mixed-discrete version of the PSO algorithm is implemented to appropriately account for the discrete choice of feasible rotor diameters. The use of an optimal combination of turbines with differing rotor diameters was observed to significantly improve the net power generation. Exploration of the influences of (i) the number of turbines, and (ii) the farm size, on the cost per kW of power produced provided interesting observations.
The document proposes a flexible optimization framework to maximize the annual energy production of wind farms by simultaneously optimizing turbine layout and turbine type selection. It develops models for wind distribution, power generation, cost, and uses a mixed-discrete particle swarm optimization method. The framework is tested on a case study wind farm and results show up to a 6% increase in energy production over a conventional array layout approach by allowing optimization of turbine types.
The development of utility-scale wind farms that can produce energy at a cost comparable to that of conventional energy resources presents significant challenges to today's wind energy industry. The consideration of the combined impact of key design and environmental factors on the performance of a wind farm is a crucial part of the solution to this challenge. The state of the art in optimal wind project planning includes wind farm layout design and, more recently, turbine selection. The scope of farm layout optimization and the predicted wind project performance however depend on several other critical site-scale factors, which are often not explicitly accounted for in the wind farm planning literature. These factors include: (i) the land area per MW installed (LAMI), and (ii) the nameplate capacity (in MW) of the farm. In this paper, we develop a framework to quantify and analyze the roles of these crucial design factors in optimal wind farm planning. A set of sample values of LAMI and installed farm capacities is first defined. For each sample farm definition, simultaneous optimization of the farm layout and turbine selection is performed to maximize the farm capacity factor (CF). To this end, we apply the recently developed Unrestricted Wind Farm Layout Optimization (UWFLO) method. The CF of the optimized farm is then represented as a function of the nameplate capacity and the LAMI, using response surface methodologies. The variation of the optimized CF with these site-scale factors is investigated for a representative wind site in North Dakota. It was found that a desirable CF value corresponds to a cutoff "LAMI vs nameplate capacity" curve; the identification of this cutoff curve is critical to the development of an economically viable wind energy project.
Multi-Objective Wind Farm Design: Exploring the Trade-off between Capacity Fa...
This document describes a bi-level framework for visualizing trade-offs in wind farm design between capacity factor and land use. The lower level uses multi-objective optimization to explore the trade-off for different nameplate capacities. The upper level fits curves to the Pareto fronts to parametrically represent the trade-off as a function of nameplate capacity. The framework was tested on a case study comparing layouts with 13 to 67 turbines.
The development of large scale wind farms that can produce energy at a cost comparable to that of other conventional energy resources presents significant challenges to today's wind energy industry. The consideration of the key design and environmental factors that influence the performance of a wind farm is a crucial part of the solution to this challenge. In this paper, we develop a methodology to account for the configuration of the farm land (length-to-breadth ratio and North-South-East-West orientation) within the scope of wind farm optimization. This approach appropriately captures the correlation between (i) the land configuration, (ii) the farm layout, and (iii) the selection of turbine types. Simultaneous optimization of the farm layout and turbine selection is performed to minimize the Cost of Energy (COE), for a set of sample land configurations. The optimized COE and farm efficiency are then represented as functions of the land aspect ratio and the land orientation. To this end, we apply a recently developed response surface method known as the Reliability-Based Hybrid Functions. The overall wind farm design methodology is applied to design a 25MW farm in North Dakota. This case study provides helpful insights into the influence of the land configuration on the optimum farm performance that can be obtained for a particular site.
This paper presents a new method (the Unrestricted Wind Farm Layout Optimization (UWFLO)) of arranging turbines in a wind farm to achieve maximum farm efficiency. The powers generated by individual turbines in a wind farm are dependent on each other, due to velocity deficits created by the wake effect. A standard analytical wake model has been used to account for the mutual influences of the turbines in a wind farm. A variable induction factor, dependent on the approaching wind velocity, estimates the velocity deficit across each turbine. Optimization is performed using a constrained Particle Swarm Optimization (PSO) algorithm. The model is validated against experimental data from a wind tunnel experiment on a scaled down wind farm. Reasonable agreement between the model and experimental results is obtained. A preliminary wind farm cost analysis is also performed to explore the effect of using turbines with different rotor diameters on the total power generation. The use of differing rotor diameters is observed to play an important role in improving the overall efficiency of a wind farm.
A Response Surface Based Wind Farm Cost (RS-WFC) model is developed to evaluate the economics of wind farms. The RS-WFC model is developed using Extended Radial Basis Functions (E-RBF) for onshore wind farms in the U.S. This model is then used to explore the influence of different design and economic parameters, including the number of turbines, rotor diameter, and labor cost, on the cost of a wind farm. The RS-WFC model is composed of three parts that estimate (i) the installation cost, (ii) the annual Operation and Maintenance (O&M) cost, and (iii) the total annual cost of a wind farm. The accuracy of the cost model is favorably established through comparison with pertinent commercial data. Moreover, the RS-WFC model is integrated with an analytical power generation model of a wind farm. A recently developed Unrestricted Wind Farm Layout Optimization (UWFLO) model is used to determine the power generated by a farm. The ratio of the total annual cost and the energy generated by the wind farm in one year (commonly known as the Cost of Energy, COE) is minimized in this paper. The results show that the COE could decrease significantly through layout optimization, yielding millions in annual cost savings.
Metaheuristics-based Optimal Reactive Power Management in Offshore Wind Farms...
The aim of the thesis is to optimally coordinate the reactive power sources in offshore wind farms in a predictive manner, based on the principle of minimizing the wind farm power losses as well as the variations of the transformer tap positions. First, an accurate Neural Network-based wind speed forecasting algorithm was developed in order to counteract the uncertainty of the wind; finally, the optimal management of the available reactive sources is tackled by a metaheuristics-based method. Two different cases were investigated: a far-offshore wind farm with an HVDC interconnection link, and the AC-connected Dutch wind farm BORSSELE.
Multi-Objective Wind Farm Optimization Simultaneously Optimizing COE and Land ...
This document summarizes research into optimizing the cost of energy (COE) and land footprint of wind farms under different land plot availability scenarios. The researchers use a multi-objective mixed-discrete particle swarm optimization algorithm to simultaneously minimize COE and land footprint per MW installed. They model wind farm energy production and costs, propose a layout-based land usage model, and define the multi-objective optimization problem with mixed integer variables and nonlinear constraints. A case study is presented to investigate how varying land plot availability impacts the optimal tradeoffs between COE and land footprint, and regulates the resulting optimal wind farm layout designs.
Wind farm development is an extremely complex process, most often driven by three important performance criteria: (i) annual energy production, (ii) lifetime costs, and (iii) net impact on surroundings. Generally, planning a commercial scale wind farm takes several years. Undesirable concept-to-installation delays are primarily attributed to the lack of an upfront understanding of how different factors collectively affect the overall performance of a wind farm. More specifically, it is necessary to understand the balance between the socio-economic, engineering, and environmental objectives at an early stage in the design process. This paper proposes a Wind Farm Tradeoff Visualization (WiFToV) framework that aims to develop first-of-its-kind generalized guidelines for the conceptual design of wind farms, especially at early stages of wind farm development. Two major performance objectives are considered in this work: (i) cost of energy (COE) and (ii) land area per MW installed (LAMI). The COE is estimated using the Wind Turbine Design Cost and Scaling Model (WTDCS) and the Annual Energy Production (AEP) model incorporated by the Unrestricted Wind Farm Layout Optimization (UWFLO) framework. The LAMI is estimated using an optimal-layout based land usage model, which is treated as a post-process of the wind farm layout optimization. A Multi-Objective Mixed-Discrete Particle Swarm Optimization (MO-MDPSO) algorithm is used to perform the bi-objective optimization, which simultaneously optimizes the location and types of turbines. Together with a novel Pareto translation technique, the proposed WiFToV framework allows the exploration of the trade-off between COE and LAMI, and their variations with respect to multiple values of nameplate capacity.
Cost Aware Expansion Planning with Renewable DGs using Particle Swarm Optimiz...
This paper is an attempt to develop an expansion-planning algorithm using metaheuristic algorithms. Expansion planning is always needed as the power demand keeps increasing; thus, for better expansion planning, metaheuristic methods are needed. Cost-efficient expansion planning is the goal of the proposed work. Recently, distributed generation has been widely researched for future energy needs, as it is pollution free and can be installed in rural places. In this paper, optimal distributed generation expansion planning with Particle Swarm Optimization (PSO) and the Cuckoo Search Algorithm (CSA) is used to identify the location, size, and type of distributed generator for the predicted future demand, with lowest cost as the objective. The objective function minimizes the total cost, including the installation and operating costs of the renewable DGs. MATLAB-based simulation using an M-file program is used for the implementation, and an Indian distribution system is used for testing the results.
This document discusses using design of experiments to optimize the energy consumption of a 3-story office building in New Delhi. It involves 4 phases: 1) An experimental setup identifies 26 design variables and runs 352 simulations. 2) ANOVA identifies significant variables affecting lighting/cooling energy. 3) Response surface models are developed and validated via Latin hypercube sampling. 4) Optimization techniques like genetic algorithms are applied to minimize lifecycle costs and energy use, identifying optimal designs. The methodology shows design of experiments can efficiently screen variables and create surrogates that optimize building design faster than simulation alone.
This document outlines a methodology for building energy simulation optimization that includes global sensitivity analysis (GSA), surrogate modeling (SM), and genetic algorithm (GA) optimization. A case study applies this methodology to a building model with 26 design variables. GSA identifies significant variables for lighting and cooling energy. SM builds response surface models to approximate simulation outputs based on significant variables. GA optimization then uses the SM to efficiently search for optimal designs. Validation shows SM predictions are within 10% error of simulations. The methodology enables faster optimization of building designs compared to directly coupling simulation with optimization.
Currently, the quality of the wind resource of a site is assessed using the Wind Power Density (WPD). This paper proposes to use a more credible metric, namely one we call the Wind Power Potential (WPP). While the former only uses wind speed information, the latter exploits both wind speed and wind direction distributions, and yields more credible estimates. The new measure of the quality of a wind resource, the Wind Power Potential Evaluation (WPPE) model, investigates the effect of the wind velocity distribution on the optimal net power generation of a farm. A bivariate normal distribution is used to characterize the stochastic variation of wind conditions (speed and direction). The net power generation for a particular farm size and installed capacity is maximized for different distributions of wind speed and wind direction, using the Unrestricted Wind Farm Layout Optimization (UWFLO) methodology. A response surface is constructed, using the recently developed Reliability Based Hybrid Functions (RBHF), to represent the computed maximum power generation as a function of the parameters of the wind velocity (speed and direction) distribution. To this end, for any farm site, we can (i) estimate the parameters of the wind velocity distribution using recorded wind data, and (ii) predict the maximum power generation for a specified farm size and capacity, using the developed response surface. The WPPE model is validated through recorded wind data at four differing stations obtained from the North Dakota Agricultural Weather Network (NDAWN). The results illustrate the variation of wind conditions and, subsequently, its influence on the quality of a wind resource.
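To make the speed-plus-direction idea above concrete, here is a small illustrative sketch: wind speed and direction are sampled from a bivariate normal distribution and a generic power curve is averaged over the samples. All parameter values and the power-curve form are placeholders, not values from the paper; for a single turbine only the speed marginal matters, whereas in the paper the direction distribution matters through wake interactions in the optimized layout.

```python
import numpy as np

def expected_power(mean, cov, v_in=3.5, v_rated=11.5, v_out=25.0,
                   p_rated=1.5e6, n_mc=100_000, seed=0):
    """Monte Carlo estimate of expected power under a bivariate normal
    distribution of wind speed (m/s) and direction (deg)."""
    rng = np.random.default_rng(seed)
    speed, direction = rng.multivariate_normal(mean, cov, size=n_mc).T
    speed = np.clip(speed, 0.0, None)
    # Generic power curve (cubic ramp between cut-in and rated speed);
    # direction is sampled too, but affects power only at the farm level,
    # via wake interactions, which this single-turbine sketch omits.
    p = np.where(speed < v_in, 0.0,
        np.where(speed < v_rated,
                 p_rated * ((speed - v_in) / (v_rated - v_in)) ** 3,
                 np.where(speed < v_out, p_rated, 0.0)))
    return float(p.mean())

# Example: mean speed 8 m/s, mean direction 270 deg, mild speed-direction coupling
print(expected_power(mean=[8.0, 270.0], cov=[[4.0, 5.0], [5.0, 400.0]]))
```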
1. Surrogate-based Particle Swarm Optimization for Large-scale Wind Farm Layout Design
Ali Mehmani*, Weiyang Tong*, Souma Chowdhury#, and Achille Messac#
* Syracuse University, Department of Mechanical and Aerospace Engineering
# Mississippi State University, Department of Aerospace Engineering
11th World Congress on Structural and Multidisciplinary Optimization
June 7 - 12, 2015, Sydney Australia
Research supported by the NSF Award: CMMI 1437746
2. Large-scale Wind Farm Layout Design – Overview
• Large utility-scale wind farms can involve more than 500 MW of installed capacity (consisting of hundreds of wind turbines).
• Such large utility-scale wind farms are central to the growth of the wind energy industry as an energy source that can compete with conventional energy resources (without financial incentives).
• Planning the layout of such a large-scale wind farm however poses substantial technical challenges – it entails a complex and extremely time-consuming design optimization process, which includes various mutually correlated factors and large-scale effects, especially the large number of turbine-wake interactions and the energy losses due to the wake effects.
3. Research Motivation
• Wind farm layout optimization (WFLO) is the process of optimizing the location of turbines in a wind farm site, with the objective of minimizing the average cost of energy.
• WFLO methods in the literature largely limit themselves to designing small-to-medium scale farms (< 100 turbines) in their case studies.
• The layout optimization of large-scale wind farms is a very high-dimensional and highly nonlinear optimization problem.
• Surrogate-based optimization (SBO) approaches can be applied to alleviate the computational burden in large-scale WFLO:
  – Direct surrogate modeling of the O(10^3)-dimensional problem is fraught with uncertainties.
  – The need to maintain adequate accuracy of the surrogate model during the optimization process (for a highly multi-modal problem) poses critical challenges.
4. Research Objectives
• Develop a design domain reduction strategy for reducing the very high-dimensional (O(10^3)) WFLO process to a low-dimensional design optimization process (O(10^1)).
• Implement an adaptive model refinement technique in surrogate-based optimization to achieve computational efficiency while promoting high accuracy of the end/optimum results.
5. Presentation Outline
• Layout Optimization of Large Scale Wind Farms
• Surrogate-based PSO for Large Scale WFLO
Domain Reduction through Novel Layout Mapping
Surrogate Model Selection
Adaptive Model Refinement
• Numerical Experiments: Results and Discussion
• Concluding remarks
PSO: Particle Swarm Optimization
6. Layout Optimization of Large Scale Wind Farms: Review
Current approaches to solving the large-scale layout optimization problem are mostly limited to quantifying the layout using the streamwise and the spanwise spacings between turbines (assuming that a specified number of turbines is uniformly distributed within pre-defined boundaries).
Fuglsang et al. [1] defined the large scale wind farm layout as a function of the
spacing between rows and columns.
Perez et al. [2] used the numbers of rows and columns, the streamwise and the
spanwise spacing between neighboring turbines, the turbine rotor diameter, and a
specified rectangular boundary to determine the large scale wind farm layout.
Wagner et al. [3] developed a framework for the large scale wind farm layout in
which the initial location of turbines is restricted to an array-like layout, and a
radial displacement around each turbine is allowed.
[1] P. Fuglsang and T. Kenneth, Technical report, Risoe National Lab, Roskilde (Denmark), 1998.
[2] G. Perez et al., Wind Energy Association Offshore, 2013.
[3] M. Wagner et al., European Wind Energy Association Annual Event, 2011.
7. Surrogate-based PSO of Large Scale WFLO
The proposed approach is capable of optimizing the location of turbines for large wind farms (e.g., 500-turbine scale wind farms) without prescribing the farm boundaries.
Step 1: Mapping of the layout
• The high-dimensional layout optimization problem (involving 2N variables for an N-turbine wind farm) is reduced to a 6-variable problem through a novel mapping strategy.
Step 2: Surrogate model selection
• A surrogate model is used to substitute the expensive analytical WF energy production model.
• The powerful Concurrent Surrogate Model Selection (COSMOS) framework is applied to identify the best surrogate model to represent the wind farm energy production as a function of the reduced variable vector.
Step 3: Surrogate-based optimization
• To accomplish a reliable optimum solution, the surrogate-based optimization (SBO) is performed by implementing the Adaptive Model Refinement (AMR) technique within Particle Swarm Optimization (PSO).
8. 8
Mapping of the Layout for a Large Scale Wind Farm
Design factors Lower bound Upper bound
rmax 5D 15D
smax 5D 15D
A − 20 20
B − 20 20
σ 0 1
Mapping
Wind Farm Layout
Wind Farm Layout
(X,Y)
rmax
smax
A
B
σ
φ
Input:
Output:
nput and out put st ruct ure of t he W ind Farm Layout M apping
Product ion M odel
on, first, the wind farm power generation model is adopted from the Unre-
arm Layout Optimization (UWFLO) framework [129] to estimate the total
The developed mapping strategy allows for both global siting (overall land
configuration) and local exploration (turbine micro siting).
rmax : maximum allowable spanwise spacing
smax : maximum allowable streamwise spacing
A, B : control parameters for defining the spacing of rows and columns
σ : normalized local radial displacement, which controls turbine micro-siting
φ : farm site orientation
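To make the factor-to-layout mapping concrete, here is a minimal Python sketch of one plausible interpretation. The slide does not give the exact functional forms through which A and B shape the row/column spacings, so the sigmoid spacing profiles, the 20x25 grid, and the displacement cap below are illustrative assumptions, not the paper's actual mapping.

```python
import numpy as np

def map_layout(r_max, s_max, A, B, sigma, phi, n_rows=20, n_cols=25,
               r_min=4.0, s_min=4.0, D=82.5, seed=0):
    """Hypothetical sketch of the 6-factor layout mapping.

    r_max, s_max : max spanwise/streamwise spacings (in rotor diameters)
    A, B         : control parameters shaping the row/column spacing profiles
    sigma        : normalized local radial displacement (micro-siting)
    phi          : farm site orientation (radians)
    Returns (x, y) coordinates in meters for n_rows * n_cols turbines.
    """
    rng = np.random.default_rng(seed)
    # Assumed spacing profiles: A and B skew the spacing of successive
    # rows/columns between the minimum (4D) and maximum allowable values.
    t_r = np.linspace(0.0, 1.0, n_rows)
    t_s = np.linspace(0.0, 1.0, n_cols)
    row_sp = (r_min + (r_max - r_min) / (1.0 + np.exp(-A * (t_r - 0.5)))) * D
    col_sp = (s_min + (s_max - s_min) / (1.0 + np.exp(-B * (t_s - 0.5)))) * D
    # Cumulative positions of rows and columns (global siting).
    yy = np.cumsum(row_sp) - row_sp[0]
    xx = np.cumsum(col_sp) - col_sp[0]
    X, Y = np.meshgrid(xx, yy)
    # Local micro-siting: random radial displacement scaled by sigma,
    # capped at a fraction of the minimum spacing (illustrative cap).
    radius = sigma * 0.5 * min(r_min, s_min) * D
    theta = rng.uniform(0.0, 2.0 * np.pi, X.shape)
    X = X + radius * np.cos(theta)
    Y = Y + radius * np.sin(theta)
    # Farm orientation: rotate the entire layout by phi.
    xr = X * np.cos(phi) - Y * np.sin(phi)
    yr = X * np.sin(phi) + Y * np.cos(phi)
    return xr.ravel(), yr.ravel()

x, y = map_layout(r_max=10, s_max=8, A=5, B=-3, sigma=0.5, phi=np.pi / 6)
print(x.shape)  # (500,) -> a 20 x 25 grid of turbines
```

Note how all 1000 coordinates are generated from just six numbers: the optimizer only ever searches over (rmax, smax, A, B, σ, φ).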
9. Surrogate Model Selection using COSMOS
The Concurrent Surrogate Model Selection (COSMOS) framework is applied to select the
best surrogate model to represent the average annual energy production of a large-scale
wind farm as a function of the mapping factors.
Diagram: training data (average annual energy production, computed from the probability
of wind speed and direction and the power generation model [1]) are fed into COSMOS,
which returns the best surrogate model combination (model type, kernel function,
hyper-parameters).
[1] Chowdhury et al. (2013)
10. Surrogate-based optimization
To reach a reliable optimum solution at a reasonable cost, surrogate-
based optimization is performed with Adaptive Model Refinement
(AMR).
AMR is a novel model-independent approach to refine the surrogate model
during optimization.
Decisions regarding when to refine the surrogate model are guided by
the Adaptive Model Switching (AMS) technique.
Decisions regarding the batch size of the samples to be added are
guided by the Predictive Estimation of Model Fidelity (PEMF).
11. Adaptive Model Refinement – Model Switching
The switching criterion is based on whether the predicted model uncertainty
dominates the uncertainty associated with the improvement of the fitness function
over the population.
pcr is the indicator of conservativeness (user controlled).
Model Switching: Hypothesis Testing
Figure: distribution of fitness-function (FF) improvement (KDE) vs. distribution of
model error (lognormal). Rejection of the test: do not REFINE the surrogate.
Acceptance of the test: REFINE the surrogate.
12. Adaptive Model Refinement – Batch Size Estimation
The inputs and outputs of PEMF in the AMR method are as follows:
• The desired fidelity is determined using the history of the fitness function improvement in the
optimization process.
• The desired batch size is estimated using the inverse of the regression functions used to represent
the variation of error with sample density in PEMF.
PEMF[1]
[1] Mehmani and Messac, SMO (2015)
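As a concrete illustration of this inversion, the following Python sketch solves the fitted error-vs-sample-density regression for the sample size that achieves a desired modal error, and returns the number of new samples needed. The function name and the fitted coefficients are hypothetical.

```python
import numpy as np

def pemf_batch_size(eps_desired, n_current, a0, a1, kind="multiplicative"):
    """Invert a PEMF error-vs-sample-density regression to estimate how
    many new samples are needed to reach a desired modal error.

    kind="multiplicative": E(n) = a0 * n**a1      -> n* = (eps/a0)**(1/a1)
    kind="exponential":    E(n) = a0 * exp(a1*n)  -> n* = ln(eps/a0) / a1
    (a1 < 0 in both cases, since the error decreases with sample density).
    """
    if kind == "multiplicative":
        n_star = (eps_desired / a0) ** (1.0 / a1)
    else:
        n_star = np.log(eps_desired / a0) / a1
    return max(0, int(np.ceil(n_star)) - n_current)

# Hypothetical regression fit: modal median error E(n) = 0.8 * n**-0.55
print(pemf_batch_size(eps_desired=0.02, n_current=200, a0=0.8, a1=-0.55))
```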
13. Numerical Experiments
Maximizing energy production of large-scale 500-turbine wind farm
A land-area constraint is defined based on the average land usage of
US commercial wind farms in 2009.
Assumptions:
1. The GE-1.5MW-XLE turbine is chosen as the specified turbine type in this problem.
2. The minimum streamwise (smin) and spanwise (rmin) spacings are set to the same value: 4D.
3. The wind data in this problem is obtained from the North Dakota Agricultural Weather Network
(NDAWN).
4. Initial sample size: N({Xin}) = 200. Model refinement is performed as long as the size of the
data set remains below N({X}) = 500.
14. Numerical Experiments: Results and Discussion
Figure: improvement of the model fidelity through the sequential model refinement
process using the AMR method.
The farm layout optimization is started using the best surrogate model selected using
COSMOS (Kriging model with Linear correlation function).
Computational cost of the energy production
model is reduced by a factor of 30.
To reach a reliable optimum solution at a reasonable cost, surrogate-based optimization is
performed with Adaptive Model Refinement (AMR).
15. Numerical Experiments: Results and Discussion
Figures: convergence history of the optimization using AMR (average annual energy
production vs. iteration), and the size of the data set used to refine (update) the
active surrogate model in the AMR approach.
While retaining an accuracy of within 0.05%, AMR improved the efficiency of the optimization process
by a remarkable factor of 26, when compared to optimization using the standard energy production model.
16. Concluding Remarks
This paper presented a new approach to optimizing large-scale (500-turbine) wind
farms at a reasonable computational cost while reaching reliable optimum
results (i.e., attractive cost-accuracy tradeoffs).
A novel stochastic mapping strategy allowed the reduction of the 1000-dimensional
layout problem to a 6-variable layout problem, which allows both global exploration
and local micro-siting flexibility.
The COSMOS framework was then applied to select the globally-best surrogate
model to represent the energy production of the wind farm as a fast-to-evaluate
function of the reduced set of layout variables.
Surrogate-based optimization was then performed using the Adaptive Model
Refinement approach, implemented within Particle Swarm Optimization.
The 500-turbine WFLO results indicated that “AMR+PSO” improved
the efficiency of the optimization process by a factor of 26, while
retaining an accuracy of within 0.05% (compared to the results of
WFLO that uses the original energy production model).
20. Predictive Estimation of Model Fidelity (PEMF)
PEMF error measure: (1) model-independent, (2) predictive, and
(3) minimally sensitive to outlier samples.
The relative absolute error is given by:

$$ e_{RAE}(X_i) = \begin{cases} \left| \dfrac{F(X_i) - \hat{F}(X_i)}{F(X_i)} \right|, & F(X_i) \neq 0 \\[2mm] \left| F(X_i) - \hat{F}(X_i) \right|, & F(X_i) = 0 \end{cases} \qquad (8) $$

where $F$ is the actual function value at $X_i$, given by high-fidelity simulation or
experimental data, and $\hat{F}$ is the function value estimated by the surrogate model.
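A direct transcription of Eq. (8) into Python (the function name is ours; the logic follows the equation):

```python
import numpy as np

def relative_absolute_error(F, F_hat):
    """RAE per Eq. (8): relative where F != 0, plain absolute where F == 0."""
    F = np.asarray(F, dtype=float)
    F_hat = np.asarray(F_hat, dtype=float)
    err = np.abs(F - F_hat)          # absolute error (used where F == 0)
    mask = F != 0.0
    err[mask] = np.abs((F[mask] - F_hat[mask]) / F[mask])
    return err

print(relative_absolute_error([2.0, 0.0], [1.8, 0.1]))  # [0.1 0.1]
```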
In the original PEMF method, the distribution functions to be fitted over the median
and the maximum errors at each iteration were selected using the chi-square
goodness-of-fit criterion [38]. The following distributions were considered:
lognormal, Gamma, Weibull, logistic, log-logistic, t-location scale, inverse Gaussian,
and generalized extreme value. However, in order to control the computational expense
of PEMF within model selection, only the lognormal distribution is used; this
distribution has been previously observed (from numerical experiments) to be
effective in general. The PDFs of the median and the maximum errors, $p_{med}$ and
$p_{max}$, can thus be expressed as

$$ p_{med} = \frac{1}{E_{med}\, s_{med} \sqrt{2\pi}} \exp\!\left( -\frac{(\ln E_{med} - \mu_{med})^2}{2 s_{med}^2} \right), \qquad p_{max} = \frac{1}{E_{max}\, s_{max} \sqrt{2\pi}} \exp\!\left( -\frac{(\ln E_{max} - \mu_{max})^2}{2 s_{max}^2} \right) \qquad (9) $$
• The PDFs of the median and the maximum errors (Eq. 9)
• The modal values of the median/maximum error at any iteration (Eq. 10)
• The inputs and outputs of the PEMF method (summarized below)
FOR each iteration t:
  Fit distributions of the median error over all Mt combinations.
  Determine the modes of the median and maximum error distributions, E^{mo,t}_med and E^{mo,t}_max.
END FOR
Construct a final surrogate using all N sample points. Use the estimated E^{mo,t}_med
and E^{mo,t}_max ∀t to quantify their variation with the number of training points (nt)
using regression functions.
RETURN: the modal values of the median and the maximum errors in the final surrogate,
e_med and e_max.

In the PEMF method, for a set of N sample points, intermediate surrogates are
constructed at each iteration, t, using heuristic subsets of nt training points
(called intermediate training points). These intermediate surrogates are then tested
over the corresponding remaining N − nt points (called intermediate test points). The
median error is then estimated for each of the Mt intermediate surrogates at that
iteration, and a parametric probability distribution is fitted to yield the modal
value, E^{mo,t}_med. The smart use of the modal value of the median error promotes a
monotonic variation of error with training-point density, unlike mean or root mean
squared error measures, which are highly susceptible to outliers [31]. This approach
gives PEMF an important advantage over conventional cross-validation-based error
measures, as illustrated by Mehmani et al.
In the above equations, $E_{med}$ and $E_{max}$ respectively represent the median and
the maximum relative absolute errors estimated over a heuristic subset of training
points at any given iteration in PEMF. The parameters $(\mu_{med}, s_{med})$ and
$(\mu_{max}, s_{max})$ represent the generic parameters of the lognormal distribution.
The modal values of the median and the maximum error at any iteration, t, can then be
expressed as

$$ E^{mo}_{med}\big|_t = \exp\!\left(\mu_{med} - s_{med}^2\right)\big|_t, \qquad E^{mo}_{max}\big|_t = \exp\!\left(\mu_{max} - s_{max}^2\right)\big|_t, \qquad \text{where } n_{t-1} < n_t \le N \qquad (10) $$
Once the history of the median and maximum errors at different sample sizes (< N) is
available, the variation of the modal values of the errors with sample density is
modeled using the multiplicative ($E = a_0 n^{a_1}$) or the exponential
($E = a_0 e^{a_1 n}$) regression functions. The choice of these regression functions
leverages the monotonically decreasing trend of the error with sample density.
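The following Python sketch illustrates this pipeline under stated assumptions (a synthetic error history and scipy's lognormal fit): fit a lognormal to the median errors at each iteration, take its mode per Eq. (10), and regress the modes against the number of training points in log-log space.

```python
import numpy as np
from scipy import stats

def modal_error(errors):
    """Fit a lognormal to a set of median errors and return its mode,
    exp(mu - s^2), per Eq. (10)."""
    s, _, scale = stats.lognorm.fit(errors, floc=0.0)  # scale = exp(mu)
    return scale * np.exp(-s ** 2)

def fit_vesd(n_train, modes):
    """Fit the multiplicative regression E(n) = a0 * n**a1 to the history
    of modal errors (linear least squares in log-log space)."""
    a1, log_a0 = np.polyfit(np.log(n_train), np.log(modes), 1)
    return np.exp(log_a0), a1

# Hypothetical PEMF history: median errors of Mt intermediate surrogates
# at four iterations with increasing numbers of training points.
rng = np.random.default_rng(1)
n_train = np.array([50, 100, 150, 200])
modes = [modal_error(rng.lognormal(np.log(0.8 * n ** -0.55), 0.3, size=40))
         for n in n_train]
a0, a1 = fit_vesd(n_train, modes)
print(f"E(n) ~ {a0:.2f} * n^{a1:.2f}")  # close to the seeded 0.8 * n^-0.55
```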
21. PEMF: Variation of Error with Sample Density (VESD)
Figure: predicted median and maximum errors (median of RAEs vs. number of training
points, t1–t4); the modes of the median and maximum error distributions, Mo_med and
Mo_max, are marked at each iteration (It. 1–It. 4).
22. Regional & Global Error Prediction: Comparison of PEMF with Cross-Validation
Figure: regional and global error predictions (relative error [%]) for Kriging, RBF,
and E-RBF surrogates of the Branin-Hoo function; each panel compares the PEMF estimate
(R_PEMF) with the cross-validation estimate (R_CV) for the mean/median error and the
maximum error.
The PEMF method is up to two orders of magnitude more accurate than the popular
leave-one-out cross-validation.
23. Predictive Estimation of Model Fidelity: Summary
PEMF vs. other measures:

Criterion                                        PEMF  CV  RMSE  AIC  BIC  Kriging RMSE
Model-independent                                ✓     ✓   ✗     ✗    ✗    ✗
Global error measure                             ✓     ✓   ✓     ✓    ✓    ✗
Local error measure                              ✓     ✗   ✗     ✗    ✗    ✓
Model uncertainty quantification                 ✓     ✓   ✓     ✗    ✗    ✗
Providing maximum error                          ✓     ✓   ✗     ✗    ✗    ✗
Providing variance of error                      ✓     ✗   ✗     ✗    ✗    ✓
Expected accuracy (if more resources available)  ✓     ✗   ✗     ✗    ✗    ✓
Function behavior with sample density            ✓     ✗   ✗     ✗    ✗    ✗
26. Concurrent Surrogate Model Selection (COSMOS)
We developed a novel three-level model selection framework called Concurrent
Surrogate Model Selection (COSMOS). This framework enables designers to identify a
globally best surrogate model for any given application.
In COSMOS, the selection criteria depend on the type of application and on user
preference. These criteria are predicted using PEMF.
27. COSMOS
COSMOS is uniquely formulated as a mixed-integer nonlinear programming (MINLP)
problem.
To escape the potentially high computational cost of the Cascaded technique, the
three-level automated model selection can also be performed by solving a single
(uniquely formulated) MINLP problem. The general form of this MINLP problem can be
expressed as

$$ \min_{m,\,k,\,\mathbf{u}} \; \left\{ E^{mo}_{med},\; E^{mo}_{max},\; E^{\sigma^2}_{med},\; E^{\sigma^2}_{max},\; E^{mo}_{med,\alpha} \right\} $$
$$ \text{subject to} \quad m \le N_M,\; m \in \mathbb{Z}_{>0}; \qquad k \le N_K(m),\; k \in \mathbb{Z}_{>0}; \qquad (5) $$
$$ \mathbf{u} = [\,u_{11}\; u_{12}\; \dots\; u_{21}\; u_{22}\; \dots\; u_{mk}\; \dots\; u_{N_M N_K}\,]; \qquad u^{min}_{mk} \le u_{mk} \le u^{max}_{mk} $$

where m is the integer design variable that denotes the model type; N_M is the number
of available model types; k is the integer design variable that denotes the basis (or
kernel) function; N_K(m) is the number of available basis functions for the m-th model
type; and u_{mk} are the continuous variables that represent the hyper-parameter
values for the k-th kernel of the m-th candidate surrogate. (The reformulated
single-integer-code version of this problem, Eq. 6, is presented together with the
candidate-pool decomposition below.)
28. Concurrent Surrogate Model Selection (COSMOS)
A new model selection approach, which simultaneously selects the
best model type, kernel function, and hyper-parameter values.

Types of model:         RBF, Kriging, E-RBF, SVR, QRS, …
Types of basis/kernel:  Linear, Gaussian, Multiquadric, Inverse multiquadric, …
Hyper-parameter(s):     Shape parameter in RBF; smoothness and width parameters in
                        Kriging; kernel parameter in SVM; …

• Searching for globally-competitive surrogate models
• Necessitates a model-independent surrogate model selection technique
A complex MINLP problem is formulated and solved.
29. COSMOS
To solve this optimization problem, the global pool of model-kernel candidates is
divided into P smaller pools of model-kernel candidates, based on the number of
constituent hyper-parameters in them. Optimal model selection is then performed
separately (separate MINLPs are run in parallel) for each class.
The p-th candidate pool, Φp, contains model-kernel combinations which include p
hyper-parameter(s). Subsequently, optimal model selection is performed separately (in
parallel) for each candidate pool. Each model-kernel combination/candidate within a
particular candidate pool (Φp) is then assigned a single unique integer code, as
opposed to the two separate integer codes given by Eq. 5. The candidate model-kernels
considered in this paper are listed in Table 1, where the integer code assigned to
each candidate is shown under its respective hyper-parameter class (Φp). For the Φ0
class of model-kernel combinations, PEMF is applied to all the candidates, followed
by the application of a Pareto filter to determine the final set of non-dominated or
Pareto-optimal surrogate models. For all Φp with p > 0, the MINLP problem (Eq. 5) is
reformulated as described in Eq. 6:
$$ \min_{z,\,\mathbf{u}} \; \left\{ E^{mo}_{med},\; E^{mo}_{max},\; E^{\sigma^2}_{med},\; E^{\sigma^2}_{max},\; E^{mo}_{med,\alpha} \right\} \qquad \text{subject to} \quad z \le N(\Phi_p),\; z \in \mathbb{Z}_{>0}; \quad 0 \le \mathbf{u} \le 1 \qquad (6) $$

In Eq. 6, z is the integer design variable that denotes the combined model-kernel
type; u is the vector of continuous variables that represent the hyper-parameter
values; and N(Φp) represents the size of the set Φp, which is the total number of
candidate model-kernel types available under the p-th hyper-parameter class (Φp). It
should be noted that a consistent range of (0, 1) is used for each hyper-parameter,
where the hyper-parameters are scaled based on their user-defined bounds.
Once the Pareto-optimal surrogate models for each p-class have been obtained, a Pareto
filter is applied to determine the globally optimal set of surrogate models.
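A small Python sketch of such a Pareto filter over the PEMF error criteria (all minimized); the dominance test is the standard one, and the sample data are hypothetical.

```python
import numpy as np

def pareto_filter(errors):
    """Return indices of non-dominated rows (all criteria minimized).
    Rows = candidate model-kernel combinations; columns = PEMF error measures."""
    E = np.asarray(errors, dtype=float)
    keep = []
    for i, row in enumerate(E):
        # Row i is dominated if some row is <= everywhere and < somewhere.
        dominated = np.any(np.all(E <= row, axis=1) & np.any(E < row, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (modal median error, modal max error) for three candidates
print(pareto_filter([[0.02, 0.10], [0.03, 0.08], [0.05, 0.20]]))  # [0, 1]
```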
30. Surrogate Model Candidates
Table 1: Candidate model-kernel combinations and their integer codes

Surrogate                  Kernel            Φ0  Φ1  Φ2  Hyper-parameter(s)
Radial Basis Function      Linear            1   -   -   -
                           Cubic             2   -   -   -
                           Gaussian          -   1   -   Shape parameter, σ
                           Multiquadric      -   2   -   Shape parameter, σ
Kriging                    Linear            -   3   -   Correlation parameter, θ
                           Exponential       -   4   -   Correlation parameter, θ
                           Gaussian          -   5   -   Correlation parameter, θ
                           Spherical         -   6   -   Correlation parameter, θ
Support Vector Regression  Linear            -   7   -   Penalty parameter, C
                           Gaussian          -   -   1   Kernel parameter, γ, and penalty parameter, C
                           Sigmoid           -   -   2   Kernel parameter, γ, and penalty parameter, C

Table 2: Range of hyper-parameters

Surrogate  Hyper-parameter             Lower bound  Upper bound
RBF        Shape parameter, σ          0.1          3.0
Kriging    Correlation parameter, θ    0.1          20
SVR        Kernel width parameter, γ   0.1          10
SVR        Penalty parameter, C        0.1          100
32. Adaptive Model Switching (AMS)
The AMS metric is a hypothesis test defined by a comparison between
(I) the distribution of the relative fitness-function improvement, and
(II) the distribution of the error associated with the model.
pcr regulates the trade-off between reliability and computational cost.
Figure: fitness-function improvement distribution (KDE) vs. model error distribution
(lognormal). Rejection of the test: do not refine the model. Acceptance of the test:
refine the model.
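The slide does not reproduce the exact test statistic, so the sketch below shows one plausible Monte Carlo implementation of the comparison: estimate P(model error ≥ fitness improvement) from the two distributions and refine when it exceeds pcr. The function name and decision-rule details are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy import stats

def ams_refine(ff_improvements, err_mu, err_sigma, p_cr=0.9, n_mc=20000, seed=0):
    """Sketch of an AMS-style decision: refine when the (lognormal) model-error
    distribution dominates the KDE of fitness-function improvements.

    ff_improvements   : relative fitness improvements over the population
    err_mu, err_sigma : lognormal parameters of the PEMF model error
    p_cr              : user-controlled conservativeness threshold
    """
    rng = np.random.default_rng(seed)
    kde = stats.gaussian_kde(ff_improvements)   # distribution (I), via KDE
    imp = kde.resample(n_mc).ravel()
    err = rng.lognormal(err_mu, err_sigma, n_mc)  # distribution (II)
    # Monte Carlo estimate of P(model error >= fitness improvement)
    p_dominate = np.mean(err >= imp)
    return p_dominate >= p_cr  # True -> refine the surrogate

improvements = np.abs(np.random.default_rng(2).normal(0.05, 0.02, 50))
print(ams_refine(improvements, err_mu=np.log(0.08), err_sigma=0.4))
```

A larger pcr makes refinement rarer (cheaper but riskier); a smaller pcr makes it more frequent (more reliable but more expensive), consistent with the trade-off noted above.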
33. Adaptive Model Refinement (AMR)
Figure: median of RAEs vs. number of training points (t1–t4), with the modal median
error (Mo_med) at each iteration; the PEMF error distribution (lognormal) is compared
against the fitness-function (FF) improvement distribution (KDE). Rejection of the
test: do not REFINE the surrogate. Acceptance of the test: REFINE the surrogate.
Model Refinement Based on PEMF
In the previous sub-section, the formulation and components of the AMR metric are
defined. Based on the AMR metric, model refinement will be performed at the t*-th
iteration under the condition that

$$ Q^{P}_{SM_{CURR}} \ge Q^{t=t^*}_{\Theta} $$

where SM_CURR represents the current surrogate model in the optimization process.
Model refinement is performed to efficiently improve the fidelity of the current
surrogate model to meet the "desired fidelity" for the upcoming iterations of SBO. In
this paper, the desired fidelity, ε*_mod, is determined using the history of the
fitness function improvement in the optimization process, which is given by:

$$ \varepsilon^{*}_{mod} = \left| 1 - \frac{Q^{t=t^*}_{\Theta} - Q^{t=t^*-\tau}_{\Theta}}{Q^{t=t^*}_{\Theta}} \right| \times \varepsilon^{CURR}_{mod} \qquad (11) $$

In Equation 11, ε^{CURR}_mod is the predicted modal error value associated with the
current surrogate model, and τ (∈ Z_{>0}) is a user-defined parameter that regulates
the occurrence of "surrogate model refinement" in the proposed SBO approach. Numerical
experiments exploring different values of this parameter indicated that 3 ≤ τ ≤ 5 can
be a suitable choice.
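A direct transcription of Eq. (11) into Python; the function name and the sample fitness history are hypothetical.

```python
def desired_fidelity(Q_hist, t_star, tau, eps_current):
    """Eq. (11): desired modal error for upcoming SBO iterations, computed
    from the fitness-function history Q at iterations t* and t* - tau."""
    q_now, q_past = Q_hist[t_star], Q_hist[t_star - tau]
    return abs(1.0 - (q_now - q_past) / q_now) * eps_current

# Hypothetical fitness history (e.g., normalized farm energy estimates)
Q = [0.0, 1.00, 1.08, 1.12, 1.15, 1.16]
print(desired_fidelity(Q, t_star=5, tau=4, eps_current=0.05))  # ~0.043
```

Note the intended behavior: when the fitness is still improving strongly, the required fidelity (and hence the refinement effort) is relaxed; as improvements flatten out, the desired error tightens toward the current model error.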
34. Adaptive Model Refinement (AMR)
AMR is a novel model-independent approach to refine the surrogate model during
optimization, with the objective of maintaining a desired level of fidelity and
robustness "where" and "when" needed.
Reconstruction of the model is performed by sequentially adding a batch of new samples
at any given iteration (of SBO [1-3]) when the refinement criterion is met.
[1] Forrester et al. (2008)
[2] Rai et al. (2006)
[3] Romero et al. (2011)
35. AMR: Location of Infill Points
The location of the new infill points in the input space is determined based on a
hypercube enclosing the promising current candidate designs in the optimization
process. The lower and upper bounds of the j-th dimension of this hypercube are given
by the lower and upper bounds of the entire set of current candidate solutions.
A distance-based (Euclidean distance) criterion is then applied to select the optimum
locations for the new infill points, as sketched below.
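A minimal Python sketch of this infill strategy, assuming a greedy max-min Euclidean-distance selection over a random pool drawn inside the hypercube (the pool-based greedy search is our assumption; the slide only specifies the hypercube and a distance-based criterion).

```python
import numpy as np
from scipy.spatial.distance import cdist

def select_infill(candidates, existing, batch_size, n_pool=2000, seed=0):
    """Sketch of AMR infill sampling: draw a pool inside the hypercube
    enclosing the promising candidate designs, then greedily pick points
    that maximize the minimum Euclidean distance to all existing samples."""
    rng = np.random.default_rng(seed)
    lo, hi = candidates.min(axis=0), candidates.max(axis=0)  # hypercube bounds
    pool = rng.uniform(lo, hi, size=(n_pool, candidates.shape[1]))
    chosen = []
    samples = existing.copy()
    for _ in range(batch_size):
        d_min = cdist(pool, samples).min(axis=1)  # distance-based criterion
        best = int(np.argmax(d_min))
        chosen.append(pool[best])
        samples = np.vstack([samples, pool[best]])
    return np.array(chosen)

rng = np.random.default_rng(3)
promising = rng.uniform(0.3, 0.7, size=(20, 6))  # e.g., the 6 mapping variables
training = rng.uniform(0.0, 1.0, size=(200, 6))
print(select_infill(promising, training, batch_size=5).shape)  # (5, 6)
```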
37. Large-scale Wind Farm Layout Design: Energy Production Model
The average annual energy production is computed from the probability of wind speed
and direction (estimated by the Multivariate and Multimodal Wind Distribution model)
and the power generation model [1]; the farm power is the sum, over the number of
turbines, of the power generated by each turbine j.
For any given incoming wind speed and direction, the power generated by the individual
turbines is determined by the power generation model developed by [1].
[1] Chowdhury et al. (2013)
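A minimal sketch of this estimate over a discretized wind rose. The bin probabilities and farm powers below are hypothetical; in the actual framework the farm power per bin comes from the UWFLO power generation model (including wake effects), and the hours-per-year factor is our assumption about the units.

```python
import numpy as np

def average_annual_energy(p_joint, farm_power, hours=8760.0):
    """Sketch of the AEP estimate: expected farm power under the joint wind
    speed-direction distribution, scaled by the hours in a year.

    p_joint    : probabilities p(U_k, theta_k) over discretized wind bins
    farm_power : farm power per bin (sum over all turbines, incl. wake losses)
    """
    return hours * np.sum(p_joint * farm_power)

# Hypothetical discretization: 3 speed bins x 2 direction bins (flattened)
p = np.array([0.20, 0.15, 0.10, 0.25, 0.20, 0.10])             # sums to 1
P_farm = np.array([120.0, 300.0, 450.0, 150.0, 280.0, 400.0])  # MW per bin
print(average_annual_energy(p, P_farm), "MWh/year")
```

This sum over wind bins, repeated for every turbine of every candidate layout, is the factorial cost driver that motivates the surrogate substitution described earlier.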
38. Large-scale Wind Farm Layout Design: Assumptions
The following assumptions are applied to the numerical experiments:
1. The GE-1.5MW-XLE turbine is chosen as the specified turbine type [130]. Its features are:
   Rated power (Pr0): 1.5 MW
   Rated wind speed (Ur0): 11.5 m/s
   Cut-in wind speed (Uin0): 3.5 m/s
   Cut-out wind speed (Uout0): 20.0 m/s
   Rotor diameter (D): 82.5 m
   Hub height (H): 80.0 m
2. The minimum streamwise (smin) and spanwise (rmin) spacings are set to the same value: 4D.
3. The wind data in this problem is obtained from the North Dakota Agricultural
Weather Network (NDAWN) [113]: daily averaged wind speed and direction measured at the
Baker station between the years 2000 and 2009. An onshore farm scenario is assumed,
and the ambient turbulence (10%) is constant over the farm site.
Details of the NDAWN station at Baker, ND [113]:
   Location: Baker, ND
   Period of record: 01/01/2000 to 12/31/2009
   Latitude: 48.167
   Longitude: -99.648
   Elevation: 512 m
   Measurement height: 3 m
Figures: Baker station setup [113]; wind rose diagram for the site at Baker.