The document provides biographical and research information about Amir Parnianifard. It summarizes his educational background, current affiliations with Glasgow College and the University of Electronic Science and Technology of China, and contact information. It then lists his main research interests as engineering design optimization, surrogate modeling, uncertainty quantification, and other topics in computational intelligence and optimal control. The document provides an overview of differential evolution algorithms and includes MATLAB code examples for implementing differential evolution optimization and other related modeling techniques.
1. Amir Parnianifard
PhD, PMP®, IEEE Senior Member
Glasgow College, University of Electronic Science and
Technology of China, Chengdu, Sichuan, P.R.China
611731.
E-Mail: amir.p@uestc.edu.cn ; amir.parnianifard@glasgow.ac.uk
Computational Intelligence
Assisted Engineering Design
Optimization (using MATLAB®)
2. Research Interest
✓ Engineering Design Optimization
✓ Surrogate Modelling
✓ Uncertainty Quantification
✓ Robust Design
✓ Digital-Twins
✓ Computational Intelligence
✓ Optimal Control
✓ Robot Trajectory Planning
✓ Wireless Sensor Networks.
Note: All MATLAB® code provided in this
presentation has been written and
augmented by the presenter (Amir
Parnianifard).
3. Engineering Optimization
Optimization is the process of finding values of
the variables that minimize or maximize the
objective function while satisfying the
constraints.
Min or Max: f(X)
Subject to: g_j(X) ≤ 0,  j = 1, 2, …, J
✓ Linear versus Nonlinear Programming
✓ Continuous Optimization versus Discrete Optimization.
✓ Unconstrained Optimization versus Constrained Optimization.
✓ Single or Many Objectives.
✓ Deterministic Optimization versus Stochastic Optimization.
4. Eng. Optimization using Evolutionary Algorithms
➢ In computational intelligence, an Evolutionary
Algorithm (EA) is a subset of evolutionary
computation, a generic population-based
metaheuristic optimization algorithm. An EA uses
mechanisms inspired by biological evolution, such as
reproduction, mutation, recombination, and
selection.
➢ The Genetic Algorithm (GA) is the most popular type of
EA: a search heuristic motivated by Charles Darwin's
theory of natural selection [57], [58].
➢ Differential Evolution (DE) is based on vector
differences and is therefore primarily suited for
numerical optimization problems.
5. Differential Evolution Algorithm
The algorithmic procedure of DE can be summarized as follows:
1. Randomly select the N individuals uniformly on the intervals [x_j^L, x_j^U], where x_j^L and x_j^U are the lower and upper bounds of the decision parameters.
2. Each of the N individuals undergoes mutation, recombination, and selection. Mutation can expand the search space.
3. For a given target individual u (with index i), randomly select three other individuals r1, r2 and r3 such that the indices i, r1, r2 and r3 are distinct.
4. Add the weighted difference of two of the vectors to the third, v = r1 + α (r2 − r3), where α, called the mutation factor, is a constant from [0, 2]; v is called the donor vector.
5. Components of the donor vector replace the corresponding components of the selected individual u wherever rand() ≤ CR, where CR is a constant crossover probability in U(0, 1).
6. If the fitness of the updated (trial) individual is lower than that of the original individual u, then u is removed from the set of individuals and replaced by the updated one.
7. Mutation, recombination, and selection proceed until a stopping condition is met.
6. DE Optimizer (MATLAB Code)
Input: the fitness function of the problem, the lower and upper limits of the design variables, the initial population size,
and the maximum number of iterations of the algorithm.
Output: the optimum values of the design variables and the corresponding fitness value (at the optimum point).
function [Opt_var,Opt_cost] = DEO(objFcn,LoB,UpB,NoPopulation,MaxIteration)
% Differential Evolution (DE) optimizer
%% Parameter adjustment
nVar = length(LoB);
CR = 0.5;                                        % crossover rate
FF = 1 - exp(-norm(0.05*(UpB-LoB)))*rand();      % mutation (scale) factor
% Initial population (uniform random within the bounds)
individuals = (UpB-LoB).*rand(NoPopulation,nVar) + LoB;
%% Optimization loop
for iter = 1:MaxIteration
    for i = 1:NoPopulation
        ind0 = individuals;
        ind0(i,:) = [];                          % all individuals except the current one
        X = individuals(i,:);                    % target vector
        Cost_X = objFcn(X);
        cost(i,1) = Cost_X;
        var_X(i,:) = X;
        idx = randperm(NoPopulation-1,3);        % three distinct individuals (excluding the current one)
        X1 = ind0(idx(1),:);
        X2 = ind0(idx(2),:);
        X3 = ind0(idx(3),:);
        d_rand = randperm(nVar,1);               % dimension where crossover is forced
        for d = 1:nVar
            if d == d_rand || rand <= CR
                u(d) = X1(d) + FF*rand()*(X2(d) - X3(d));    % mutation (donor component)
                if u(d) < LoB(d) || u(d) > UpB(d)
                    u(d) = (UpB(d) - LoB(d))/2 + LoB(d);     % repair to the mid-range value
                end
            else
                u(d) = X(d);                     % inherit from the target vector
            end
        end
        if objFcn(u) < Cost_X                    % greedy selection
            individuals(i,:) = u;
        end
    end
    [best_cost(iter),nr] = min(cost);
    best_var(iter,:) = var_X(nr,:);
end
[Opt_cost,nr] = min(best_cost);
Opt_var = best_var(nr,:);
end
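As a usage sketch (the two-variable sphere function, bounds, population size, and iteration count below are illustrative assumptions, not values from the slides):

% Hypothetical usage example: minimize the sphere function f(x) = sum(x.^2)
objFcn = @(x) sum(x.^2);            % fitness function to minimize
LoB = [-5 -5];                      % lower bounds (row vector, one entry per variable)
UpB = [ 5  5];                      % upper bounds
[xOpt, fOpt] = DEO(objFcn, LoB, UpB, 30, 200);
fprintf('Optimum found: x = [%g %g], f = %g\n', xOpt(1), xOpt(2), fOpt);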
7. Metamodeling (Learning From Data)
❑ Shortcoming: high computational cost when using
evolutionary algorithms for simulation-optimization
• In solving complex real-world engineering optimization
problems, an evolutionary algorithm may require thousands
of function evaluations to provide a satisfactory solution,
while each evaluation may require hours of computer
run-time.
❑ Less-expensive methods to handle complex simulation
modeling and complex problems
• To overcome this computational cost, researchers have applied
sampling-based learning methods such as Artificial Neural
Networks, Radial Basis Functions (RBF), Kriging (Gaussian
Process), and polynomial regression models. These methods
can 'learn' the problem behavior and approximate the
function value. Such approximation models can speed up
the function evaluation as well as estimate the function
value with acceptable accuracy.
8. Metamodel Based Simulation-Optimization
Step 1: Choose an experimental design for generating design points.
Step 2: Run the original simulation model (function evaluation) at each
generated design point and collect the corresponding responses.
Step 3: Fit the metamodel over the observed input-output dataset.
Step 4: Use the fitted metamodel (cheap) instead of the original simulation
model (expensive) for optimization and sensitivity analysis; a minimal end-to-end sketch is given below.
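The following MATLAB® sketch strings the four steps together using the LHS_Design, RBF_Sur, and DEO functions given later in these slides; the analytic test function standing in for the expensive simulation, the bounds, and the sample sizes are illustrative assumptions only.

% Minimal end-to-end sketch of metamodel-based simulation-optimization
expensiveSim = @(x) (1 - x(1)).^2 + 100*(x(2) - x(1).^2).^2;   % stand-in "expensive" model
LoB = [-2 -2];  UpB = [2 2];
% Step 1: experimental design
X_train = LHS_Design(30, LoB, UpB, 'maximin');
% Step 2: evaluate the original model at each design point
Y_train = zeros(size(X_train,1), 1);
for s = 1:size(X_train,1)
    Y_train(s) = expensiveSim(X_train(s,:));
end
% Step 3: the surrogate is fitted and evaluated inside RBF_Sur for any new point x0
surrogate = @(x0) RBF_Sur(X_train, Y_train, x0);
% Step 4: optimize the cheap surrogate instead of the expensive model
[xOpt, fOpt] = DEO(surrogate, LoB, UpB, 30, 100);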
9. Experimental Designs
The idea of classical DOE
is to obtain as much
information as possible from
a limited number of
experiments.
A good experimental design
has to be space-filling and
non-collapsing, rather than
concentrating points on the boundary.
10. Experimental Designs
• One of the common methods for designing simulation experiments is LHS.
• LHS was first introduced by McKay et al. (1979).
• In general, for n input variables, m sample points are produced randomly
within m intervals or scenarios (with equal probability). As a particular
instance, an LHS design for m = 4 and n = 2 is shown here.
Latin Hypercube Sampling (LHS)
[Figure: an example LHS design with m = 4 sample points on axes x1 and x2]
1. In LHS, each input range is divided into m subranges (intervals) of equal probability, numbered from
1 to m.
2. LHS places one sample in each of the m intervals, at a random value between the lower and upper bounds of that
interval, such that each of the integers 1, 2, …, m appears exactly once in each row and each column of the design space.
Note that, within each cell of the design, the exact input value can be sampled from any distribution, e.g., uniform, Gaussian, etc.
Three common choices are available to ensure appropriate space filling of the sample points in an LHS design:
1. Minimax: this design tries to minimize the maximum distance in the design space between any location and its nearest
design point.
2. Maximin: this design attempts to maximize the minimum distance between any two design points.
3. Desired correlation function: inspired by Iman & Conover (1982), in the case of non-independent multivariate input
variables, a desired correlation matrix can be used to produce distribution-free sample points in LHS.
11. Latin Hypercube Sampling (LHS)
function [LHS_Samp] = LHS_Design(NSamp,MinRangeSamp,MaxRangeSamp,Type)
% Latin Hypercube Sampling design, rescaled to the given variable ranges
[~,NVar] = size(MinRangeSamp);
switch Type
    case 'maximin'
        Train_lhs = lhsdesign(NSamp,NVar,'criterion','maximin');
    case 'none'
        Train_lhs = lhsdesign(NSamp,NVar,'criterion','none');
    case 'correlation'
        Train_lhs = lhsdesign(NSamp,NVar,'criterion','correlation');
end
LHS_Samp = [];
for sa = 1:NSamp
    for va = 1:NVar
        L = MinRangeSamp(1,va);
        U = MaxRangeSamp(1,va);
        X = Train_lhs(sa,va)*(U-L) + L;   % rescale unit-hypercube sample to [L,U]
        XR(1,va) = X;
    end
    LHS_Samp = [LHS_Samp;XR];             % append the scaled sample point
end
end
Note: the MATLAB® function "lhsdesign"
is provided by the Statistics and Machine
Learning Toolbox; make sure that toolbox
is installed.
Input: number of sample points, lower
and upper bounds of the design variables,
and the type of LHS criterion.
Output: designed sample points scaled to
the lower/upper ranges.
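As a quick usage sketch (the sample count and bounds below are illustrative assumptions):

% Hypothetical usage: a 10-point maximin LHS design for two variables
LoB = [0 10];                                    % lower bounds of the two design variables
UpB = [1 50];                                    % upper bounds
X_design = LHS_Design(10, LoB, UpB, 'maximin');  % returns a 10 x 2 matrix of sample points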
12. Metamodel 1: Polynomial Regression (PR)
Commonly, the main purpose of PR is to approximate the
true response function, based on a Taylor series expansion
around a set of design points.

First order:
ŷ = f̂(X) = β̂0 + Σ_{i=1}^{m} β̂_i x_i + ε

Second order:
ŷ = f̂(X) = β̂0 + Σ_{i=1}^{k} β̂_i x_i + Σ_{i=1}^{k} β̂_ii x_i² + Σ_{i<j} β̂_ij x_i x_j + ε

In matrix form, y = Xβ + ε. The least-squares estimators must satisfy ∂L/∂β = 0; thus the least-squares estimator of β is obtained by
β̂ = (X′X)⁻¹ X′y
where
y = (y_1, y_2, …, y_n)′ is the vector of observed responses,
X is the n × (m + 1) regression matrix whose s-th row is (1, x_s1, x_s2, …, x_sm),
β = (β_0, β_1, …, β_m)′ is the vector of coefficients, and ε = (ε_1, ε_2, …, ε_n)′ is the error vector.
Note: To fit different orders of PR, the MATLAB® toolbox below is suggested:
https://www.mathworks.com/matlabcentral/fileexchange/34765-polyfitn
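For illustration, a minimal base-MATLAB sketch of fitting a second-order PR surrogate in two variables by ordinary least squares; the synthetic training data and the query point are assumptions made purely for demonstration.

% Second-order polynomial regression via ordinary least squares (illustrative data)
X_train = rand(20,2);                                % 20 design points, 2 variables
Y_train = 3 + 2*X_train(:,1) - X_train(:,2) + 0.5*X_train(:,1).*X_train(:,2) + 0.01*randn(20,1);
x1 = X_train(:,1);  x2 = X_train(:,2);
Xmat = [ones(size(x1)) x1 x2 x1.^2 x2.^2 x1.*x2];    % regression matrix [1, x1, x2, x1^2, x2^2, x1*x2]
beta = (Xmat'*Xmat) \ (Xmat'*Y_train);               % least-squares estimator (X'X)^-1 X'y
x0 = [0.3 0.7];                                      % new point at which to predict
y_hat = [1 x0(1) x0(2) x0(1)^2 x0(2)^2 x0(1)*x0(2)] * beta;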
13. Metamodel 2: Radial Basis Function (RBF)
The RBF employs a linear combination of radially symmetric
basis functions of the Euclidean distances to compute the
approximation of the response function (an interpolation model).

Gaussian basis function: h(x) = exp(−γ ‖x − x_n‖²); each training pair (x_n, y_n) influences h(x) through the distance ‖x − x_n‖.

Standard form of the RBF model:
f̂(x) = Σ_{n=1}^{N} w_n exp(−γ ‖x − x_n‖²)

The learning algorithm finds w_1, …, w_N by minimizing the sum of squared
errors of the model over the training data points (x_1, y_1), …, (x_N, y_N):
Minimize SSE = Σ_{n=1}^{N} ( f(x_n) − f̂(x_n) )²

Since the RBF interpolates the sample points, the SSE is expected to be ≈ 0, which gives
N equations in the N unknowns: f(x_n) = f̂(x_n) for n = 1, …, N.

[Figure: distances d_i = ‖X0 − C_i‖ between a new point X0 and the basis-function centres C_i]
14. Radial Basis Function (RBF)
φ is the N × N matrix of pairwise basis-function evaluations:
φ = [ exp(−γ‖x_1 − x_1‖²)  …  exp(−γ‖x_1 − x_N‖²)
      exp(−γ‖x_2 − x_1‖²)  …  exp(−γ‖x_2 − x_N‖²)
      ⋮                        ⋮
      exp(−γ‖x_N − x_1‖²)  …  exp(−γ‖x_N − x_N‖²) ]

w = (w_1, w_2, …, w_N)′,  y = (y_1, y_2, …, y_N)′

φ · w = y.  If φ is invertible, then w = φ⁻¹ · y
("exact interpolation")
MATLAB® code
function Ps = RBF_Sur(X_S,R_S,x0)
% Gaussian RBF interpolation: predict the response(s) at point x0
% X_S : training inputs (N x nVar), R_S : training responses (N x Nf)
Nf = size(R_S,2);
for f = 1:Nf
    Y = R_S(:,f);
    N = size(X_S,1);
    phi = zeros(N,N);
    gamma = 0.05;                                % kernel width parameter
    for i = 1:N
        xi = X_S(i,:);
        for j = 1:N
            xj = X_S(j,:);
            phi(i,j) = exp(-gamma*norm(xi-xj)^2);   % Gaussian basis function
        end
    end
    W = phi\Y;                                   % solve phi*W = Y ("exact interpolation")
    for s = 1:N
        xs = X_S(s,:);
        hx(s) = W(s)*exp(-gamma*norm(x0-xs)^2);  % weighted basis at the new point
    end
    Ps(f) = sum(hx);                             % predicted response for output f
end
end
MATLAB code for RBF:
Input: input data (designed training sample points), output data (responses
collected from the original model at the designed sample points), and x0 (the new
sample point at which the response is to be predicted with the RBF).
Output: the approximated response at point x0.
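A hypothetical usage sketch (the training design, the analytic test response, and the query point below are illustrative assumptions):

% Fit/evaluate the RBF surrogate at a new point
X_S = LHS_Design(15, [0 0], [1 1], 'maximin');   % 15 training points in [0,1]^2
R_S = sin(2*pi*X_S(:,1)) + X_S(:,2).^2;          % illustrative single-output responses
x0  = [0.4 0.6];                                 % new point at which to predict
y0  = RBF_Sur(X_S, R_S, x0);                     % approximated response at x0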
15. Metamodel Validation
R-square index: the R² coefficient is defined as
R² = 1 − [ Σ_{j=1}^{m} (y_j − ŷ_j)² ] / [ Σ_{j=1}^{m} (y_j − ȳ)² ] = 1 − (Mean Square Error / Variance)

Relative Maximum Absolute Error (RMAE)
While a larger magnitude of R-square indicates better overall accuracy, a smaller RMAE indicates a
smaller local error.
RMAE = max( |y_1 − ŷ_1|, |y_2 − ŷ_2|, …, |y_m − ŷ_m| ) / sqrt( (1/m) Σ_{j=1}^{m} (y_j − ŷ_j)² )
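A minimal MATLAB® sketch computing both metrics from observed and metamodel-predicted responses; the numerical values below are illustrative only.

% Validation metrics from observed (y) and predicted (y_hat) responses at m test points
y     = [1.2; 3.4; 2.2; 5.1; 4.0];
y_hat = [1.1; 3.6; 2.0; 5.3; 3.8];
m = numel(y);
R2   = 1 - sum((y - y_hat).^2) / sum((y - mean(y)).^2);        % overall accuracy
RMAE = max(abs(y - y_hat)) / sqrt((1/m)*sum((y - y_hat).^2));  % local error, per the definition above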
Cross-validation
The cross-validation method can be used when collecting new data or further information about the simulation model is costly.
Cross-validation uses existing data and does not require re-running the expensive simulation. The method is also
called leave-k-out cross-validation (i.e., in each run, k sample points are removed from the
initial set of training sample points). The leave-one-out cross-validation (k = 1), which is the most
popular variant, is briefly explained in the following.
16. Leave One-Out Cross-Validation
Step 1: Delete the s-th input combination and its corresponding output from the complete set of l
combinations (s = 1, 2, …, l).
Step 2: Fit a new metamodel using the l − 1 remaining rows.
Step 3: Predict the output of the left-out point with the metamodel obtained in Step 2.
Step 4: Repeat the preceding three steps for all input combinations
(sample points), computing the l predictions ŷ_s.
Step 5: The predictions can then be compared with the associated outputs of the original
simulation model. The overall comparison can be made through a scatter plot, or by eyeball, to
decide whether or not the metamodel is acceptable; a minimal sketch follows.
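A minimal MATLAB® sketch of leave-one-out cross-validation using the RBF_Sur function from the earlier slides; X_train and Y_train stand for the training design and responses from the DOE step (the names are chosen here for illustration).

% Leave-one-out cross-validation of the RBF surrogate
l = size(X_train,1);
y_loo = zeros(l,1);
for s = 1:l
    idx = [1:s-1, s+1:l];                                   % leave the s-th point out
    y_loo(s) = RBF_Sur(X_train(idx,:), Y_train(idx), X_train(s,:));
end
scatter(Y_train, y_loo); xlabel('simulated y'); ylabel('LOO-predicted y');   % eyeball comparison
R2_loo = 1 - sum((Y_train - y_loo).^2) / sum((Y_train - mean(Y_train)).^2);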
17. Uncertainty in Model
In practice, most engineering problems are affected by
different sources of variation. One of the main challenges of SO
is to address uncertainty in the model, through a variety of approaches
such as robust optimization, stochastic programming, random
dynamic programming, and fuzzy programming. Uncertainty is
undeniable; it affects the accuracy of simulation results
by introducing variability into them. Under uncertain conditions, robust
SO allows us to define the optimal set point of the input variables
while keeping the output as close as possible to the ideal point and
with the least possible variation.
18. Metamodel-Based Robust Simulation Optimization
Robust simulation-optimization is about solving a simulation model with uncertain data in a computationally
tractable way. The main goal of the robustness strategy is to find the setting of the input factors that
achieves the desired output goal while remaining insensitive to the variability of the uncertain parameters.
Step 1: Conduct a crossed-array experimental design
Cross two sets of sample points with dimension s × r, where (s = 1, 2, …, l) and (r = 1, 2, …, m): first,
the design factors (X = 1, 2, …, n_x) as an inner array (s × n_x); second, the uncertain variables (Z =
1, 2, …, n_z) as an outer array (r × n_z). In this context, the LHS method is used to design the sample points
in both the inner and the outer array sets.
Due to the stochastic nature of the simulation model, repeated runs with the
same combination of input variables lead to different outputs. Furthermore, each input combination (s =
1, 2, …, l) is repeated m times (r = 1, 2, …, m), where m is the number of scenarios of the uncertain variables.
If Y = (y_1, y_2, …, y_l) is the vector of simulation outputs over the l input combinations, then the mean and variance
for each input combination are computed.
Algorithmic Steps (Inspired by https://doi.org/10.1007/s00158-023-03493-0 )
19. Metamodel-Based Robust Simulation Optimization
Step 2: Run simulation model
Run the simulation model for each crossed (𝑠 × 𝑟)
sample point and collect simulation outputs (𝑦𝑠𝑟).
Step 3: Compute the mean and variance for each input
combination
Given that s = (1, 2, …, l) indexes the sample points
(input combinations), each input combination s is
repeated m times according to the uncertainty scenarios.
Therefore, Y is the l × m matrix of simulation outputs,
and the mean ȳ_s and variance σ_s² of the simulation model
are computed by
ȳ_s = Σ_{r=1}^{m} w_r · y_sr,   for (s = 1, 2, …, l)
σ_s² = Σ_{r=1}^{m} w_r · y_sr² − ( Σ_{r=1}^{m} w_r · y_sr )²,   for (s = 1, 2, …, l)
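A minimal sketch of Steps 1-3, assuming equal scenario weights w_r = 1/m; the stand-in simulation function, bounds, and array sizes are illustrative assumptions (LHS_Design is the function from the earlier slide).

% Crossed-array design and per-combination mean/variance (illustrative)
sim = @(x,z) (x(1)-1).^2 + x(2).^2 + 0.5*z(1)*x(1) + z(2);   % stand-in stochastic model
Xin  = LHS_Design(20, [-2 -2], [2 2], 'maximin');            % inner array (l x nx), design variables
Zout = LHS_Design(8,  [-1 -1], [1 1], 'maximin');            % outer array (m x nz), uncertain variables
l = size(Xin,1);  m = size(Zout,1);  w = ones(m,1)/m;        % equal scenario weights
Y = zeros(l,m);
for s = 1:l
    for r = 1:m
        Y(s,r) = sim(Xin(s,:), Zout(r,:));                   % run the model for each crossed point
    end
end
y_mean = Y*w;                                                % weighted mean per input combination
y_var  = (Y.^2)*w - (Y*w).^2;                                % weighted variance per input combination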
20. Metamodel-Based Robust Simulation Optimization
Robust dual surface model:   Min or Max: Mean,   Subject to: Std ≤ ε
Step 4: Fit the input/output data with a surrogate model
Fit one surrogate model (e.g., Kriging, RBF, or PR) to the mean and another one to the variance of the output. Here,
Kriging, as a modern global surrogate model, is used.
Step 5: Validate both surrogate models using leave-one-out cross-validation
Step 6: Estimate the Pareto frontier by obtaining robust optimal solutions via the dual response
surface methodology
The optimal result depends on the value of ε, which can be chosen by the decision maker. Varying this
magnitude traces out the Pareto frontier (also called Pareto optimal efficiency), providing a trade-off
between the mean and the variance of the model.
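A minimal sketch of Step 6, substituting the RBF_Sur surrogate from the earlier slides for Kriging and handling the Std constraint with a simple penalty inside the DEO optimizer; Xin, y_mean, y_var, the bounds, and the grid of ε values are assumptions carried over from the illustrative sketch after Step 3.

% Dual response surface optimization: Min Mean subject to Std <= eps (penalty approach)
meanSur = @(x) RBF_Sur(Xin, y_mean, x);                      % surrogate of the mean
stdSur  = @(x) sqrt(max(RBF_Sur(Xin, y_var, x), 0));         % surrogate of the standard deviation
LoB = [-2 -2];  UpB = [2 2];
eps_grid = linspace(0.1, 1.0, 10);                           % candidate epsilon values (decision maker's choice)
pareto = zeros(numel(eps_grid), 2);
for k = 1:numel(eps_grid)
    epsk = eps_grid(k);
    penalized = @(x) meanSur(x) + 1e3*max(stdSur(x) - epsk, 0);   % penalize Std > eps
    [xr, ~] = DEO(penalized, LoB, UpB, 30, 100);
    pareto(k,:) = [meanSur(xr), stdSur(xr)];                 % one point of the Pareto frontier
end
plot(pareto(:,2), pareto(:,1), 'o-'); xlabel('Std'); ylabel('Mean');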
21. Metamodel-Based Robust
Simulation Optimization
Start → DOE for design variables (inner array) and DOE for uncertain variables (outer array) → Run the simulation model and collect the outputs → Compute the mean and the variance for each input combination → Fit one metamodel over the mean and another over the variance → Cross-validate both metamodels → Accept both metamodels? (Yes / No) → Robust optimization model → Estimate the Pareto frontier → Finish
Flowchart of the proposed approach
23. Introduction
Optimal Placement of Antennas
(Maximize coverage, minimize
overlapping, minimize energy
consumption)
Wireless Sensor Networks
(Localization, positioning of the sensor
nodes)
[Figure: a transmitter at (x_t, y_t) and a receiver at (x_r, y_r) in the x-y plane, separated by the distance d_{r,t}, within a region D]
Optimization of Packet Transmission
(Scheduling and Node Parent Selection, Traffic Aware
Scheduling Algorithm, IEEE 802.15.4e time slotted
channel hopping)
Engineering Application (Examples in Electronics and Electrical Engineering)
24. Introduction
Robotics
(Optimal trajectory path design, real-time
intelligent control, and optimization of
robot performance.)
Electronics Circuit Design
(Evolutionary electronics, analog circuits,
digital electronics, logic optimization)
Engineering Application (Examples in Electronics and Electrical Engineering)
25. 1. Geometry Optimization
2. Uncertainty Quantification
Engineering Application (Example for heat transfer of nanofluids flow)
26. References
Sample Publications by Presenter (Amir Parnianifard) in the scope of this presentation
Parnianifard, A., Azfanizam, A. S., Ariffin, M. K. A., & Ismail, M. I. S. (2018). Kriging-Assisted Robust Black-Box Simulation Optimization in Direct
Speed Control of DC Motor Under Uncertainty. IEEE Transactions on Magnetics, 54(7), 1–10.
Parnianifard, Amir, Azfanizam, A. S., Ariffin, M. K. A., & Ismail, M. I. S. (2018). An overview on robust design hybrid metamodeling: Advanced
methodology in process optimization under uncertainty. International Journal of Industrial Engineering Computations, 9(1), 1–32.
Parnianifard, Amir, Azfanizam, A. S., Ariffin, M. K. A., & Ismail, M. I. S. (2019a). Comparative study of metamodeling and sampling design for
expensive and semi-expensive simulation models under uncertainty. SIMULATION, 96(1), 89–110.
Parnianifard, Amir, Azfanizam, A. S., Ariffin, M. K. A., & Ismail, M. I. S. (2019b). Crossing weighted uncertainty scenarios assisted distribution-
free metamodel-based robust simulation optimization. Engineering with Computers, 36(1), 139–150.
Parnianifard, Amir, Chaudhary, S., Mumtaz, S., Wuttisittikulkij, L., & Imran, M. A. (2023). Expedited surrogate-based quantification of
engineering tolerances using a modified polynomial regression. Structural and Multidisciplinary Optimization, 66(3), 61.
Parnianifard, Amir, Mumtaz, S., Chaudhary, S., Imran, M. A., & Wuttisittikulkij, L. (2022). A data driven approach in less expensive robust
transmitting coverage and power optimization. Scientific Reports, 12, 17725.
Parnianifard, Amir, Rezaie, V., Chaudhary, S., Imran, M. A., & Wuttisittikulkij, L. (2021). New Adaptive Surrogate-Based Approach Combined
Swarm Optimizer Assisted Less Tuning Cost of Dynamic Production-Inventory Control System. IEEE Access, 9, 144054–144066.
Parnianifard, Amir, Zemouche, A., Chancharoen, R., Imran, M. A., & Wuttisittikulkij, L. (2020). Robust optimal design of FOPID controller for five
bar linkage robot in a Cyber-Physical System: A new simulation-optimization approach. PLOS ONE, 15(11), e0242613.
27. References
Books
Blondin, M. J., & Pardalos, P. M. (2019). Computational Intelligence and Optimization Methods for Control Engineering (Vol. 150). Retrieved
from http://link.springer.com/10.1007/978-3-030-25446-9
Dellino, G., & Meloni, C. (2015). Uncertainty Management in Simulation-Optimization of Complex Systems. Boston, MA, USA: Springer.
Kanevski, M., Pozdnoukhov, A., & Timonin, V. (2009). Machine Learning for Spatial Environmental Data. EPFL press.
Kleijnen, J. P. C. (2020). Chapter 10 Kriging: methods and applications. In Model order reduction (Vol. 1, pp. 1–466).
Kleijnen, J. P. C. (2015). Design and analysis of simulation experiments. Springer Proceedings in Mathematics & Statistics (Vol. 231). Springer,
Cham.
Myers, R. H., Montgomery, D. C., & Anderson-Cook, C. M. (2016). Response Surface Methodology: Process and Product Optimization Using Designed
Experiments, Fourth Edition. John Wiley & Sons.
28. References
Review Papers:
Amaran, S., Sahinidis, N. V., Sharda, B., & Bury, S. J. (2016). Simulation optimization: a review of algorithms and applications. Annals of
Operations Research, 240(1), 351–380.
Jin, R., Du, X., & Chen, W. (2003). The use of metamodeling techniques for optimization under uncertainty. Structural and Multidisciplinary
Optimization (Vol. 25).
Kleijnen, J. P. C. (2007). Kriging Metamodeling in Simulation: A Review. Tilburg University, CentER Discussion Paper, Vol. 2007-13,
Operations Research.
Li, Y. F., Ng, S. H., Xie, M., & Goh, T. N. (2010). A systematic comparison of metamodeling techniques for simulation optimization in Decision
Support Systems. Applied Soft Computing, 10(4), 1257–1273.
Soares do Amaral, J. V., Montevechi, J. A. B., Miranda, R. de C., & Junior, W. T. de S. (2022). Metamodel-based simulation optimization: A
systematic literature review. Simulation Modelling Practice and Theory, 114(August 2021).
Wang, G., & Shan, S. (2007). Review of Metamodeling Techniques in Support of Engineering Design Optimization. Journal of Mechanical Design,
129(4), 370–380.