Spatial mode division de-multiplexing of optical signals has many real-world applications, such as quantum computing and both classical and quantum optical communication. In this context, it is crucial to develop devices that can efficiently sort optical signals according to the optical mode they belong to and route them along different paths. Depending on the modes selected, this problem can be very hard to tackle. Recently, researchers have proposed using multi-objective evolutionary algorithms (MOEAs), and NSGA-II in particular, combined with Linkage Learning (LL) to automate the design of mode sorters. However, given the very large search scale of the problem, the existing evolutionary-based solutions have a very slow convergence rate. In this paper, we propose a novel approach for mode sorter design that combines (1) stochastic linkage learning, (2) the adaptive geometry estimation-based MOEA (AGE-MOEA-II), and (3) an adaptive mutation operator. Our experiments with two and three objectives (beams) show that our approach converges faster and produces better mode sorters (closer to the ideal solutions) than the state-of-the-art approach. A direct comparison with the vanilla NSGA-II and AGE-MOEA-II further confirms the importance of adopting LL in this domain.
An Improved Pareto Front Modeling Algorithm for Large-scale Many-Objective Op... (Annibale Panichella)
A key idea in many-objective optimization is to approximate the optimal Pareto front using a set of representative non-dominated solutions. The produced solution set should be close to the optimal front (convergence) and well-diversified (diversity). Recent studies have shown that measuring both convergence and diversity depends on the shape (or curvature) of the Pareto front. In recent years, researchers have proposed evolutionary algorithms that model the shape of the non-dominated front to define environmental selection strategies that adapt to the underlying geometry. This paper proposes a novel method for non-dominated front modeling based on the Newton-Raphson iterative method for root finding. Second, we compute the distance (diversity) between each pair of non-dominated solutions using geodesics, which generalize distance to Riemannian manifolds (curved topological spaces). We introduce an evolutionary algorithm within the Adaptive Geometry Estimation based MOEA (AGE-MOEA) framework, which we call AGE-MOEA-II. Computational experiments with 17 problems from the WFG and SMOP benchmarks show that AGE-MOEA-II outperforms its predecessor AGE-MOEA as well as other state-of-the-art many-objective algorithms, i.e., NSGA-III, MOEA/D, VaEA, and LMEA.
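The front-modeling step above can be illustrated with a short sketch. Assuming (as in the AGE-MOEA family) that the normalized front is modeled as an L_p unit sphere, the exponent p can be estimated from a representative non-dominated point with a one-dimensional Newton-Raphson iteration. This is an illustrative reconstruction under that assumption, not the authors' implementation:

```python
import math

def estimate_curvature(point, p0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson estimate of the exponent p such that the
    normalized point lies on the L_p unit sphere: sum(x_i ** p) = 1.
    Here f(p) = sum(x_i ** p) - 1 and f'(p) = sum(x_i ** p * ln x_i)."""
    p = p0
    for _ in range(max_iter):
        f = sum(x ** p for x in point) - 1.0
        df = sum((x ** p) * math.log(x) for x in point if x > 0.0)
        if abs(df) < 1e-15:  # flat derivative: stop rather than divide
            break
        p_next = p - f / df
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# A point on the unit circle, i.e. an L_2 (spherical) front in 2-D:
x = (math.cos(0.7), math.sin(0.7))
p_hat = estimate_curvature(x)  # converges to 2
```

Once p is known, geodesic (rather than Euclidean) distances on the estimated manifold can be used as the diversity measure, as the abstract describes.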
A beamforming comparative study of least mean square, genetic algorithm and g... (TELKOMNIKA JOURNAL)
A multipath environment limits the optimal use of wireless networks. Smart antennas and beamforming algorithms help subscribers obtain a higher-gain signal and better directivity, and reduce the power consumed by users and mobile base stations, by adjusting the weights of each element in the antenna array so as to reduce interference and direct the main beam toward the wanted user. In this paper, the performance of three beamforming algorithms in a multipath environment, in terms of directivity and side-lobe level reduction, is studied and compared: least mean square (LMS), genetic algorithm (GA), and grey wolf optimization (GWO). The simulation results show that the LMS algorithm yields the best directivity, followed by GWO, while the GA achieves the largest side-lobe level reduction, followed by the LMS algorithm in second place.
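The LMS algorithm compared above adapts the array weights sample by sample toward the minimum mean-square error. A minimal numpy sketch of the textbook complex LMS update follows; the toy broadside scenario, step size, and noise level are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_beamformer(X, d, mu=0.005):
    """Least-mean-square weight adaptation for an N-element array.
    X: (snapshots, N) complex array snapshots; d: desired signal."""
    n_snap, n_el = X.shape
    w = np.zeros(n_el, dtype=complex)
    for k in range(n_snap):
        x = X[k]
        e = d[k] - np.vdot(w, x)      # error = desired - w^H x
        w = w + mu * np.conj(e) * x   # stochastic-gradient update
    return w

# Toy example: 4-element array, desired signal from broadside
# (steering vector of ones), plus complex Gaussian noise.
n_el, n_snap = 4, 2000
s = np.exp(1j * 2 * np.pi * 0.05 * np.arange(n_snap))  # desired signal
a = np.ones(n_el)                                      # broadside steering
X = np.outer(s, a) + 0.1 * (rng.standard_normal((n_snap, n_el))
                            + 1j * rng.standard_normal((n_snap, n_el)))
w = lms_beamformer(X, s)
```

After convergence the array output `X @ np.conj(w)` closely tracks the desired signal, which is the behaviour the directivity comparison in the paper builds on.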
SENSITIVITY ANALYSIS IN A LIDAR-CAMERA CALIBRATION (cscpconf)
In this paper, a variability analysis was performed on the calibration methodology between a multi-camera system and a LiDAR (Light Detection and Ranging) laser sensor. Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in both a local and a global way. The local approach analyses the output variance with respect to the input, varying only one parameter at a time. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated over the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes with two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are reported for each sensor, as is the sensitivity ratio in the laser-camera calibration data.
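The Sobol technique mentioned above apportions the output variance among the inputs. A minimal Monte Carlo sketch of the first-order Sobol indices is shown below; it assumes independent uniform inputs and a simple additive toy model, and does not reproduce the paper's calibration model or the FAST variant:

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_first_order(f, n_vars, n_samples=100_000):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol
    indices S_i = V_i / V, assuming independent U(0,1) inputs."""
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze input i from the B sample
        # Saltelli-style estimator of the first-order effect of input i
        Vi = np.mean(fB * (f(ABi) - fA))
        indices.append(Vi / var)
    return np.array(indices)

# Additive toy model f = 2*x1 + x2: analytically S1 = 0.8, S2 = 0.2
model = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1]
S = sobol_first_order(model, 2)
```

In the calibration study, `f` would be the calibration pipeline and the inputs the intrinsic parameters and sensor noise, sampled over their full variation ranges.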
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class... (Waqas Tariq)
Input selection is one of the most substantial components of classification algorithms for data mining and pattern recognition problems, since even the best classifier will perform badly if the inputs are not selected well. Big data and computational complexity are the main causes of poor performance and low accuracy for classical classifiers. In other words, the complexity of a classification method is inversely proportional to its efficiency. For this purpose, two hybrid classifiers have been developed by cascading both type-1 and type-2 fuzzy c-means clustering with a classifier. In the proposed classifiers, a large number of data points is reduced by fuzzy c-means clustering before being fed to the classification algorithm as inputs. The aim of this study is to investigate the effect of fuzzy clustering on well-known and useful classifiers such as artificial neural networks (ANN) and support vector machines (SVM). The positive effects of the proposed algorithms are then investigated on different data sets.
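The data-reduction step the hybrid classifiers rely on can be sketched with the standard (type-1) fuzzy c-means updates, which replace many data points by a few cluster prototypes before classification. The snippet is an illustrative sketch with a simple farthest-point initialization, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_c_means(X, centers, m=2.0, n_iter=50, eps=1e-9):
    """Standard Bezdek fuzzy c-means: alternate membership and
    center updates with fuzzifier m > 1."""
    for _ in range(n_iter):
        # Distances from every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        # Membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Center update: fuzzy-weighted means of the data
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return centers, U

# Two well-separated blobs: 100 points reduce to 2 prototypes,
# which would then be fed to the ANN/SVM instead of the raw data.
X = np.concatenate([rng.normal(0.0, 0.1, (50, 2)),
                    rng.normal(5.0, 0.1, (50, 2))])
init = np.stack([X[0], X[np.argmax(np.linalg.norm(X - X[0], axis=1))]])
centers, U = fuzzy_c_means(X, init)
```

The type-2 variant in the paper would additionally model uncertainty in the memberships; that extension is not shown here.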
Assessing Error Bound For Dominant Point Detection (CSCJournals)
This paper compares three error bounds that can be used to make dominant point detection methods non-parametric. The three error bounds are based on the error in slope estimation due to digitization. However, each of the three methods takes a different approach to calculating the error bounds. This results in slightly different natures of the three methods and slightly different values. The impact of these error bounds is studied in the context of the non-parametric version of the widely used RDP method [1, 2] of dominant point detection. It is seen that the recently derived error bound (the third error bound in this paper), which depends on both the length and the slope of the line segment, provides the most balanced dominant point detection results for a variety of curves.
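For reference, the RDP method that these error bounds make non-parametric recursively keeps the point farthest from the current chord. A compact sketch is shown below; the derived error bounds, which would replace the user-chosen `epsilon`, are not reproduced:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification: keep the point
    farthest from the chord if it exceeds epsilon, and recurse."""
    def point_line_dist(p, a, b):
        # Perpendicular distance from p to the line through a and b
        ax, ay = a; bx, by = b; px, py = p
        num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
        den = math.hypot(bx - ax, by - ay)
        return num / den if den else math.hypot(px - ax, py - ay)

    # Find the interior point farthest from the chord
    d_max, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], points[0], points[-1])
        if d > d_max:
            d_max, idx = d, i
    if d_max > epsilon:
        # Split at the farthest point and simplify both halves
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A near-right-angle polyline reduces to its three corner points
poly = [(0, 0), (1, 0.02), (2, 0), (2, 1), (2, 2)]
simplified = rdp(poly, epsilon=0.1)
```

Making the method non-parametric amounts to computing `epsilon` from a digitization error bound instead of asking the user for it.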
Multiuser Detection with Decision-feedback Detectors and PIC in MC-CDMA System (TELKOMNIKA JOURNAL)
In this paper we propose an iterative parallel decision-feedback (P-DF) receiver combined with parallel interference cancellation (PIC) for multicarrier code division multiple access (MC-CDMA) systems in a Rayleigh fading channel (COST 207). First, the most widely used detection techniques, minimum mean-squared error (MMSE), maximum likelihood (ML), and PIC, were investigated in order to compare their bit error rate (BER) performance with parallel decision-feedback detection (P-DFD). An MMSE DF detector that employs parallel decision feedback (MMSE-P-DFD) is considered and shows almost the same BER performance as MMSE and ML, which give better results than the other techniques. Second, an iterative method based on the multi-stage P-DFD technique (parallel DFD with two stages) and PIC is proposed to improve the performance of the system.
Channel Equalization of WCDMA Downlink System Using Finite Length MMSE-DFE (IOSR Journals)
Abstract: The performance of a WCDMA system deteriorates in a multipath fading environment. Fading destroys orthogonality and is responsible for multiple access interference (MAI). Though the conventional rake receiver provides reasonable performance in the WCDMA downlink thanks to path diversity, it does not restore orthogonality. A linear equalizer restores orthogonality and suppresses MAI, but it is not efficient, since its performance depends on the spectral characteristics of the channel. To overcome this, a minimum mean square error decision feedback equalizer (MMSE-DFE) with a linear, anticausal feedforward filter, a causal feedback filter, and a simple detector is proposed in this paper. The filter taps of the finite-length DFE are derived using Cholesky factorization theory and are capable of suppressing noise, intersymbol interference (ISI), and MAI. This paper describes the WCDMA downlink system using the finite-length MMSE-DFE and takes into consideration the effects of interference, which include additive white Gaussian noise, multipath fading, ISI, and MAI. Furthermore, the performance is compared with the conventional rake receiver and MMSE, and simulation results are shown. Keywords: MMSE, MMSE-DFE, rake receiver, WCDMA
Real interpolation method for transfer function approximation of distributed ... (TELKOMNIKA JOURNAL)
A distributed parameter system (DPS) is one of the most complex systems in control theory. The transfer function of a DPS may contain rational, nonlinear, and irrational components, which makes studying it difficult in both the time and frequency domains. In this paper, a systematic approach is proposed for linearizing a DPS. The approach is based on the real interpolation method (RIM), approximating the transfer function of the DPS by a rational-order transfer function. The results of the numerical examples show that the method is simple, computationally efficient, and flexible.
A NEW ALGORITHM FOR SOLVING FULLY FUZZY BI-LEVEL QUADRATIC PROGRAMMING PROBLEMS (orajjournal)
This paper is concerned with a new method to find the fuzzy optimal solution of fully fuzzy bi-level non-linear (quadratic) programming (FFBLQP) problems, where all the coefficients and decision variables of both objective functions and the constraints are triangular fuzzy numbers (TFNs). The new method is based on decomposing the given problem into a bi-level problem with three crisp quadratic objective functions and bounded-variable constraints. In order to obtain a fuzzy optimal solution of the FFBLQP problems, the concept of the tolerance membership function is used to develop a fuzzy max-min decision model that generates a satisfactory fuzzy solution, in which the upper-level decision maker (ULDM) specifies his/her objective functions and decisions with possible tolerances described by membership functions of fuzzy set theory. Then, the lower-level decision maker (LLDM) uses this preference information from the ULDM and solves his/her problem subject to the ULDM's restrictions. Finally, the decomposition method is illustrated by a numerical example.
Path Loss Prediction by Robust Regression Methods (ijceronline)
An optimal general type-2 fuzzy controller for Urban Traffic Network (ISA Interchange)
An urban traffic network model is illustrated by state charts and an object diagram. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the modified backtracking search algorithm (MBSA) is used to control traffic signal scheduling and phase succession, so as to guarantee a smooth flow of traffic with the least waiting times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers.
The differential evolution (DE) algorithm has been applied as a powerful tool to find optimum switching angles for selective harmonic elimination pulse width modulation (SHEPWM) inverters. However, DE's performance is very dependent on its control parameters. Conventional DE generally uses either a trial-and-error mechanism or a tuning technique to determine appropriate values of the control parameters. The disadvantage of this process is that it is very time consuming. In this paper, an adaptive control parameter is proposed in order to speed up the DE algorithm in optimizing SHEPWM switching angles precisely. The proposed adaptive control parameter is proven to enhance the convergence of the DE algorithm without requiring initial guesses. The results for both negative and positive modulation index (M) also indicate that the proposed adaptive DE is superior to the conventional DE in generating SHEPWM switching patterns.
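The idea of adapting a DE control parameter during the run can be sketched as follows. This toy example shrinks the scale factor F linearly over the generations while minimizing a simple sphere function; the linear schedule is a stand-in for the paper's adaptive rule, and the SHEPWM angle constraints are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_de(obj, bounds, pop_size=30, n_gen=200):
    """DE/rand/1/bin with a generation-dependent scale factor F
    (illustrative adaptation schedule, not the paper's exact rule)."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([obj(x) for x in pop])
    for g in range(n_gen):
        F = 0.9 - 0.5 * g / n_gen  # explore early, exploit late
        CR = 0.9
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # force at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fit[i]:             # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# Sphere function stands in for the SHEPWM harmonic objective
best, best_f = adaptive_de(lambda x: float(np.sum(x ** 2)),
                           bounds=[(-5, 5)] * 3)
```

In the SHEPWM setting, `obj` would instead measure the residual selected harmonics for a candidate set of switching angles.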
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVES (Zac Darcy)
Polygonal approximation plays a vital role in ubiquitous applications such as multimedia, geographic information, and object recognition. An extensive number of polygonal approximation techniques for digital planar curves have been proposed over the last decade, but there are no survey papers on recently proposed techniques. A polygon is a collection of edges and vertices, and objects are represented using edges and vertices or contour points (i.e., a polygon). Polygonal approximation represents the object with fewer dominant points (fewer edges and vertices), which reduces computation time and memory usage. This paper presents a comparative study of polygonal approximation techniques for digital planar curves with respect to their computation and efficiency.
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL... (IJITCA Journal)
In this paper, we apply a dynamic model for manoeuvring targets in the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In our proposed approach, a color distribution model is used to detect changes in the target's model, and the approach controls the deformation of the target's model: if the deformation is larger than a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name our proposed method Deformation Detection Particle Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real airshow videos. The comparison results show that the basic SIR-PF algorithm is not able to track manoeuvring targets when rotation or scaling occurs in the target's model, whereas DDPF updates the target's model in those cases. Thus, the proposed approach is able to track manoeuvring targets more efficiently and accurately.
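The baseline SIR particle filter that DDPF extends can be sketched in a few lines for a single 1-D target. The colour-model deformation test and GNN association are omitted, and all motion/measurement parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def sir_step(particles, weights, z, motion_std=0.5, meas_std=1.0):
    """One sampling-importance-resampling step of a 1-D particle
    filter with a random-walk motion model."""
    # Predict: propagate particles through the motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by the Gaussian likelihood of measurement z
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample (systematic) to fight weight degeneracy
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# Track a stationary target at x = 3 from 20 noisy measurements
particles = rng.normal(0.0, 5.0, 1000)
weights = np.full(1000, 1.0 / 1000)
true_pos = 3.0
for _ in range(20):
    z = true_pos + rng.normal(0.0, 1.0)
    particles, weights = sir_step(particles, weights, z)
estimate = float(np.mean(particles))
```

DDPF adds, on top of this loop, a model-deformation check per target and re-association of measurements to targets via GNN.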
Geoid height determination is one of the major problems of geodesy, as the use of satellite techniques in geodesy keeps increasing. Geoid heights can be determined using different methods according to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty assessment have enabled us to develop more precise models for our requirements. In this study, how to construct the best fuzzy model is examined. For this purpose, three different data sets were taken and two different kinds of fuzzy model (two inputs, one output; and three inputs, one output) were formed for the calculation of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights obtained by GPS/levelling methods, and the fuzzy approximation models were tested on the test points.
Integrating Fuzzy Dematel and SMAA-2 for Maintenance Expenses (inventionjournals)
The majority of the allowances transferred to public institutions are mostly spent on buying new equipment, materials, and facilities, and on their maintenance and repair. Some public sectors establish their own plants in order to reduce maintenance and repair costs and gain the ability to perform these activities themselves. However, developing technology and the variety of materials make repair and maintenance activities more expensive for them. In this study, the vital criteria for a public institution are determined. Using the Fuzzy DEMATEL (Decision Making Trial And Evaluation Laboratory) method, the degree of importance is identified with two defuzzification methods, and the alternatives are ranked using SMAA-2 (Stochastic Multicriteria Acceptability Analysis) in three scenarios. The results show that different defuzzification methods change the order of preferences.
Effect of Feature Selection on Gene Expression Datasets Classification Accura... (IJECEIAES)
Feature selection attracts researchers who deal with machine learning and data mining. It consists of selecting the variables that have the greatest impact on the dataset classification and discarding the rest. This dimensionality reduction allows classifiers to be faster and more accurate. This paper examines the effect of feature selection on the accuracy of classifiers widely used in the literature. These classifiers are compared on three real datasets which are pre-processed with feature selection methods. More than 9% improvement in classification accuracy is observed, and k-means appears to be the classifier most sensitive to feature selection.
A New Approach to Linear Estimation Problem in Multiuser Massive MIMO Systems (Radita Apriana)
A novel approach for solving the linear estimation problem in multi-user massive MIMO systems is proposed. In this approach, the difficulty of matrix inversion is attributed to the incomplete definition of the dot product. The general definition of the dot product implies that the columns of the channel matrix are always orthogonal whereas, in practice, they may not be. If the latter information can be incorporated into the dot product, then the unknowns can be computed directly from projections without inverting the channel matrix. By doing so, the proposed method is able to achieve an exact solution with a 25% reduction in computational complexity compared to the QR method. The proposed method is stable, offers the extra flexibility of computing any single unknown, and can be implemented in just twelve lines of code.
An Interactive Decomposition Algorithm for Two-Level Large Scale Linear Multi... (IJERA Editor)
This paper extends the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method to solve two-level large-scale linear multiobjective optimization problems with stochastic parameters in the right-hand side of the constraints, (TL-LSLMOP-SP)rhs, of block angular structure. In order to obtain a compromise (satisfactory) solution to the (TL-LSLMOP-SP)rhs of block angular structure using the proposed TOPSIS method, modified formulas for the distance function from the positive ideal solution (PIS) and the distance function from the negative ideal solution (NIS) are proposed and modeled to include all the objective functions of the two levels. In each level, the d_p-metric is used as the measure of "closeness", and a k-dimensional objective space is reduced to a two-dimensional objective space by a first-order compromise procedure. The membership functions of fuzzy set theory are used to represent the satisfaction level for both criteria. A single-objective programming problem is obtained by using the max-min operator for the second-order compromise operation. A decomposition algorithm for generating a compromise (satisfactory) solution through the TOPSIS approach is provided, where the first-level decision maker (FLDM) is asked to specify the relative importance of the objectives. Finally, an illustrative numerical example is given to clarify the main results developed in the paper.
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL...IJITCA Journal
In this paper, we apply a dynamic model for manoeuvring targets in the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In our approach, a color distribution model is used to detect changes in the target's model, thereby controlling its deformation. If the deformation of the target's model is larger than a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name our method Deformation Detection Particle Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real airshow videos. The comparison shows that the basic SIR-PF algorithm cannot track manoeuvring targets when rotation or scaling occurs in the target's model, whereas DDPF updates the target's model in these cases. Thus, the proposed approach is able to track manoeuvring targets more efficiently and accurately.
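The deformation test described above can be sketched as a histogram comparison. The Bhattacharyya distance, the 3-bin histograms, and the 0.3 threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the model-update rule: compare a reference color histogram
# against the current one and refresh the model only when the deformation
# (here, the Bhattacharyya distance) exceeds a threshold.

import math

def bhattacharyya_distance(h1, h2):
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    return math.sqrt(max(0.0, 1.0 - bc))

def maybe_update_model(reference, current, threshold=0.3):
    """Return (new_reference, updated?) following the deformation test."""
    if bhattacharyya_distance(reference, current) > threshold:
        return current, True       # target deformed: adopt the new appearance
    return reference, False        # appearance stable: keep the old model

ref = [0.5, 0.3, 0.2]              # normalized 3-bin color histogram
similar = [0.48, 0.32, 0.20]
rotated = [0.1, 0.2, 0.7]          # appearance change, e.g. after rotation

print(maybe_update_model(ref, similar)[1])  # False
print(maybe_update_model(ref, rotated)[1])  # True
```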
MIP Award presentation at the IEEE International Conference on Software Analy...Annibale Panichella
Presentation for the Most Influential Paper (MIP) award at the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) 2024
Abstract:
Existing defect prediction models use product or process metrics and machine learning methods to identify defect- prone source code entities. Different classifiers (e.g., linear regression, logistic regression, or classification trees) have been investigated in the last decade. The results achieved so far are sometimes contrasting and do not show a clear winner. In this paper we present an empirical study aiming at statistically analyzing the equivalence of different defect predictors. We also propose a combined approach, coined as CODEP (COmbined DEfect Predictor), that employs the classification provided by different machine learning techniques to improve the detection of defect-prone entities. The study was conducted on 10 open source software systems and in the context of cross-project defect prediction, that represents one of the main challenges in the defect prediction field. The statistical analysis of the results indicates that the investigated classifiers are not equivalent and they can complement each other. This is also confirmed by the superior prediction accuracy achieved by CODEP when compared to stand-alone defect predictors.
Breaking the Silence: the Threats of Using LLMs in Software EngineeringAnnibale Panichella
Presentation of our work presented at ICSE 2024 (NIER track) in Lisbon
Abstract:
Large Language Models (LLMs) have gained considerable traction within the Software Engineering (SE) community, impacting various SE tasks from code completion to test generation, from program repair to code summarization. Despite their promise, researchers must still be careful as numerous intricate factors can influence the outcomes of experiments involving LLMs.
This paper initiates an open discussion on potential threats to the validity of LLM-based research including issues such as closed-source models, possible data leakage between LLM training data and research evaluation, and the reproducibility of LLM-based findings.
In response, this paper proposes a set of guidelines tailored for SE researchers and Language Model (LM) providers to mitigate these concerns.
The implications of the guidelines are illustrated using existing good practices followed by LLM providers and a practical example for SE researchers in the context of test case generation.
Similar to A Fast Multi-objective Evolutionary Approach for Designing Large-Scale Optical Mode Sorter
Assessing Error Bound For Dominant Point DetectionCSCJournals
This paper compares three error bounds that can be used to make dominant point detection methods non-parametric. The three error bounds are based on the error in slope estimation due to digitization; however, each of the three methods takes a different approach to calculating the error bounds. This results in slightly different behaviors and slightly different values for the three methods. The impact of these error bounds is studied in the context of the non-parametric version of the widely used RDP method [1, 2] for dominant point detection. The recently derived error bound (the third error bound in this paper), which depends on both the length and the slope of the line segment, provides the most balanced dominant point detection results for a variety of curves.
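For context, the RDP method referenced above recursively splits a curve at the point farthest from the chord between its endpoints, discarding points whose deviation stays within a tolerance. A minimal sketch follows; the tolerance `eps` stands in for the derived error bounds, which are not reproduced here.

```python
# Minimal Ramer-Douglas-Peucker (RDP) polyline simplification.

def point_line_dist(p, a, b):
    # Perpendicular distance from p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def rdp(points, eps):
    if len(points) < 3:
        return points[:]
    # Find the point farthest from the chord joining the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left = rdp(points[: idx + 1], eps)
    right = rdp(points[idx:], eps)
    return left[:-1] + right

curve = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(curve, 1.0))  # [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Dominant points here are the corners that survive simplification; a non-parametric variant would replace the fixed `eps` with an error bound computed from the segment itself.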
Multiuser Detection with Decision-feedback Detectors and PIC in MC-CDMA SystemTELKOMNIKA JOURNAL
In this paper we propose iterative parallel decision-feedback (P-DF) receivers associated with
parallel interference cancellation (PIC) for multicarrier code division multiple access (MC-CDMA) systems
in a Rayleigh fading channel (COST 207). First, the most widely used detection techniques, minimum
mean-squared error (MMSE), maximum likelihood (ML), and PIC, were investigated in order to compare their
performance in terms of Bit Error Rate (BER) with parallel decision-feedback detection (P-DFD). An MMSE
detector that employs parallel decision feedback (MMSE-P-DFD) is considered and shows almost the
same BER performance as MMSE and ML, which outperform the other techniques. Second, an
iterative method based on the multi-stage P-DFD technique
(parallel DFD with two stages) and PIC is exploited to improve the performance of the system.
Channel Equalization of WCDMA Downlink System Using Finite Length MMSE-DFEIOSR Journals
Abstract: The performance of a WCDMA system deteriorates in the presence of a multipath fading environment. Fading destroys orthogonality and is responsible for multiple access interference (MAI). Although the conventional rake receiver provides reasonable performance in the WCDMA downlink thanks to path diversity, it does not restore orthogonality. A linear equalizer restores orthogonality and suppresses MAI, but it is not efficient, since its performance depends on the spectral characteristics of the channel. To overcome this, a Minimum Mean Square Error Decision Feedback Equalizer (MMSE-DFE) with a linear, anticausal feedforward filter, a causal feedback filter, and a simple detector is proposed in this paper. The filter taps of the finite-length DFE are derived using Cholesky factorization theory and are capable of suppressing noise, Intersymbol Interference (ISI), and MAI. This paper describes the WCDMA downlink system using the finite-length MMSE-DFE and takes into consideration the effects of interference, including additive white Gaussian noise, multipath fading, ISI, and MAI. Furthermore, the performance is compared with the conventional rake receiver and MMSE, and the simulation results are shown. Keywords – MMSE, MMSE-DFE, rake receiver, WCDMA
Real interpolation method for transfer function approximation of distributed ...TELKOMNIKA JOURNAL
A distributed parameter system (DPS) is one of the most complex systems in control theory. The transfer function of a DPS may contain rational, nonlinear, and irrational components, which makes studying it difficult in both the time domain and the frequency domain. In this paper, a systematic approach is proposed for linearizing a DPS. This approach is based on the real interpolation method (RIM) to approximate the transfer function of a DPS by a rational-order transfer function. The results of the numerical examples show that the method is simple, computationally efficient, and flexible.
A NEW ALGORITHM FOR SOLVING FULLY FUZZY BI-LEVEL QUADRATIC PROGRAMMING PROBLEMSorajjournal
This paper is concerned with a new method to find the fuzzy optimal solution of fully fuzzy bi-level non-linear (quadratic) programming (FFBLQP) problems, where all the coefficients and decision variables of both objective functions and the constraints are triangular fuzzy numbers (TFNs). The new method is based on decomposing the given problem into a bi-level problem with three crisp quadratic objective functions and bounded-variable constraints. In order to obtain a fuzzy optimal solution of FFBLQP problems, the concept of a tolerance membership function is used to develop a fuzzy max-min decision model that generates a satisfactory fuzzy solution, in which the upper-level decision maker (ULDM) specifies his/her objective functions and decisions with possible tolerances described by membership functions of fuzzy set theory. Then, the lower-level decision maker (LLDM) uses this preference information from the ULDM and solves his/her problem subject to the ULDM's restrictions. Finally, the decomposed method is illustrated by a numerical example.
Path Loss Prediction by Robust Regression Methodsijceronline
An optimal general type-2 fuzzy controller for Urban Traffic NetworkISA Interchange
The urban traffic network model is illustrated by state charts and an object diagram. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the modified backtracking search algorithm (MBSA) is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with the least waiting times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers.
The differential evolution (DE) algorithm has been applied as a powerful tool to find optimum switching angles for selective harmonic elimination pulse width modulation (SHEPWM) inverters. However, DE's performance is very dependent on its control parameters. Conventional DE generally uses either a trial-and-error mechanism or a tuning technique to determine appropriate values of the control parameters, and this process is very time consuming. In this paper, an adaptive control parameter is proposed in order to speed up the DE algorithm in optimizing SHEPWM switching angles precisely. The proposed adaptive control parameter is proven to enhance the convergence of the DE algorithm without requiring initial guesses. The results for both negative and positive modulation index (M) also indicate that the proposed adaptive DE is superior to conventional DE in generating SHEPWM switching patterns.
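The adaptive-control-parameter idea can be sketched in a jDE-style scheme, where F and CR travel with each individual and are occasionally resampled, so no manual tuning or initial guess is needed. The sphere function below is a stand-in for the SHEPWM harmonic-elimination objective, which is not reproduced here.

```python
# Self-adaptive differential evolution (jDE-style) on a toy objective.

import random

random.seed(42)

def sphere(x):
    return sum(v * v for v in x)

def adaptive_de(dim=3, pop_size=20, gens=300, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    F = [0.5] * pop_size
    CR = [0.9] * pop_size
    fit = [sphere(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Self-adaptation: occasionally resample this individual's F / CR.
            Fi = random.uniform(0.1, 1.0) if random.random() < 0.1 else F[i]
            CRi = random.random() if random.random() < 0.1 else CR[i]
            a, b, c = random.sample([k for k in range(pop_size) if k != i], 3)
            j_rand = random.randrange(dim)
            trial = [
                pop[a][j] + Fi * (pop[b][j] - pop[c][j])
                if (random.random() < CRi or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = sphere(trial)
            if f_trial <= fit[i]:        # successful parameters survive
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    return min(fit)

best = adaptive_de()
print(best)  # close to 0
```

Because successful (F, CR) pairs are inherited along with the solution they produced, the parameter values effectively evolve with the population instead of being fixed up front.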
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVESZac Darcy
Polygonal approximation plays a vital role in ubiquitous applications such as multimedia, geographic
information systems, and object recognition. An extensive number of polygonal approximation techniques for
digital planar curves have been proposed over the last decade, but there are no survey papers on recently
proposed techniques. A polygon is a collection of edges and vertices, and objects are represented using edges
and vertices or contour points (i.e., a polygon). Polygonal approximation represents the object with a smaller
number of dominant points (fewer edges and vertices), which reduces computation time and
memory usage. This paper presents a comparative study of polygonal approximation techniques for digital
planar curves with respect to their computation and efficiency.
Geoid height determination is one of the major problems of geodesy, because the use of satellite
techniques in geodesy is increasing. Geoid heights can be determined using different methods according
to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular
that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty
assessment have enabled us to develop more precise models for our requirements. In this study, how to
construct the best fuzzy model is examined. For this purpose, three different data sets were taken and two
different kinds of fuzzy models (two inputs, one output; and three inputs, one output) were formed for the calculation
of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights
obtained by GPS/levelling methods, and the fuzzy approximation models were tested on the test points.
Integrating Fuzzy Dematel and SMAA-2 for Maintenance Expensesinventionjournals
The majority of the allowances transferred to public institutions are spent on buying new equipment, materials, and facilities, and on their maintenance and repair. Some public sectors establish their own plants in order to reduce maintenance and repair costs and gain the ability to perform these activities themselves. However, developing technology and the variety of materials make repair and maintenance activities more expensive for them. In this study, vital criteria for a public institution are determined. Using the Fuzzy DEMATEL (Decision Making Trial And Evaluation Laboratory) method, the degree of importance is identified with two defuzzification methods, and the alternatives are ranked using SMAA-2 (Stochastic Multicriteria Acceptability Analysis) in three scenarios. The results show that different defuzzification methods change the order of preferences.
Effect of Feature Selection on Gene Expression Datasets Classification Accura...IJECEIAES
Feature selection attracts researchers who deal with machine learning and data mining. It consists of selecting the variables that have the greatest impact on the dataset classification and discarding the rest. This dimensionality reduction allows classifiers to be fast and more accurate. This paper examines the effect of feature selection on the accuracy of widely used classifiers in the literature. These classifiers are compared on three real datasets that are pre-processed with feature selection methods. More than 9% improvement in classification accuracy is observed, and k-means appears to be the classifier most sensitive to feature selection.
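A minimal filter-style sketch of the idea above: score each feature by how well it separates the classes, keep the top-k, and discard the rest. The scoring rule and the choice of k below are illustrative assumptions, not the paper's pipeline.

```python
# Filter-based feature selection: rank features by class-mean separation.

def feature_scores(X, y):
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        col0 = [row[j] for row, label in zip(X, y) if label == 0]
        col1 = [row[j] for row, label in zip(X, y) if label == 1]
        m0, m1 = sum(col0) / len(col0), sum(col1) / len(col1)
        spread = (max(col0 + col1) - min(col0 + col1)) or 1.0
        scores.append(abs(m0 - m1) / spread)  # separation relative to range
    return scores

def select_top_k(X, y, k):
    scores = feature_scores(X, y)
    ranked = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
    return sorted(ranked[:k])

# Feature 0 separates the classes; feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.15, 3.0], [0.9, 4.0], [0.95, 2.0], [0.85, 0.5]]
y = [0, 0, 0, 1, 1, 1]
print(select_top_k(X, y, 1))  # [0]
```

A classifier trained only on the selected columns then runs on fewer, more informative inputs, which is where the speed and accuracy gains reported above come from.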
Searching for Quality: Genetic Algorithms and Metamorphic Testing for Softwar...Annibale Panichella
More machine learning (ML) models are being introduced to the field of Software Engineering (SE) and have reached a stage of maturity at which they can be considered for real-world use. But the real world is complex, and testing these models often lacks explainability, feasibility, and computational capacity. Existing research introduced metamorphic testing to gain additional insights and certainty about a model by applying semantic-preserving changes to input data while observing the model output. As this is currently done at random places, it can lead to potentially unrealistic data points and high computational costs. With this work, we introduce genetic search as an aid for metamorphic testing in SE ML. Exploiting the delta in output as a fitness function, the evolutionary search optimizes the transformations to produce higher deltas with fewer changes. We perform a case study minimizing F1 and MRR for Code2Vec on a representative sample from java-small with both genetic and random search. Our results show that within the same amount of time, genetic search achieved a 10% decrease in F1, while random search produced a 3% drop.
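The delta-as-fitness idea can be sketched with a toy genetic search. The surrogate "model" and the semantic-preserving edits below are hypothetical stand-ins for Code2Vec and real Java refactorings; the fitness rewards a large output delta obtained with few edits.

```python
# Toy genetic search for metamorphic testing: evolve sequences of
# semantic-preserving edits that maximize the change in a model's output.

import random

random.seed(7)

SOURCE = "int add(int a, int b) { return a + b; }"

# Semantic-preserving edits (stand-ins for real refactorings).
EDITS = [
    lambda s: s.replace("a", "x"),            # rename identifier
    lambda s: s.replace("b", "y"),            # rename identifier
    lambda s: s + " // comment",              # append a comment
    lambda s: s.replace(" ", "  "),           # widen whitespace
]

def model_score(code):
    # Surrogate model: a brittle hash-like score sensitive to surface form.
    return sum(ord(c) * (i % 7 + 1) for i, c in enumerate(code)) % 1000

def fitness(individual):
    code = SOURCE
    for gene in individual:
        code = EDITS[gene](code)
    # Reward a large output delta, penalize the number of edits applied.
    return abs(model_score(code) - model_score(SOURCE)) - 5 * len(individual)

def genetic_search(pop_size=20, gens=30, max_len=3):
    pop = [[random.randrange(len(EDITS)) for _ in range(random.randint(1, max_len))]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(len(child))] = random.randrange(len(EDITS))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_search()
print(fitness(best) > 0)
```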
Keynote at the 5th Workshop on Validation, Analysis, and Evolution of Software Tests (VST 2022)
Website: https://rramler.github.io/vst2022/
Abstract: Nowadays, Artificial intelligence (AI) plays a critical role in automating different human-intensive tasks, including software engineering tasks. Since the late 70s, researchers have proposed automated techniques to automatically generate test data (fuzzing) or test suites (test suite generation). Proposed techniques span from simple heuristics to more advanced AI-based techniques and evolutionary intelligence in particular. While recent studies have shown that these techniques achieve high coverage and find bugs, generated tests can be hard to understand and maintain. This talk will provide an overview and reflection on state-of-the-art techniques, open challenges, and research opportunities towards more accessible tests that are easy to integrate within the DevOps cycle. To this aim, the talk will cover relevant application domains, including "traditional" software and emerging cyber-physical systems.
This presentation describes the results published in the following paper published in the Journal INFORMATION AND SOFTWARE TECHNOLOGY
TITLE: A Large Scale Empirical Comparison of State-of-the-art Search-based Test Case Generators
AUTHORS: Annibale Panichella, Fitsum Kifetew, Paolo Tonella
ABSTRACT: Context: Replication studies and experiments form an important foundation in advancing scientific research. While their prevalence in Software Engineering is increasing, there is still more to be done. Objective: This article aims to extend our previous replication study on search-based test generation techniques by performing a large-scale empirical comparison with further techniques from state of the art. Method: We designed a comprehensive experimental study involving six techniques, a benchmark composed of 180 non-trivial Java classes, and a total of 21,600 independent executions. Metrics regarding the effectiveness and efficiency of the techniques were collected and analyzed by means of statistical methods. Results: Our empirical study shows that single-target approaches are generally outperformed by multi-target approaches, while within the multi-target approaches, DynaMOSA/MOSA, which are based on many-objective optimization, outperform the others, in particular for complex classes. Conclusion: The results obtained from our large-scale empirical investigation confirm what has been reported in previous studies, while also highlighting striking differences and novel observations. Future studies, on different benchmarks and considering additional techniques, could further reinforce and extend our findings.
An Adaptive Evolutionary Algorithm based on Non-Euclidean Geometry for Many-O...Annibale Panichella
In the last decade, several evolutionary algorithms have been proposed in the literature for solving multi- and many-objective optimization problems. The performance of such algorithms depends on their capability to produce a well-diversified front (diversity) that is as closer to the Pareto optimal front as possible (proximity). Diversity and proximity strongly depend on the geometry of the Pareto front, i.e., whether it forms a Euclidean, spherical or hyperbolic hypersurface. However, existing multi- and many-objective evolutionary algorithms show poor versatility on different geometries. To address this issue, we propose a novel evolutionary algorithm that: (1) estimates the geometry of the generated front using a fast procedure with O(M × N) computational complexity (M is the number of objectives and N is the population size); (2) adapts the diversity and proximity metrics accordingly. Therefore, to form the population for the next generation, solutions are selected based on their contribution to the diversity and proximity of the non-dominated front with regards to the estimated geometry.
Computational experiments show that the proposed algorithm outperforms state-of-the-art multi and many-objective evolutionary algorithms on benchmark test problems with different geometries and number of objectives (M=3,5, and 10).
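The geometry-estimation step mentioned above can be sketched as a root-finding problem: given a representative non-dominated point x normalized to [0, 1], find the exponent p such that sum_i x_i^p = 1, where p < 1, p = 1, and p > 1 indicate hyperbolic, flat, and spherical fronts respectively. The Newton-Raphson solver below is a generic illustration under that assumption, not the authors' exact O(M x N) procedure.

```python
# Estimate the curvature p of a front sum_i x_i^p = 1 via Newton-Raphson.

import math

def estimate_curvature(x, iters=50):
    """Solve f(p) = sum(x_i^p) - 1 = 0 for p, starting from the flat front."""
    p = 1.0
    for _ in range(iters):
        f = sum(v ** p for v in x) - 1.0
        df = sum((v ** p) * math.log(v) for v in x if v > 0)  # f'(p)
        if abs(df) < 1e-12:
            break
        p -= f / df  # Newton-Raphson update
    return p

# A point on the circle x^2 + y^2 = 1 yields p = 2 (spherical front).
point = [math.sqrt(0.5), math.sqrt(0.5)]
print(round(estimate_curvature(point), 3))  # 2.0
```

The recovered p can then parameterize distance computations on the front, in the spirit of the geodesic distances mentioned in the AGE-MOEA-II abstract.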
Speeding-up Software Testing With Computational IntelligenceAnnibale Panichella
Software testing is a crucial activity for assessing the correct behavior of a program. However, it is also costly, since it consumes a large share of software development time. For this reason, researchers have investigated techniques to automate the process of creating test cases. The key idea is to use meta-heuristics (e.g., Genetic Algorithms) to automatically generate test cases that reveal software failures. In this talk, I will present a case study in the automotive domain, showing the greater effectiveness and efficiency of meta-heuristics compared to manual testing.
Incremental Control Dependency Frontier Exploration for Many-Criteria Test C...Annibale Panichella
The presentation was given at the 10th International Symposium on Seach-Based Software Engineering (SSBSE 2018)
Abstract:
Several criteria have been proposed over the years for measuring test suite adequacy. Each criterion can be converted into a specific objective function to optimize with search-based techniques in an attempt to generate test suites achieving the highest possible coverage for that criterion. Recent work has tried to optimize for multiple criteria at once by constructing a single objective function obtained as a weighted sum of the objective functions of the respective criteria. However, this solution suffers from the problem of sum scalarization, i.e., differences along the various dimensions being optimized get lost when such dimensions are projected onto a single value. Recent advances in SBST formulated coverage as a many-objective optimization problem rather than applying sum scalarization. Starting from this formulation, in this work we apply many-objective test generation that handles multiple adequacy criteria simultaneously. To scale the approach to the large number of objectives to be optimized at the same time, we adopt an incremental strategy, where only coverage targets in the control dependency frontier are considered until the frontier is expanded by covering a previously uncovered target.
We report on the advances in this sixth edition of the JUnit tool competitions. This year the contest introduces new benchmarks to assess the performance of JUnit testing tools on different types of real-world software projects. Following on the statistical analyses from the past contest, we have extended them with the performance of the combined tool, aiming to beat the human-made tests. Overall, the 6th competition evaluates four automated JUnit testing tools, taking as baseline human-written test cases for the selected benchmark projects. The paper details the modifications performed to the methodology and provides full results of the competition.
After four successful JUnit tool competitions, we report on the achievements of a new Java Unit Testing Tool Competition. This 5th contest introduces statistical analyses in the benchmark infrastructure and has been validated against the results of the previous 4th edition using significance tests. Overall, the competition evaluates four automated JUnit testing tools, taking as baseline human-written test cases from real projects. The paper details the modifications performed to the methodology and provides full results of the competition.
To reduce the effort developers have to make for crash debugging, researchers have proposed several solutions for automatic failure reproduction.
Recent advances proposed the usage of symbolic execution, mutation analysis, and directed model checking as underlying techniques for post-failure analysis of crash stack traces.
However, existing approaches still cannot reproduce many real-world crashes due to various limitations, such as environment dependencies, path explosion, and time complexity.
In this paper, we present EvoCrash, a post-failure approach which uses a novel Guided Genetic Algorithm (GGA) to cope with the large search space characterizing real-world software programs, and thereby address major challenges in automated crash replication.
Results of an empirical study on three open-source systems show that EvoCrash can successfully replicate 33 (66%) of real-world crashes, thereby outperforming the three cutting-edge crash replication techniques.
Manual crash reproduction is a labor-intensive and time-consuming task. Therefore, several solutions have been proposed in the literature for automatic crash reproduction, including generating unit tests via symbolic execution and mutation analysis. However, various limitations adversely affect the capabilities of the existing solutions in covering a wider range of crashes, because generating helpful tests that trigger specific execution paths is particularly challenging. In this paper, we propose a new solution for automatic crash reproduction based on evolutionary unit test generation techniques. The proposed solution exploits crash data from collected stack traces to guide search-based algorithms toward the generation of unit test cases that can reproduce the original crashes. Results from our preliminary study on real crashes from the Apache Commons libraries show that our solution can successfully reproduce crashes that are not reproducible by two other state-of-the-art techniques.
Parameterizing and Assembling IR-based Solutions for SE Tasks using Genetic A...Annibale Panichella
Information Retrieval (IR) approaches are nowadays used to support various software engineering tasks, such as feature location, traceability link recovery, clone detection, or refactoring. However, previous studies showed that an inadequate instantiation of an IR technique and of the underlying process could significantly affect the performance of such approaches in terms of precision and recall. This paper proposes the use of Genetic Algorithms (GAs) to automatically configure and assemble an IR process for software engineering tasks. The approach (named GA-IR) determines the (near) optimal solution to be used for each stage of the IR process, i.e., term extraction, stop-word removal, stemming, indexing, and the calibration of an algebraic IR method. We applied GA-IR to two different software engineering tasks, namely traceability link recovery and identification of duplicate bug reports. The results of the study indicate that GA-IR outperforms approaches previously published in the literature, and that it does not significantly differ from an ideal upper bound that could be achieved by a supervised and combinatorial approach.
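To make the idea concrete, a GA-IR-style chromosome can be encoded as one concrete choice per IR stage. The sketch below uses an illustrative search space; the stage names and options are assumptions, not the paper's exact encoding:

```python
import random

# Illustrative search space: one gene per stage of the IR process
IR_SPACE = {
    "term_extraction": ["identifiers", "identifiers+comments"],
    "stopword_removal": [True, False],
    "stemming": ["none", "porter", "snowball"],
    "weighting": ["tf", "tf-idf", "boolean"],
}

def random_config(rng):
    """One chromosome = one fully-instantiated IR pipeline."""
    return {stage: rng.choice(opts) for stage, opts in IR_SPACE.items()}

def uniform_crossover(a, b, rng):
    """Recombine two pipeline configurations stage by stage."""
    return {s: (a if rng.random() < 0.5 else b)[s] for s in IR_SPACE}
```

A fitness function would then run each configured pipeline on the task at hand (e.g., traceability recovery) and score it by precision/recall.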
Business applications are more and more collaborative (cross-domain, cross-device, service composition). Security shall focus on the overall application scenario, including the interplay between its entities/devices/services, not only on the isolated systems within it. In this paper, we propose the Security Threat Identification And TEsting (STIATE) toolkit to support development teams in the security assessment of their under-development applications, focusing on subtle security logic flaws that may go undetected by using current industrial technology. At design time, STIATE supports the development teams in threat modeling and analysis by automatically identifying potential threats (via model checking and mutation techniques) on top of sequence diagrams enriched with security annotations (including what-if conditions). At run time, STIATE supports the development teams in testing by exploiting the identified threats to automatically generate and execute test cases on the up-and-running application. We demonstrate the usage of the STIATE toolkit on an application scenario employing the SAML Single Sign-On multi-party protocol, a well-known industrial security standard largely studied in previous literature.
Reformulating Branch Coverage as a Many-Objective Optimization ProblemAnnibale Panichella
Test data generation has been extensively investigated as a search problem, where the search goal is to maximize the number of covered program elements (e.g., branches). Recently, the whole suite approach, which combines the fitness functions of single branches into an aggregate, test suite-level fitness, has been demonstrated to be superior to the traditional single-branch-at-a-time approach. In this paper, we propose to consider branch coverage directly as a many-objective optimization problem, instead of aggregating multiple objectives into a single value, as in the whole suite approach. Since programs may have hundreds of branches (objectives), traditional many-objective algorithms that are designed for numerical optimization problems with fewer than 15 objectives are not applicable. Hence, we introduce a novel, highly scalable many-objective genetic algorithm, called MOSA (Many-Objective Sorting Algorithm), suitably defined for the many-objective branch coverage problem. Results achieved on 64 Java classes indicate that the proposed many-objective algorithm is significantly more effective and more efficient than the whole suite approach. In particular, effectiveness (coverage) was significantly improved in 66% of the subjects, and efficiency (search budget consumed) was improved in 62% of the subjects on which effectiveness remains the same.
Results for EvoSuite-MOSA at the Third Unit Testing Tool CompetitionAnnibale Panichella
EvoSuite-MOSA is a unit test data generation tool that employs a novel many-objective optimization algorithm suitably developed for branch coverage. It was implemented by extending the EvoSuite test data generation tool. In this paper we present the results achieved by EvoSuite-MOSA in the third Unit Testing Tool Competition at SBST'15. Among six participants, EvoSuite-MOSA stood third with an overall score of 189.22.
Adaptive User Feedback for IR-based Traceability RecoveryAnnibale Panichella
Traceability recovery allows software engineers to understand the interconnections among software artefacts and, thus, provides important support to software maintenance activities. In the last decade, Information Retrieval (IR) has been widely adopted as the core technology of semi-automatic tools to extract traceability links between artefacts according to their textual information. However, a widely known problem of IR-based methods is that some artefacts may share more words with non-related artefacts than with related ones. To overcome this problem, enhancing strategies have been proposed in the literature. One of these strategies is relevance feedback, which allows modifying the textual similarity according to information about links classified by the users. Even though this technique is widely used for natural language documents, previous work has demonstrated that relevance feedback is not always useful for software artefacts. In this paper, we propose an adaptive version of relevance feedback that, unlike the standard version, considers the characteristics of both (i) the software artefacts and (ii) the previously classified links when deciding whether and how to apply the feedback. An empirical evaluation conducted on three systems suggests that the adaptive relevance feedback outperforms both a pure IR-based method and the standard feedback.
Estimating the Evolution Direction of Populations to Improve Genetic AlgorithmsAnnibale Panichella
Meta-heuristics have been successfully used to solve a wide variety of problems. However, one issue with many techniques is the risk of being trapped in local optima, or of creating a limited variety of solutions (a problem known as ``population drift''). Over the years, different kinds of techniques have been proposed to deal with population drift, for example hybridizing genetic algorithms with local search techniques or using niching techniques.
This paper proposes a technique, based on Singular Value Decomposition (SVD), to enhance the population diversity of Genetic Algorithms (GAs). SVD helps to estimate the evolution direction and drive the next generations towards orthogonal dimensions.
The proposed SVD-based GA has been evaluated on 11 benchmark problems and compared with a simple GA and a GA with a distance-crowding schema. Results indicate that SVD-based GA achieves significantly better solutions and exhibits a quicker convergence than the alternative techniques.
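A minimal sketch of the underlying idea (not the paper's exact formulation): applying SVD to the displacement between two consecutive generations exposes the dominant directions the population is drifting along, which the GA can then avoid or complement:

```python
import numpy as np

def evolution_directions(prev_pop, curr_pop, k=1):
    """Estimate the top-k directions along which the population moved
    between two generations, via SVD of the centered displacement."""
    moves = curr_pop - prev_pop
    centered = moves - moves.mean(axis=0)
    # Rows of vt are unit vectors; the first rows span the drift directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]
```

Offspring can then be perturbed along directions orthogonal to the returned vectors to counteract population drift.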
When and How Using Structural Information to Improve IR-Based Traceability Re...Annibale Panichella
Traceability recovery is a key software maintenance activity in which software engineers extract the relationships among software artifacts. Information Retrieval (IR) has been widely accepted as a method for automated traceability recovery based on the textual similarity among the software artifacts. However, a notorious difficulty for IR-based methods is that artifacts may be related even if they are not textually similar. A growing body of work addresses this challenge by combining IR-based methods with structural information from source code. Unfortunately, the accuracy of such methods is highly dependent on the IR methods. If IR methods perform poorly, the combined approaches may perform even worse.
In this paper, we propose to use the feedback provided by software engineers when classifying candidate links to regulate the effect of using structural information. Specifically, our approach only considers structural information when the traceability links from the IR methods are verified by developers and classified as correct links. An empirical evaluation conducted on three systems suggests that our approach outperforms both a pure IR-based method and a simple approach for combining textual and structural information.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity, which marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
What are greenhouse gases and how many gases affect the Earth?moosaasad1975
What are greenhouse gases? How do they affect the Earth and its environment? What is the future of the environment and the Earth as the weather and the climate change?
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...Wasswaderrick3
In this book, we use conservation-of-energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity and then, from this, we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects of interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior composed of discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms, and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to have this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to put you in contact.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io's surface using adaptive optics at visible wavelengths.
A Fast Multi-objective Evolutionary Approach for Designing Large-Scale Optical Mode Sorter
1. A Fast Multi-Objective Evolutionary Approach for Designing Large-Scale Optical Mode Sorters
Annibale Panichella
a.panichella@tudelft.nl
@AnniPanic
Giuseppe Di Domenico
giuseppe.didomenico@attocube.com
4. Mode Sorters
Hermite-Gauss Modes
A sorter is a device able to discriminate optical beams based on various signal properties:
• Wavelength
• Amplitude
• Phase
• Polarization
• …
• Mode (mode sorter)
5. The Optimization Problem
Problem: Design a device (on a 2D surface plasmon polariton platform) that can distinguish and route the input plasmonic beams based on their mode.
Mode sorter by Di Domenico et al., ACS Photonics 2022.
The presence of the polymethyl methacrylate (PMMA) changes the beam propagation locally. Globally, it determines how each mode is routed/redirected.
7. The Optimization Problem
[Diagram: input signal → mode sorter → actual output locations.]
The objectives to optimize are the distances between the target and the actual output locations.
Number of objectives = number of modes to redirect.
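With 2-D output coordinates, these objectives can be written directly as one distance per mode (a minimal sketch; the coordinate representation is an assumption):

```python
import math

def sorter_objectives(actual, target):
    """One objective per mode: the distance between where the sorter
    actually routed the mode and its target output location."""
    return [math.dist(a, t) for a, t in zip(actual, target)]
```

A candidate device that routes every mode exactly onto its target location would score zero on all objectives.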
8. Open Challenges
• Very large search space (16K decision variables)
• Not all decision variables (device blocks) have the same importance
• The non-dominated fronts show a hyperbolic geometry
Our intuitions:
1. Use a MOEA for large-scale problems
2. Use a MOEA for hyperbolic fronts
3. Use linkage learning to learn which blocks matter
10. L2-AGE-MOEA
Our framework inherits the key features of AGE-MOEA-II:
• It uses ranking and dominance
• It estimates the geometry of the first non-dominated front
• It updates the diversity and convergence on the estimated geometry
GECCO ’23, July 15–19, 2023, Lisbon, Portugal A. Panichella and G. Di Domenico
Algorithm 2: L2-AGE-MOEA
Input: M: number of objectives and N: population size
Result: Final population P
1  begin
2    P ← RANDOM-POPULATION(N)
3    F ← FAST-NONDOMINATED-SORT(P ∪ Q)
4    FOS ← INFER-MODEL(F1, …)
5    while not (stop_condition) do
6      Q ← ∅
7      forall i in 1..|P| do
8        Parent ← Pi
9        Donor ← TOURNAMENT-SELECTION(P)
10       O ← REPRODUCTION(Parent, Donor, FOS)
11       O ← ADAPTIVE-MUTATION(O)
12       Q ← Q ∪ {O}
13     F ← FAST-NONDOMINATED-SORT(P ∪ Q)
14     F ← NORMALIZE(F)
15     p ← NEWTON-RAPHSON(F1)  /* Eq. 3 */
16     d ← 1  /* first non-dominated rank */
17     while |P| + |Fd| ≤ N do
18       Fp ← MANIFOLD-PROJECTION(Fd, p)
19       D ← GEODESIC-DIV(Fp, p)
20       SURVIVAL-SCORE(D, F, d, p)
21       P ← P ∪ Fd
22       d ← d + 1
23     SORT(Fd)  /* by survival scores */
24     P ← P ∪ Fd[1 : (N − |P|)]
25     FOS ← INFER-MODEL(F1, …)
26   return P
The solutions/devices are evaluated using the beam propagation method [10] and considering the following device and beam …
The remainder of Algorithm 2 is identical to the main loop of the original AGE-MOEA-II. In a nutshell, the parent P and offspring Q populations are combined and sorted using the fast non-dominated sorting algorithm (line 13) and normalized (line 14). The first front F1 is then used to compute the curvature p using the Newton-Raphson method (Equation 3). Finally, in lines 17-22, the survival score (line 20) is computed iteratively for all fronts using the geodesic distance (line 19).
The loop between lines 17 and 24 inserts as many individuals into the new population P as possible, based on their ranks and until reaching the population size N. L2-AGE-MOEA first selects the non-dominated solutions from F1; if |F1| < N, the loop selects the non-dominated solutions from F2, and so on. The loop ends when inserting all the solutions from the front d would exceed the maximum population size, as shown in line 24. In this case, the algorithm selects the remaining solutions from the front Fd according to the descending order of survival score in lines 24-25.
Finally, the linkage model is retrained in line 25 based on the first non-dominated front F1 previously computed on line 13, i.e., on the latest population.
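The curvature estimation in line 15 can be sketched as follows. Assuming the normalized front is modeled as sum(x_i^p) = 1, Newton-Raphson solves g(p) = sum(c_i^p) − 1 = 0 for a representative normalized point c. This is a simplified sketch; Equation 3 in the paper may differ, e.g., in how the representative point is chosen:

```python
import math

def estimate_curvature(c, p0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson estimate of the exponent p such that the
    normalized point c lies on the L^p unit 'sphere': sum(c_i^p) = 1."""
    p = p0
    for _ in range(max_iter):
        g = sum(ci ** p for ci in c) - 1.0
        # Derivative of g with respect to p (terms with ci == 0 vanish).
        dg = sum((ci ** p) * math.log(ci) for ci in c if ci > 0.0)
        if abs(dg) < 1e-15:
            break
        p_next = p - g / dg
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p
```

For instance, a point on the Euclidean unit circle such as (0.6, 0.8) yields p ≈ 2 (a spherical front), while p < 1 corresponds to the hyperbolic geometry observed for the mode sorter fronts.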
3.1 Linkage learning with Clustering Methods
Linkage learning [43] is a category of techniques applied to infer linkage structures from a population, i.e., groups or clusters of promising decision variable values that contribute to achieving …
11. Linkage Learning
Key new ingredients:
• It uses hierarchical clustering to infer the linkage structures (LL)
• It applies a stochastic crossover that leverages the linkage structure
• It uses a linearly-decreasing mutation rate (bit-flip mutation)
12. Linkage Learning
[Figure 1: example of a dendrogram, or Family Of Subsets (FOS), over decision variables: leaves G1, G2, G3, G4, …; internal nodes {G1, G2}, {G3, G4}, {G3, G4, …}, {G1, …}; merge heights given by the Hamming distance.]
Decision variables that always take the same values across individuals have a distance of zero and are clustered together in the lower internal nodes of the dendrogram. Instead, decision variables that never have the same values across the different individuals are also clustered together, but in the top internal nodes of the dendrogram. The distance between pairs of decision variables is computed using the Hamming distance, a well-known distance function for binary vectors. This is also the distance recommended in the literature due to its low (linear) computational complexity [10, 24].
[Slide diagram: a solution/device (regions with and without PMMA) is linearized into a binary vector x1 x2 x3 x4 … and fed to hierarchical clustering.]
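The linkage-structure inference can be sketched with standard SciPy primitives: cluster the decision variables (columns of the binary population matrix) hierarchically under the Hamming distance, and collect every node of the dendrogram as one subset of the FOS. This is a sketch of the idea; the paper's INFER-MODEL may differ in details such as the linkage criterion:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

def infer_fos(population):
    """Build a Family Of Subsets (FOS) over the decision variables of a
    binary population by hierarchical clustering with Hamming distance."""
    variables = population.T                   # one row per decision variable
    merges = linkage(pdist(variables, metric="hamming"), method="average")
    n = variables.shape[0]
    clusters = {i: [i] for i in range(n)}      # leaves: singleton subsets
    fos = [[i] for i in range(n)]
    for k, row in enumerate(merges):
        a, b = int(row[0]), int(row[1])
        clusters[n + k] = sorted(clusters[a] + clusters[b])
        fos.append(clusters[n + k])
    fos.pop()  # drop the root: copying *all* variables is plain cloning
    return fos
```

Identical columns (Hamming distance zero) merge first and end up in the lower nodes, exactly as described in the figure caption above.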
13. Stochastic Linkage-Based Crossover
[Slides 13-16 animate the operator: a Parent and a Donor solution (decision variables x1 x2 x3 x4 …) are recombined according to the linkage structures (Family of Subsets); a random cut-point through the dendrogram selects the subsets, e.g., {x3, x4, …} and {x1, x2}.]
Each cluster (group of genes) has a 50% chance to be copied from the donor solution.
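The stochastic linkage-based crossover sketched on these slides then reduces to one coin flip per FOS subset (a minimal sketch):

```python
import random

def linkage_crossover(parent, donor, fos, rng, p_copy=0.5):
    """For each linkage cluster (subset of variable indices), copy the
    donor's genes into the offspring with probability p_copy (50% on
    the slide); otherwise keep the parent's genes."""
    child = list(parent)
    for cluster in fos:
        if rng.random() < p_copy:
            for i in cluster:
                child[i] = donor[i]
    return child
```

Because whole clusters move together, co-adapted groups of device blocks are preserved instead of being split by an arbitrary cut-point.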
17. Mutation Rate
17
advanced adaptive schema as part of our future agenda.
Given that we handle a binary problem, L2-AGE-MOEA uses the
bit-�ip mutation, which randomly �ips some decision variable val-
ues from one to zero (or vice versa) based on the mutation rate
'. Instead of using a constant mutation rate, we use a linearly de-
creasing mutation rate, such as it has a large mutation rate in the
initial generation (for better exploration) and a lower mutation rate
in the last generation (for better exploitation). More precisely, the
mutation rate ' at the generation 8 is computed as follows:
R_i = R_end + ((ES_max − ES_i) / ES_max) × (R_start − R_end)    (6)
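In code, the schedule of Eq. (6) together with bit-flip mutation can be sketched as follows; the default R_start and R_end values are illustrative placeholders, not the paper's settings:

```python
import random

def mutation_rate(es_i, es_max, r_start=0.01, r_end=0.0001):
    """Eq. (6): mutation rate decreases linearly from r_start (at the first
    evaluation) down to r_end (when the budget es_max is exhausted)."""
    return r_end + (es_max - es_i) / es_max * (r_start - r_end)

def bit_flip(solution, rate, rng=random):
    """Bit-flip mutation: flip each binary decision variable independently
    with probability `rate`."""
    return [1 - b if rng.random() < rate else b for b in solution]
```

At es_i = 0 the rate equals r_start (full exploration); at es_i = es_max it equals r_end (pure exploitation), matching the linear decrease described above.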
[Plot: linearly decreasing mutation rate R versus fitness evaluations (FEs), starting at Rstart and reaching Rend at the maximum number of FEs]
19. Benchmarks
We consider four different mode sorters:
• Two-mode sorter (12350 decision variables)
• Three-mode sorter (16200 decision variables)
• Two-mode + Structure optimization (2+1 objectives)
• Three-mode + Structure optimization (3+1 objectives)
Characteristics of the sorters:
• Plasmonic refractive index n₀ = 1.008856
• Plasmonic field with wavelength λ = 1.064 · 10⁻⁶ / n₀
• Vacuum wavevector 2π/λ
• …
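A quick arithmetic check of these constants (assuming, as listed, that the wavelength is 1.064 · 10⁻⁶ m scaled by the refractive index n₀):

```python
import math

n0 = 1.064e-6 and 1.008856       # plasmonic refractive index (see slide)
n0 = 1.008856
lam = 1.064e-6 / n0              # wavelength in the medium [m]
k = 2 * math.pi / lam            # wavevector 2*pi/lambda [rad/m]
```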
20. Selected MOEAs
• AGE-MOEA-II [A. Panichella 2022]: no linkage learning; handles hyperbolic fronts
• L2-NSGA [Olsthoorn and Panichella 2021]: with linkage learning; does not consider hyperbolic fronts
• NSGA-II [Deb et al. 2002]: no linkage learning; does not consider hyperbolic fronts
• L2-AGE-MOEA [our solution]: with linkage learning; handles hyperbolic fronts
21. Overall Results
Median and IQR of the IGD and HV across 30 runs
△ means better than, ▼ means worse than L2-AGE-MOEA
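For reference, IGD (lower is better) averages the distance from each reference-front point to the closest produced solution; a minimal sketch of the standard computation with Euclidean distances:

```python
import math

def igd(reference_front, front):
    """Inverted Generational Distance: mean distance from each reference
    point to its nearest point in the produced front (lower is better)."""
    return sum(min(math.dist(r, s) for s in front)
               for r in reference_front) / len(reference_front)
```

A front identical to the reference scores 0; the further (and less diverse) the produced solutions, the larger the value.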
Table 1: Median and Interquartile Range (IQR) values achieved by the different evolutionary algorithms. M denotes the number of objectives while D is the number of decision variables. The best values are highlighted in grey.
Results for IGD
Problem M D NSGA-II AGE-MOEA-II L2-NSGA L2-AGE-MOEA
Two-Beams 2 12350 1.4232e+0 (4.85e-1) ▼ 8.6429e-1 (3.13e-1) ▼ 1.0086e+0 (4.65e-1) ▼ 2.6545e-1 (2.19e-1)
Two-Beams + Structure 3 12350 1.2744e+0 (2.40e-1) ▼ 7.3520e-1 (3.44e-1) ▼ 6.2873e-1 (1.86e-1) ▼ 4.1567e-1 (2.02e-1)
Three-Beams 3 16200 1.1844e+1 (1.73e+0) ▼ 2.5116e+0 (9.56e-1) ▼ 6.4420e+0 (3.62e+0) ▼ 3.5408e-1 (2.10e-1)
Three-Beams + Structure 4 16200 6.7468e-1 (2.92e-2) ▼ 3.701e-1 (9.61e-2) ▼ 2.462e-1 (2.48e-2) ▼ 1.9041e-1 (3.86e-3)
Results for HV
Problem M D NSGA-II AGE-MOEA-II L2-NSGA L2-AGE-MOEA
Two-Beams 2 12350 2.0254e-1 (4.65e-2) ▼ 2.6338e-1 (3.51e-2) ▼ 2.7187e-1 (4.40e-2) ▼ 3.4791e-1 (4.13e-2)
Two-Beams + Structure 3 12350 7.7192e-2 (1.63e-2) ▼ 2.1024e-1 (1.76e-2) ▼ 1.3591e-1 (7.30e-2) ▼ 3.3073e-1 (2.47e-2)
Three-Beams 3 16200 5.2256e-5 (2.17e-3) ▼ 2.2892e-1 (3.84e-2) ▼ 7.7827e-2 (1.10e-1) ▼ 3.7184e-1 (2.34e-2)
Three-Beams + Structure 4 16200 2.5452e-2 (1.18e-2) ▼ 2.5838e-1 (2.78e-2) ▼ 1.4414e-2 (4.82e-3) ▼ 3.3585e-1 (1.76e-2)
methods. Instead, AGE-MOEA-II does not use any linkage-learning methods but uses front modeling methods.
22. Results for Two-Beams
[Plot legend: Reference R, L2-AGE-MOEA, L2-NSGA, NSGA-II, AGE-MOEA-II]
Figure 2: Fronts produced by the different MOEAs for Two-Beams with M=2 and population size N=100. The reference front R includes all best (non-dominated) solutions.
• Reference front R = the best non-dominated solutions from all EAs and all runs
• We plot the fronts with the median HV
• All fronts show a hyperbolic geometry (curvature p < 1)
23. The Generated Sorter
GECCO ’23, July 15–19, 2023, Lisbon, Portugal A. Panichella and G. Di Domenico
(a) Generated device (right side) and the corresponding beam propagation (left side)
(b) Target output signal (red line) and generated output
signal (blue line) for the two HG input beams
Figure 3: Example of device generated by L2-AGE-MOEA for the Two-Beams+Structure problem.
depicted in Figure 3. Figure 3a plots the device (right-hand side);
manually. Multi-objective evolutionary algorithms (MOEAs) have
[Device image: two-mode sorter, 52 μm × 38 μm, with target and generated output signals]