Ana I. Pereira · Florbela P. Fernandes ·
João P. Coelho · João P. Teixeira ·
Maria F. Pacheco · Paulo Alves ·
Rui P. Lopes (Eds.)

Optimization, Learning
Algorithms and Applications

First International Conference, OL2A 2021
Bragança, Portugal, July 19–21, 2021
Revised Selected Papers

Communications in Computer and Information Science 1488
Editorial Board Members
Joaquim Filipe
Polytechnic Institute of Setúbal, Setúbal, Portugal
Ashish Ghosh
Indian Statistical Institute, Kolkata, India
Raquel Oliveira Prates
Federal University of Minas Gerais (UFMG), Belo Horizonte, Brazil
Lizhu Zhou
Tsinghua University, Beijing, China
More information about this series at https://link.springer.com/bookseries/7899
Ana I. Pereira · Florbela P. Fernandes ·
João P. Coelho · João P. Teixeira ·
Maria F. Pacheco · Paulo Alves ·
Rui P. Lopes (Eds.)
Optimization, Learning
Algorithms and Applications
First International Conference, OL2A 2021
Bragança, Portugal, July 19–21, 2021
Revised Selected Papers
Editors
Ana I. Pereira
Instituto Politécnico de Bragança
Bragança, Portugal
João P. Coelho
Instituto Politécnico de Bragança
Bragança, Portugal
Maria F. Pacheco
Instituto Politécnico de Bragança
Bragança, Portugal
Rui P. Lopes
Instituto Politécnico de Bragança
Bragança, Portugal
Florbela P. Fernandes
Instituto Politécnico de Bragança
Bragança, Portugal
João P. Teixeira
Instituto Politécnico de Bragança
Bragança, Portugal
Paulo Alves
Instituto Politécnico de Bragança
Bragança, Portugal
ISSN 1865-0929 ISSN 1865-0937 (electronic)
Communications in Computer and Information Science
ISBN 978-3-030-91884-2 ISBN 978-3-030-91885-9 (eBook)
https://doi.org/10.1007/978-3-030-91885-9
© Springer Nature Switzerland AG 2021
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, expressed or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The volume CCIS 1488 contains the refereed proceedings of the International
Conference on Optimization, Learning Algorithms and Applications (OL2A 2021), an
event that, due to the COVID-19 pandemic, was held online.
OL2A 2021 provided a space for the research community on optimization and
learning to get together and share the latest developments, trends, and techniques
as well as develop new paths and collaborations. OL2A 2021 had more than 400
participants in an online environment throughout the three days of the conference
(July 19–21, 2021), discussing topics associated with areas such as optimization
and learning, and state-of-the-art applications related to multi-objective optimization,
optimization for machine learning, robotics, health informatics, data analysis,
optimization and learning under uncertainty, and the Fourth Industrial Revolution.
Four special sessions were organized under the following topics: Trends in
Engineering Education, Optimization in Control Systems Design, Data Visualization
and Virtual Reality, and Measurements with the Internet of Things. The event had 52
accepted papers, among which 39 were full papers. All papers were carefully reviewed
and selected from 134 submissions. All the reviews were carefully carried out by a
Scientific Committee of 61 PhD researchers from 18 countries.
July 2021 Ana I. Pereira
Organization
General Chair
Ana Isabel Pereira Polytechnic Institute of Bragança, Portugal
Organizing Committee Chairs
Florbela P. Fernandes Polytechnic Institute of Bragança, Portugal
João Paulo Coelho Polytechnic Institute of Bragança, Portugal
João Paulo Teixeira Polytechnic Institute of Bragança, Portugal
M. Fátima Pacheco Polytechnic Institute of Bragança, Portugal
Paulo Alves Polytechnic Institute of Bragança, Portugal
Rui Pedro Lopes Polytechnic Institute of Bragança, Portugal
Scientific Committee
Ana Maria A. C. Rocha University of Minho, Portugal
Ana Paula Teixeira University of Trás-os-Montes and Alto Douro, Portugal
André Pinz Borges Federal University of Technology – Paraná, Brazil
Andrej Košir University of Ljubljana, Slovenia
Arnaldo Cândido Júnior Federal University of Technology – Paraná, Brazil
Bruno Bispo Federal University of Santa Catarina, Brazil
Carmen Galé University of Zaragoza, Spain
B. Rajesh Kanna Vellore Institute of Technology, India
C. Sweetlin Hemalatha Vellore Institute of Technology, India
Damir Vrančić Jozef Stefan Institute, Slovenia
Daiva Petkeviciute Kaunas University of Technology, Lithuania
Diamantino Silva Freitas University of Porto, Portugal
Esteban Clua Federal Fluminense University, Brazil
Eric Rogers University of Southampton, UK
Felipe Nascimento Martins Hanze University of Applied Sciences,
The Netherlands
Gaukhar Muratova Dulaty University, Kazakhstan
Gediminas Daukšys Kauno Technikos Kolegija, Lithuania
Glaucia Maria Bressan Federal University of Technology – Paraná, Brazil
Humberto Rocha University of Coimbra, Portugal
José Boaventura-Cunha University of Trás-os-Montes and Alto Douro, Portugal
José Lima Polytechnic Institute of Bragança, Portugal
Joseane Pontes Federal University of Technology – Ponta Grossa,
Brazil
Juani Lopéz Redondo University of Almeria, Spain
Jorge Ribeiro Polytechnic Institute of Viana do Castelo, Portugal
José Ramos NOVA University Lisbon, Portugal
Kristina Sutiene Kaunas University of Technology, Lithuania
Lidia Sánchez University of León, Spain
Lino Costa University of Minho, Portugal
Luís Coelho Polytechnic Institute of Porto, Portugal
Luca Spalazzi Marche Polytechnic University, Italy
Manuel Castejón Limas University of León, Spain
Marc Jungers Université de Lorraine, France
Maria do Rosário de Pinho University of Porto, Portugal
Marco Aurélio Wehrmeister Federal University of Technology – Paraná, Brazil
Mikulas Huba Slovak University of Technology in Bratislava,
Slovakia
Michał Podpora Opole University of Technology, Poland
Miguel Ángel Prada University of León, Spain
Nicolae Cleju Technical University of Iasi, Romania
Paulo Lopes dos Santos University of Porto, Portugal
Paulo Moura Oliveira University of Trás-os-Montes and Alto Douro, Portugal
Pavel Pakshin Nizhny Novgorod State Technical University, Russia
Pedro Luiz de Paula Filho Federal University of Technology – Paraná, Brazil
Pedro Miguel Rodrigues Catholic University of Portugal, Portugal
Pedro Morais Polytechnic Institute of Cávado e Ave, Portugal
Pedro Pinto Polytechnic Institute of Viana do Castelo, Portugal
Rudolf Rabenstein Friedrich-Alexander-University of Erlangen-Nürnberg,
Germany
Sani Rutz da Silva Federal University of Technology – Paraná, Brazil
Sara Paiva Polytechnic Institute of Viana do Castelo, Portugal
Sofia Rodrigues Polytechnic Institute of Viana do Castelo, Portugal
Sławomir Stępień Poznan University of Technology, Poland
Teresa Paula Perdicoulis University of Trás-os-Montes and Alto Douro, Portugal
Toma Roncevic University of Split, Croatia
Vitor Duarte dos Santos NOVA University Lisbon, Portugal
Wojciech Paszke University of Zielona Gora, Poland
Wojciech Giernacki Poznan University of Technology, Poland
Contents
Optimization Theory
Dynamic Response Surface Method Combined with Genetic Algorithm
to Optimize Extraction Process Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Laires A. Lima, Ana I. Pereira, Clara B. Vaz, Olga Ferreira,
Márcio Carocho, and Lillian Barros
Towards a High-Performance Implementation of the MCSFilter
Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Leonardo Araújo, Maria F. Pacheco, José Rufino,
and Florbela P. Fernandes
On the Performance of the OrthoMads Algorithm on Continuous
and Mixed-Integer Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Marie-Ange Dahito, Laurent Genest, Alessandro Maddaloni, and José Neto
A Look-Ahead Based Meta-heuristics for Optimizing Continuous
Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Thomas Nordli and Noureddine Bouhmala
Inverse Optimization for Warehouse Management . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Hannu Rummukainen
Model-Agnostic Multi-objective Approach for the Evolutionary Discovery
of Mathematical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Alexander Hvatov, Mikhail Maslyaev, Iana S. Polonskaya,
Mikhail Sarafanov, Mark Merezhnikov, and Nikolay O. Nikitin
A Simple Clustering Algorithm Based on Weighted Expected Distances . . . . . . . 86
Ana Maria A. C. Rocha, M. Fernanda P. Costa,
and Edite M. G. P. Fernandes
Optimization of Wind Turbines Placement in Offshore Wind Farms: Wake
Effects Concerns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
José Baptista, Filipe Lima, and Adelaide Cerveira
A Simulation Tool for Optimizing a 3D Spray Painting System . . . . . . . . . . . . . . . 110
João Casanova, José Lima, and Paulo Costa
Optimization of Glottal Onset Peak Detection Algorithm for Accurate
Jitter Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Joana Fernandes, Pedro Henrique Borghi, Diamantino Silva Freitas,
and João Paulo Teixeira
Searching the Optimal Parameters of a 3D Scanner Through Particle
Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
João Braun, José Lima, Ana I. Pereira, Cláudia Rocha, and Paulo Costa
Optimal Sizing of a Hybrid Energy System Based on Renewable Energy
Using Evolutionary Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Yahia Amoura, Ângela P. Ferreira, José Lima, and Ana I. Pereira
Robotics
Human Detector Smart Sensor for Autonomous Disinfection Mobile Robot . . . . 171
Hugo Mendonça, José Lima, Paulo Costa, António Paulo Moreira,
and Filipe Santos
Multiple Mobile Robots Scheduling Based on Simulated Annealing
Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Diogo Matos, Pedro Costa, José Lima, and António Valente
Multi AGV Industrial Supervisory System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Ana Cruz, Diogo Matos, José Lima, Paulo Costa, and Pedro Costa
Dual Coulomb Counting Extended Kalman Filter for Battery SOC
Determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Arezki A. Chellal, José Lima, José Gonçalves, and Hicham Megnafi
Sensor Fusion for Mobile Robot Localization Using Extended Kalman
Filter, UWB ToF and ArUco Markers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Sílvia Faria, José Lima, and Paulo Costa
Deep Reinforcement Learning Applied to a Robotic Pick-and-Place
Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Natanael Magno Gomes, Felipe N. Martins, José Lima,
and Heinrich Wörtche
Measurements with the Internet of Things
An IoT Approach for Animals Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Matheus Zorawski, Thadeu Brito, José Castro, João Paulo Castro,
Marina Castro, and José Lima
Optimizing Data Transmission in a Wireless Sensor Network Based
on LoRaWAN Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Thadeu Brito, Matheus Zorawski, João Mendes,
Beatriz Flamia Azevedo, Ana I. Pereira, José Lima, and Paulo Costa
Indoor Location Estimation Based on Diffused Beacon Network . . . . . . . . . . . . . 294
André Mendes and Miguel Diaz-Cacho
SMACovid-19 – Autonomous Monitoring System for Covid-19 . . . . . . . . . . . . . . 309
Rui Fernandes and José Barbosa
Optimization in Control Systems Design
Economic Burden of Personal Protective Strategies for Dengue Disease:
an Optimal Control Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Artur M. C. Brito da Cruz and Helena Sofia Rodrigues
ERP Business Speed – A Measuring Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Zornitsa Yordanova
BELBIC Based Step-Down Controller Design Using PSO . . . . . . . . . . . . . . . . . . . 345
João Paulo Coelho, Manuel Braz-César, and José Gonçalves
Robotic Welding Optimization Using A* Parallel Path Planning . . . . . . . . . . . . . . 357
Tiago Couto, Pedro Costa, Pedro Malaca, Daniel Marques,
and Pedro Tavares
Deep Learning
Leaf-Based Species Recognition Using Convolutional Neural Networks . . . . . . . 367
Willian Oliveira Pires, Ricardo Corso Fernandes Jr.,
Pedro Luiz de Paula Filho, Arnaldo Candido Junior,
and João Paulo Teixeira
Deep Learning Recognition of a Large Number of Pollen Grain Types . . . . . . . . 381
Fernando C. Monteiro, Cristina M. Pinto, and José Rufino
Predicting Canine Hip Dysplasia in X-Ray Images Using Deep Learning . . . . . . 393
Daniel Adorno Gomes, Maria Sofia Alves-Pimenta, Mário Ginja,
and Vitor Filipe
Convergence of the Reinforcement Learning Mechanism Applied
to the Channel Detection Sequence Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
André Mendes
Approaches to Classify Knee Osteoarthritis Using Biomechanical Data . . . . . . . 417
Tiago Franco, P. R. Henriques, P. Alves, and M. J. Varanda Pereira
Artificial Intelligence Architecture Based on Planar LiDAR Scan Data
to Detect Energy Pylon Structures in a UAV Autonomous Detailed
Inspection Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Matheus F. Ferraz, Luciano B. Júnior, Aroldo S. K. Komori,
Lucas C. Rech, Guilherme H. T. Schneider, Guido S. Berger,
Álvaro R. Cantieri, José Lima, and Marco A. Wehrmeister
Data Visualization and Virtual Reality
Machine Vision to Empower an Intelligent Personal Assistant for Assembly
Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Matheus Talacio, Gustavo Funchal, Victória Melo, Luis Piardi,
Marcos Vallim, and Paulo Leitao
Smart River Platform - River Quality Monitoring and Environmental
Awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Kenedy P. Cabanga, Edmilson V. Soares, Lucas C. Viveiros,
Estefânia Gonçalves, Ivone Fachada, José Lima, and Ana I. Pereira
Health Informatics
Analysis of the Middle and Long Latency ERP Components
in Schizophrenia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Miguel Rocha e Costa, Felipe Teixeira, and João Paulo Teixeira
Feature Selection Optimization for Breast Cancer Diagnosis . . . . . . . . . . . . . . . . . 492
Ana Rita Antunes, Marina A. Matos, Lino A. Costa,
Ana Maria A. C. Rocha, and Ana Cristina Braga
Cluster Analysis for Breast Cancer Patterns Identification . . . . . . . . . . . . . . . . . . . 507
Beatriz Flamia Azevedo, Filipe Alves, Ana Maria A. C. Rocha,
and Ana I. Pereira
Overview of Robotic Based System for Rehabilitation and Healthcare . . . . . . . . . 515
Arezki A. Chellal, José Lima, Florbela P. Fernandes, José Gonçalves,
Maria F. Pacheco, and Fernando C. Monteiro
Understanding Health Care Access in Higher Education Students . . . . . . . . . . . . . 531
Filipe J. A. Vaz, Clara B. Vaz, and Luís C. D. Cadinha
Using Natural Language Processing for Phishing Detection . . . . . . . . . . . . . . . . . . 540
Richard Adolph Aires Jonker, Roshan Poudel, Tiago Pedrosa,
and Rui Pedro Lopes
Data Analysis
A Panel Data Analysis of the Electric Mobility Deployment in the European
Union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Sarah B. Gruetzmacher, Clara B. Vaz, and Ângela P. Ferreira
Data Analysis of Workplace Accidents - A Case Study . . . . . . . . . . . . . . . . . . . . . . 571
Inês P. Sena, João Braun, and Ana I. Pereira
Application of Benford’s Law to the Tourism Demand: The Case
of the Island of Sal, Cape Verde . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Gilberto A. Neves, Catarina S. Nunes, and Paula Odete Fernandes
Volunteering Motivations in Humanitarian Logistics: A Case Study
in the Food Bank of Viana do Castelo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Ana Rita Vasconcelos, Ângela Silva, and Helena Sofia Rodrigues
Occupational Behaviour Study in the Retail Sector . . . . . . . . . . . . . . . . . . . . . . . . . 617
Inês P. Sena, Florbela P. Fernandes, Maria F. Pacheco,
Abel A. C. Pires, Jaime P. Maia, and Ana I. Pereira
A Scalable, Real-Time Packet Capturing Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 630
Rafael Oliveira, João P. Almeida, Isabel Praça, Rui Pedro Lopes,
and Tiago Pedrosa
Trends in Engineering Education
Assessing Gamification Effectiveness in Education Through Analytics . . . . . . . . 641
Zornitsa Yordanova
Real Airplane Cockpit Development Applied to Engineering Education:
A Project Based Learning Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
José Carvalho, André Mendes, Thadeu Brito, and José Lima
Azbot-1C: An Educational Robot Prototype for Learning Mathematical
Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Francisco Pedro, José Cascalho, Paulo Medeiros, Paulo Novo,
Matthias Funk, Alberto Ramos, Armando Mendes, and José Lima
Towards Distance Teaching: A Remote Laboratory Approach for Modbus
and IoT Experiencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
José Carvalho, André Mendes, Thadeu Brito, and José Lima
Evaluation of Soft Skills Through Educational Testbed 4.0 . . . . . . . . . . . . . . . . . . 678
Leonardo Breno Pessoa da Silva, Bernado Perrota Barreto,
Joseane Pontes, Fernanda Tavares Treinta,
Luis Mauricio Martins de Resende, and Rui Tadashi Yoshino
Collaborative Learning Platform Using Learning Optimized Algorithms . . . . . . . 691
Beatriz Flamia Azevedo, Yahia Amoura, Gauhar Kantayeva,
Maria F. Pacheco, Ana I. Pereira, and Florbela P. Fernandes
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
Optimization Theory
Dynamic Response Surface Method
Combined with Genetic Algorithm
to Optimize Extraction Process Problem
Laires A. Lima1,2(B), Ana I. Pereira1, Clara B. Vaz1, Olga Ferreira2, Márcio Carocho2, and Lillian Barros2

1 Research Center in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal
{laireslima,apereira,clvaz}@ipb.pt
2 Centro de Investigação de Montanha (CIMO), Instituto Politécnico de Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal
{oferreira,mcarocho,lillian}@ipb.pt
Abstract. This study aims to find and develop an appropriate optimization
approach to reduce the time and labor employed throughout a given chemical
process, which could be decisive for quality management. In this context, this
work presents a comparative study of two optimization approaches using real
experimental data from the chemical engineering area, reported in a previous
study [4]. The first approach is based on the traditional response surface method
and the second approach combines the response surface method with a genetic
algorithm and data mining. The main objective is to optimize the surface function
based on three variables, using hybrid genetic algorithms combined with cluster
analysis to reduce the number of experiments and to find the closest value to
the optimum within the established restrictions. The proposed strategy has proven
to be promising, since the optimal value was achieved without requiring derivatives,
unlike conventional methods, and fewer experiments were required to find the
optimal solution in comparison to the previous work using the traditional
response surface method.
Keywords: Optimization · Genetic algorithm · Cluster analysis
1 Introduction
Search and optimization methods have several principles, the most relevant being:
the search space, where the possibilities for solving the problem in question
are considered; the objective function (or cost function); and the codification of
the problem, that is, the way to evaluate an objective in the search space [1].
Conventional optimization techniques start with an initial value or vector
that is iteratively manipulated using some heuristic or deterministic process
directly associated with the problem to be solved. The great difficulty when
solving a problem using a stochastic method is that the number of possible
solutions grows at factorial speed, making it impossible to enumerate all possible
solutions of the problem [12]. Evolutionary computing techniques operate on a
population that changes in each iteration. Thus, they can search in different
regions of the feasible space, allocating an appropriate number of members to
search in different areas [12].
Considering the importance of predicting the behavior of analytical processes
and avoiding expensive procedures, this study aims to propose an alternative
for the optimization of multivariate problems, e.g. extraction processes of
high-value compounds from plant matrices. In the standard analytical approach, the
identification and quantification of phenolic compounds require expensive and
complex laboratory assays [6]. An alternative approach can be applied using
forecasting models from the Response Surface Method (RSM). This approach can
maximize the extraction yield of the target compounds while decreasing the cost
of the extraction process.
In this study, a comparative analysis between two optimization methodologies
(traditional RSM and dynamic RSM), developed in MATLAB® software
(version R2019a 9.6), that aim to maximize the heat-assisted extraction yield
and the phenolic compounds content in chestnut flower extracts is presented.
This paper is organized as follows. Section 2 describes the methods used to
evaluate multivariate problems involving optimization processes: Response
Surface Method (RSM), Hybrid Genetic Algorithm, Cluster Analysis and Bootstrap
Analysis. Sections 3 and 4 introduce the case study, consisting of the optimization
of the extraction yield and content of phenolic compounds in extracts of
chestnut flower by two different approaches: traditional RSM and dynamic RSM.
Section 5 includes the numerical results obtained by both methods and their
comparative evaluation. Finally, Sect. 6 presents the conclusions and future work.
2 Methods, Approaches and Techniques
Regarding optimization problems, some methods are used more frequently (the
traditional RSM, for example) due to their applicability and suitability to different
cases. For the design of the dynamic RSM, conventional methods based on the
Genetic Algorithm, combined with clustering and bootstrap analysis, were examined
to evaluate which aspects could be incorporated into the algorithm
developed in this work. The key concepts for the dynamic RSM are presented below.
2.1 Response Surface Method
The Response Surface Method is a tool introduced in the early 1950s by Box
and Wilson, which covers a collection of mathematical and statistical techniques
useful for approximating and optimizing stochastic models [11]. It is a widely
used optimization method, which applies statistical techniques based on special
factorial designs [2,3]. Its scientific approach estimates the conditions that achieve
the highest or lowest required response value, through the design of the
response surface based on Taylor series expansions [8]. RSM provides a great amount
of information about the experiments, such as the duration of the experiments and
the influence of each dependent variable; one of its largest advantages is obtaining
the general information necessary for planning the process and designing the
experiments [8].
2.2 Hybrid Genetic Algorithm
The genetic algorithm is a stochastic optimization method based on the
evolutionary process of natural selection and genetic dynamics. The method seeks to
combine the survival of the fittest among the string structures with an exchange
of random, but structured, information to form an ideal solution [7]. Although
they are randomized, GA search strategies are able to explore several regions
of the feasible search space at a time. In this way, along the iterations,
a unique search path is built, as new solutions are obtained through the
combination of previous solutions [1]. Optimization problems with restrictions can
influence the sampling capacity of a genetic algorithm due to the population
limits considered. Incorporating a local optimization method into the GA can help
overcome most of the obstacles that arise as a result of finite population sizes,
for example, the accumulation of stochastic errors that generates genetic drift
problems [1,7].
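As a concrete illustration, a hybrid GA of this kind can be sketched with MATLAB's Global Optimization Toolbox, where ga performs the evolutionary search and a derivative-free local method (here patternsearch) refines the final result; the objective function and bounds below are illustrative placeholders, not the ones of the case study:

% Minimal sketch of a hybrid GA (Global Optimization Toolbox).
fobj = @(x) 0.5*sum(x.^4 - 16*x.^2 + 5*x);  % placeholder multimodal function
n = 2;                                      % number of variables
lb = -5*ones(1,n); ub = 5*ones(1,n);        % simple bounds
% 'HybridFcn' runs a local derivative-free search after ga terminates,
% mitigating the effects of a finite population size.
opts = optimoptions('ga', 'HybridFcn', @patternsearch);
[xbest, fbest] = ga(fobj, n, [], [], [], [], lb, ub, [], opts);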
2.3 Cluster Analysis
Cluster algorithms are often used to group large data sets and play an important
role in pattern recognition and in the mining of large data arrays. The k-means and
k-medoids strategies work by partitioning the data into k mutually exclusive
clusters, as demonstrated in Fig. 1. These techniques assign each observation to a
cluster, minimizing the distance from the data point to the average (k-means)
or median (k-medoids) location of its assigned cluster [10].
Fig. 1. Mean and medoid in 2D space representation. In both figures, the data are
represented by blue dots, the rightmost point being an outlier, and the red point
represents the center found by the k-means or k-medoids method. Adapted from Jin
and Han (2011) [10] (Color figure online).
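As an illustration, grouping the optimal points returned by repeated runs into k clusters can be sketched with the toolbox functions kmeans and kmedoids; the data matrix X below is a synthetic placeholder with one observation per row:

% Minimal sketch: cluster points into k = 3 groups (Statistics and
% Machine Learning Toolbox). X is a placeholder: one observation per row.
X = [randn(30,2); randn(30,2)+5; randn(30,2)-5];
k = 3;
[idx_mean, C_mean] = kmeans(X, k);    % centers minimize squared distances
[idx_med,  C_med ] = kmedoids(X, k);  % centers are actual data points,
                                      % hence less sensitive to outliers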
2.4 Bootstrap Analysis
The idea of bootstrap analysis is to mimic the sampling distribution of the
statistic of interest through the use of many resamples, drawn with replacement
from the original sample [5]. In this work, the bootstrap analysis enables the
handling of the variability of the optimal solutions derived from the cluster
analysis. Thus, the bootstrap analysis is used to estimate the confidence interval
of the statistic of interest, allowing subsequent comparison with the results
obtained by traditional methods.
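A minimal sketch of this idea, assuming a placeholder vector topt of optimal values collected over repeated runs, uses bootstrp with 1000 resamples (the number used in Sect. 5) and a two-tailed 95% percentile interval:

% Minimal sketch: bootstrap confidence interval for the mean (Statistics
% and Machine Learning Toolbox). topt is a placeholder sample of optima.
topt  = 45 + 0.5*randn(100,1);          % e.g. optimal yields from 100 runs
bstat = bootstrp(1000, @mean, topt);    % 1000 resamples with replacement
ci    = prctile(bstat, [2.5 97.5]);     % two-tailed 95% interval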
3 Case Study
This work presents a comparative analysis between two methodologies for
optimizing the total phenolic content in extracts of chestnut flower, developed in
MATLAB® software. The natural values of the dependent variables in the
extraction - t, time in minutes; T, temperature in °C; and S, organic solvent
content in %v/v of ethanol - were coded based on a Central Composite Circumscribed
Design (CCCD), and the result was based on extraction yield (Y, expressed in
percentage of dry extract) and total phenolic content (Phe, expressed in mg/g
of dry extract), as shown in Table 1.
The experimental data presented were cordially provided by the Mountain
Research Center - CIMO (Bragança, Portugal) [4].
The CCCD design selected for the original experimental study [4] is based
on a cube circumscribed to a sphere in which the vertices are at α distance from
the center, with 5 levels for each factor (t, T, and S). In this case, the α values
vary between −1.68 and 1.68, and correspond to each factor level, as described
in Table 2.
4 Data Analysis
In this section, the two RSM optimization methods (traditional and dynamic)
will be discussed in detail, along with the results obtained from both methods.
4.1 Traditional RSM
In the original experiment, a five-level Central Composite Circumscribed Design
(CCCD) coupled with RSM was built to optimize the variables for the male
chestnut flowers. For the optimization, a simplex method developed ad hoc was
used to optimize the nonlinear solutions obtained by a regression model, maximizing
the response, as described in the flowchart in Fig. 2.
Through the traditional RSM, the authors approximated the surface response
by a second-order polynomial function [4]:

Y = b0 + Σ_{i=1}^{n} bi Xi + Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} bij Xi Xj + Σ_{i=1}^{n} bii Xi^2        (1)
Table 1. Variables and natural values of the process parameters for the extraction of
chestnut flowers [4].

t (min)  T (°C)  S (%EtOH)  Yield (%R)  Phenolic cont. (mg/g dry weight)
40.30 37.20 20.30 38.12 36.18
40.30 37.20 79.70 26.73 11.05
40.30 72.80 20.30 42.83 36.66
40.30 72.80 79.70 35.94 22.09
99.70 37.20 20.30 32.77 35.55
99.70 37.20 79.70 32.99 8.85
99.70 72.80 20.30 42.55 29.61
99.70 72.80 79.70 35.52 11.10
120.00 55.00 50.00 42.41 14.56
20.00 55.00 50.00 35.45 24.08
70.00 25.00 50.00 38.82 12.64
70.00 85.00 50.00 42.06 17.41
70.00 55.00 0.00 35.24 34.58
70.00 55.00 100.00 15.61 12.01
20.00 25.00 0.00 22.30 59.56
20.00 25.00 100.00 8.02 15.57
20.00 85.00 0.00 34.81 42.49
20.00 85.00 100.00 18.71 50.93
120.00 25.00 0.00 31.44 40.82
120.00 25.00 100.00 15.33 8.79
120.00 85.00 0.00 34.96 45.61
120.00 85.00 100.00 32.70 21.89
70.00 55.00 50.00 41.03 14.62
Table 2. Natural and coded values of the extraction variables [4].

t (min)  T (°C)  S (%)   Coded value
20.0     25.0    0.0     −1.68
40.3     37.2    20.0    −1.00
70       55      50.0    0.00
99.7     72.8    80.0    1.00
120      85      100.0   1.68
Fig. 2. Flowchart of traditional RSM modeling approach for optimal design.
where, for i = 0, ..., n and j = 1, ..., n, bi stand for the linear coefficients, bij
correspond to the interaction coefficients, bii are the quadratic coefficients
and, finally, Xi are the independent variables, associated to t, T and S, with n
being the total number of variables.
In the previous study using the traditional RSM, Eq. (1) represented coherently
the behaviour of the extraction process of the target compounds from chestnut
flowers [4]. In order to compare the optimization methods and to avoid data
conflicts, the estimation of the cost function was done based on a multivariate
regression model.
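A least-squares fit of the second-order model (1) can be sketched as follows, assuming X holds the 23 experimental settings (t, T, S) of Table 1 and y the corresponding responses; x2fx (Statistics and Machine Learning Toolbox) builds the constant, linear, interaction and squared columns of the design matrix:

% Minimal sketch: fit the second-order polynomial (1) by least squares.
% X: 23-by-3 matrix with columns (t, T, S); y: 23-by-1 response vector
% (extraction yield or total phenolic content) from Table 1.
D = x2fx(X, 'quadratic');  % columns: 1, Xi, Xi*Xj (i<j), Xi^2
b = D \ y;                 % coefficients b0, bi, bij, bii of Eq. (1)
yhat = D * b;              % fitted responses of the regression model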
4.2 Dynamic RSM
For the proposed optimization method, briefly described in the flowchart shown
in Fig. 3, the structure of the design of experiments was maintained, as well
as the restrictions imposed on the responses and variables to avoid undesirable
solutions.
Fig. 3. Flowchart of dynamic RSM integrating genetic algorithm and cluster analysis
to the process.
The dynamic RSM method was built in MATLAB® using a programming
code developed by the authors, coupled with pre-existing functions from the
statistical and optimization toolboxes of the software. The algorithm starts by
generating a set of 15 random combinations of the levels from the combinatorial
analysis. From this initial experimental data, a multivariate regression model is
calculated, this model being the objective function of the problem. Thereafter,
a built-in GA-based solver is used to solve the optimization problem. The
optimal combination is identified and it is used to update the objective function.
The process stops when no new optimal solution is identified.
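A minimal sketch of this loop, as we read it, is given below; run_experiment is a hypothetical stand-in for the laboratory assay returning the measured response at given (t, T, S) settings, and tol is an assumed tolerance for declaring that no new optimum was found:

% Minimal sketch of the dynamic RSM loop (ga requires the Global
% Optimization Toolbox). run_experiment is a hypothetical placeholder.
lv_t = [20; 40.3; 70; 99.7; 120];       % factor levels from Table 2
lv_T = [25; 37.2; 55; 72.8; 85];
lv_S = [0; 20; 50; 80; 100];
idx  = randi(5, 15, 3);                 % 15 random level combinations
X    = [lv_t(idx(:,1)), lv_T(idx(:,2)), lv_S(idx(:,3))];
y    = run_experiment(X);               % initial experimental data
lb = [20 25 0]; ub = [120 85 100]; tol = 1e-3;
xprev = inf(1,3);
while true
    b = x2fx(X, 'quadratic') \ y;                  % regression model
    model = @(x) -(x2fx(x, 'quadratic') * b);      % negated: ga minimizes
    xopt  = ga(model, 3, [], [], [], [], lb, ub);  % GA-based solver
    if norm(xopt - xprev) < tol, break; end        % no new optimum found
    X = [X; xopt]; y = [y; run_experiment(xopt)];  % add the new point
    xprev = xopt;
end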
Considering the stochastic nature of this case study, clustering analysis is
used to identify the best candidate optimal solution. In order to handle the
variability of the achieved optimal solution, the bootstrap method is used to
estimate the confidence interval at 95%.
5 Numerical Results
The study using the traditional RSM returned the following optimal conditions
for maximum yield: 120.0 min, 85.0 °C, and 44.5% of ethanol in the solvent,
producing 48.87% of dry extract. For the total phenolic content, the optimal
conditions were: 20.0 min, 25.0 °C, and 0.0% of ethanol in the solvent, producing
55.37 mg/g of dry extract. These data are displayed in Table 3.
Table 3. Optimal responses and respective conditions using traditional and dynamic
RSM based on confidence intervals at 95%.

                       Method           t (min)       T (°C)       S (%)        Response
Extraction yield (%)   Traditional RSM  120.0 ± 12.4  85.0 ± 6.7   44.5 ± 9.7   48.87
                       Dynamic RSM      118.5 ± 1.4   84.07 ± 0.9  46.1 ± 0.85  45.87
Total phenolics (Phe)  Traditional RSM  20.0 ± 3.7    25.0 ± 5.7   0.0 ± 8.7    55.37
                       Dynamic RSM      20.4 ± 1.5    25.1 ± 1.97  0.05 ± 0.05  55.64
For the implementation of dynamic RSM in this case study, 100 runs were
carried out to evaluate the effectiveness of the method. For the yield, the
estimated optimal conditions were: 118.5 min, 84.1 °C, and 46.1% of ethanol in the
solvent, producing 45.87% of dry extract. In this case, the obtained optimal
conditions for time and temperature were in accordance with approximately 80% of
the tests.
For the total phenolic content, the optimal conditions were: 20.4 min, 25.1 °C,
and 0.05% of ethanol in the solvent, producing 55.64 mg/g of dry extract.
The results were very similar to the previous report with the same data [4].
The clustering analysis for each response variable was performed considering
the means (Figs. 4a and 5a) and the medoids (Figs. 4b and 5b) of the output
population (optimal responses). The bootstrap analysis makes the inference
concerning the achieved results, which are represented graphically in terms of means
in Figs. 4c and 5c, and in terms of medoids in Figs. 4d and 5d.
The box plots of the groups of optimal responses from the dynamic RSM
displayed in Fig. 6 show that the variance within each group is small, given that
the spread within each set of responses is very narrow. The histograms
concerning the set of dynamic RSM responses and the bootstrap distribution of
the mean (1000 resamples) are shown in Figs. 7 and 8.
Fig. 4. Clustering analysis of the outputs from the extraction yield optimization using
dynamic RSM: (a) extraction yield responses and k-means; (b) extraction yield responses
and k-medoids; (c) extraction yield bootstrap output and k-means; (d) extraction yield
bootstrap output and k-medoids. The responses are clustered in 3 distinct groups.
Fig. 5. Clustering analysis of the outputs from the total phenolic content optimization
using dynamic RSM: (a) total phenolic content responses and k-means; (b) total phenolic
content responses and k-medoids; (c) total phenolic content bootstrap output and
k-means; (d) total phenolic content bootstrap output and k-medoids. The responses are
clustered in 3 distinct groups.
Fig. 6. Box plot of the dynamic RSM outputs for the extraction yield and total phenolic
content before bootstrap analysis, respectively.
Fig. 7. Histograms of the extraction data (extraction yield) and the bootstrap means,
respectively.
Fig. 8. Histograms of the extraction data (total phenolic content) and the bootstrap
means, respectively.
The results obtained in this work are satisfactory, since they were analogous
for both methods, even though the dynamic RSM took only 15 to 18 experimental
points to find the optimal coordinates. Some authors use designs of experiments
involving the traditional RSM containing 20 different combinations, including the
repetition of the centroid [4]. However, in studies involving recent data or the
absence of complementary data, evaluations of the influence of the parameters
and of their ranges are essential to obtain consistent results, making it necessary
to run about 30 experimental points for the optimization. Considering these cases,
the dynamic RSM method proposes a different, competitive, and economical
approach, in which fewer points are evaluated to obtain the maximum response.
Genetic algorithms have been proving their efficiency in the search for
optimal solutions in a wide variety of problems, given that they do not have
some limitations found in traditional search methodologies, such as the requirement
of a derivative function [9]. The GA is attractive for identifying the
global solution of a problem. Considering the stochastic problem presented in
this work, the association of the genetic algorithm with k-means/k-medoids as
clustering algorithms obtained satisfactory results. This solution can be used for
problems involving small-scale data, since the GA manages to gather the best data
for optimization through its evolutionary method, while k-means or k-medoids
group the optimum points.
In addition to the clustering analysis, bootstrapping was also applied, in which
the sampling distribution of the statistic of interest is simulated through the use
of many resamples, with replacement, of the original sample, thus enabling
statistical inference. Bootstrapping was used to calculate the confidence
intervals in order to obtain unbiased estimates from the proposed method. In this case,
the confidence interval was calculated at the 95% level (two-tailed), since the
same percentage was adopted by Caleja et al. (2019) [4]. It was observed that the
dynamic RSM approach also enables the estimation of confidence intervals with
a smaller margin of error than the traditional RSM approach, allowing the optimum
conditions of the experiment to be defined more precisely.
6 Conclusion and Future Work
For the presented case study, applying the dynamic RSM using the Genetic Algorithm
coupled with clustering analysis returned positive results, in accordance with
previously published data [4]. Both methods seem attractive for the resolution
of this particular case concerning the optimization of the extraction of target
compounds from plant matrices. Therefore, the smaller number of experiments
required by the dynamic RSM can make it an interesting approach for future studies.
In brief, a smaller set of points was obtained that represents the best optimization
domain, thus eliminating the need for a large number of costly laboratory
experiments. The next steps involve the improvement of the dynamic RSM
algorithm and the application of the proposed method in other areas of study.
Acknowledgments. The authors are grateful to FCT for financial support through
national funds FCT/MCTES UIDB/00690/2020 to CIMO and UIDB/05757/2020.
M. Carocho also thanks FCT through the individual scientific employment program-
contract (CEECIND/00831/2018).
References
1. Beasley, D., Bull, D.R., Martin, R.R.: An overview of genetic algorithms: Part 1,
fundamentals. Univ. Comput. 2(15), 1–16 (1993)
2. Box, G.E.P., Behnken, D.W.: Simplex-sum designs: a class of second order rotatable
designs derivable from those of first order. Ann. Math. Stat. 31(4), 838–864 (1960)
3. Box, G.E.P., Wilson, K.B.: On the experimental attainment of optimum conditions.
J. Roy. Stat. Soc. Ser. B (Methodol.) 13(1), 1–38 (1951)
4. Caleja C., Barros L., Prieto M. A., Bento A., Oliveira M.B.P., Ferreira, I.C.F.R.:
Development of a natural preservative obtained from male chestnut flowers: opti-
mization of a heat-assisted extraction technique. In: Food and Function, vol. 10,
pp. 1352–1363 (2019)
5. Efron, B., Tibshirani, R.J.: An introduction to the Bootstrap, 1st edn. Wiley, New
York (1994)
6. Eftekhari, M., Yadollahi, A., Ahmadi, H., Shojaeiyan, A., Ayyari, M.: Development
of an artificial neural network as a tool for predicting the targeted phenolic profile
of grapevine (Vitis vinifera) foliar wastes. Front. Plant Sci. 9, 837 (2018)
7. El-Mihoub, T.A., Hopgood, A.A., Nolle, L., Battersby, A.: Hybrid genetic algo-
rithms: a review. Eng. Lett. 11, 124–137 (2006)
8. Geiger, E.: Statistical methods for fermentation optimization. In: Vogel H.C.,
Todaro C.M., (eds.) Fermentation and Biochemical Engineering Handbook: Prin-
ciples, Process Design, and Equipment, 3rd edn, pp. 415–422. Elsevier Inc. (2014)
9. Härdle, W.K., Simar, L.: Applied Multivariate Statistical Analysis, 4th edn.
Springer, Heidelberg (2019)
10. Jin, X., Han, J.: K-medoids clustering. In: Sammut, C., Webb, G.I. (eds.) Ency-
clopedia of Machine Learning, pp. 564–565. Springer, Boston (2011)
11. Şenaras, A.E.: Parameter optimization using the surface response technique in
automated guided vehicles. In: Sustainable Engineering Products and Manufac-
turing Technologies, pp. 187–197. Academic Press (2019)
12. Schneider, J., Kirkpatrick, S.: Genetic algorithms and evolution strategies. In:
Stochastic Optimization, vol. 1, pp. 157–168, Springer-Verlag, Heidelberg (2006)
Towards a High-Performance
Implementation of the MCSFilter
Optimization Algorithm
Leonardo Araújo1,2, Maria F. Pacheco2, José Rufino2, and Florbela P. Fernandes2(B)

1 Universidade Tecnológica Federal do Paraná, Campus de Ponta Grossa, Ponta Grossa 84017-220, Brazil
2 Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
a46677@alunos.ipb.pt, {pacheco,rufino,fflor}@ipb.pt
Abstract. Multistart Coordinate Search Filter (MCSFilter) is an optimization
method suitable to find all minimizers – both local and global – of a nonconvex
problem, with simple bounds or more generic constraints. Like many other
optimization algorithms, it may be used in industrial contexts, where execution
time may be critical in order to keep a production process within safe and
expected bounds. MCSFilter was first implemented in MATLAB and later in Java
(which introduced a significant performance gain). In this work, a comparison is
made between these two implementations and a novel one in C that aims at further
performance improvements. For the comparison, the problems addressed are bound
constrained, with small dimension (between 2 and 10) and multiple local and
global solutions. It is possible to conclude that the average execution time for
each problem is considerably smaller when using the Java and C implementations,
and that the current C implementation, though not yet fully optimized, already
exhibits a significant speedup.

Keywords: Optimization · MCSFilter method · MATLAB · C · Java ·
Performance
1 Introduction
The set of techniques and principles for solving quantitative problems known as
optimization has become increasingly important in a broad range of applications,
in areas of research as diverse as engineering, biology, economics, statistics or
physics. The application of the techniques and laws of optimization in these
(and other) areas not only provides resources to describe and solve the specific
problems that appear within the framework of each area, but it also provides the
opportunity for new advances and achievements in optimization theory and its
techniques [1,2,6,7].
In order to apply optimization techniques, problems can be formulated in
terms of an objective function that is to be maximized or minimized, a set of
variables and a set of constraints (restrictions on the values that the variables can
assume). The structure of these three items – objective function, variables
and constraints – determines different subfields of optimization theory: linear,
integer, stochastic, etc.; within each of these subfields, several lines of research
can be pursued. The size and complexity of the optimization problems that can be
dealt with has increased enormously with the improvement of the overall
performance of computers. As such, advances in optimization techniques have been
following progresses in computer science as well as in combinatorics, operations
research, control theory, approximation theory, routing in telecommunication
networks, image reconstruction and facility location, among other areas [14].
The need to keep up with the challenges of our rapidly changing society and
its digitalization means a continuing need to increase innovation and productivity
and to improve the performance of the industry sector, and it places very high
expectations on the progress and adaptation of the sophisticated optimization
techniques that are applied in the industrial context. Many of those problems can be
modelled as nonlinear programming problems [10–12] or mixed integer nonlinear
programming problems [5,8]. The urgency to quickly output solutions to difficult
multivariable problems leads to an increasing need to develop robust and fast
optimization algorithms. Considering that, for many problems, reliable
information about the derivative of the objective function is unavailable, it is important
to use a method that allows solving the problem without this information.
Algorithms that do not use derivatives are called derivative-free. The MCSFilter
method is such a method, being able to deal with discontinuous or
non-differentiable functions that often appear in many applications. It is also a
multilocal method, meaning that it finds all the minimizers, both local and global, and
it exhibits good results [9,10]. Moreover, a Java implementation was already used
to solve process engineering problems [4]. Considering that, from an industrial
point of view, execution time is of utmost importance, a novel C reimplementation,
aimed at increased performance, is currently under way, having reached a
stage at which it is already able to solve a broad set of problems with measurable
performance gains over the previous Java version. This paper presents the
results of a preliminary evaluation of the new C implementation of the MCSFilter
method against the previously developed versions (in MATLAB and Java).
The rest of this paper is organized as follows: in Sect. 2, the MCSFilter
algorithm is briefly described; in Sect. 3, the set of problems used to compare
the three implementations and the corresponding results are presented and analyzed.
Finally, in Sect. 4, conclusions and future work are addressed.
2 The Multistart Coordinate Search Filter Method
The MCSFilter algorithm was initially developed in [10], with the aim of finding
multiple solutions of nonconvex and nonlinear constrained optimization problems
of the following type:

min f(x)
s.t. gj(x) ≤ 0, j = 1, ..., m
     li ≤ xi ≤ ui, i = 1, ..., n        (1)

where f is the objective function, gj(x), j = 1, ..., m, are the constraint functions
and at least one of the functions f, gj : R^n → R is nonlinear; also, l and u are
the bounds and Ω = {x ∈ R^n : g(x) ≤ 0, l ≤ x ≤ u} is the feasible region.
This method has two main parts: i) the multistart part, related to the exploration
feature of the method, and ii) the coordinate search filter local search, related
to the exploitation of promising regions.
The MCSFilter method does not require any information about the derivatives
and is able to obtain all the solutions, both local and global, of a given nonconvex
optimization problem. This is an important asset of the method since, in industry
problems, it is often not possible to know the derivative functions; moreover,
a large number of real-life problems are nonlinear and nonconvex.
As already stated, the MCSFilter algorithm relies on a multistart strategy
and a local search repeatedly called inside the multistart. Briefly, the multistart
strategy is a stochastic algorithm that applies more than once a local search to
sample points aiming to converge to all the minimizers, local and global, of a
multimodal problem. When the local search is repeatedly applied, some of the
minimizers can be reached more than once. This leads to a waste of time since
these minimizers have already been determined. To avoid these situations, a
clustering technique based on computing the regions of attraction of previously
identified minimizers is used. Thus, if the initial point belongs to the region of
attraction of a previously detected minimizer, the local search procedure may
not be performed, since it would converge to this known minimizer.
Figure 1 illustrates the influence of the regions of attraction. The red/magenta
lines between the initial approximation and the minimizer represent a local
search that has been performed; the red line represents the first local search that
converged to a given minimizer; the white dashed line between the two points
represents a discarded local search, using the regions of attraction. Therefore,
this representation intends to show the regions of attraction of each minimizer
and the corresponding points around each one. These regions are dynamic in the
sense that they may change every time a new initial point is used [3].
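A simplified sketch of this test (omitting the ascent check and the acceptance probability used in Algorithm 2 below) could look as follows, with M holding the already known minimizers and R their current attraction radii:

% Simplified sketch of the region-of-attraction test. M is n-by-k with one
% known minimizer per column, R is 1-by-k with the attraction radii, and
% x (n-by-1) is a new randomly generated starting point.
d = vecnorm(M - x, 2, 1);    % distances from x to every known minimizer
[do, o] = min(d);
if do < R(o)
    % x likely lies in the attraction region of minimizer o: the local
    % search tends to be skipped (Algorithm 2 skips it with a probability
    % that grows with how often minimizer o has already been reached).
else
    % x is outside all known regions: launch the local search from x.
end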
The local search uses a derivative-free strategy that consists of a coordinate
search combined with a filter methodology, in order to generate a sequence of
approximate solutions that improve either the constraint violation or the
objective function relatively to the previous approximation; this strategy is called
the Coordinate Search Filter algorithm (CSFilter). In this way, the initial problem
is previously rewritten as the bi-objective problem (2):

min (θ(x), f(x))
s.t. x ∈ Ω        (2)
Fig. 1. Illustration of the multistart strategy with regions of attraction [3].
aiming to minimize, simultaneously, the objective function f(x) and the
non-negative continuous aggregate constraint violation function θ(x) defined in (3):

θ(x) = ||g(x)^+||^2 + ||(l − x)^+||^2 + ||(x − u)^+||^2        (3)

where v^+ = max{0, v}, componentwise. For more details about this method see [9,10].
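Reading each term of (3) as the squared Euclidean norm of a componentwise positive part, θ can be sketched in MATLAB as follows, with g a function handle returning the vector of constraint values g(x):

% Minimal sketch of the aggregate constraint violation (3).
function t = theta(x, g, l, u)
    gp = max(0, g(x));    % violated inequality constraints
    lp = max(0, l - x);   % violated lower bounds
    up = max(0, x - u);   % violated upper bounds
    t  = sum(gp.^2) + sum(lp.^2) + sum(up.^2);
end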
Algorithm 1 displays the steps of the CSFilter method. The stopping condition
of CSFilter is related to the step size α of the method (see condition (4)):

α < αmin        (4)

with αmin ≪ 1 and close to zero.
The main steps of the MCSFilter algorithm for finding global (as well as
local) solutions to problem (1) are shown in Algorithm 2.
The stopping condition of the MCSFilter algorithm is related to the number
of minimizers found and to the number of local searches that were applied in
the multistart strategy. Considering nl as the number of local searches used
and nm as the number of minimizers obtained, then Pmin = nm(nm + 1)/(nl(nl − 1)).
The MCSFilter algorithm stops when condition (5) is reached:

Pmin ≤ ε        (5)

where ε ≪ 1.
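As a worked example, with nm = 4 minimizers found after nl = 25 local searches, Pmin = (4 × 5)/(25 × 24) ≈ 0.033, which is still above the threshold ε = 10^−2 used in Sect. 3.2, so the sampling continues; in code:

% Stopping rule (5): stop the multistart when Pmin drops to epsilon or below.
nm = 4; nl = 25;                      % example values
Pmin = nm*(nm + 1)/(nl*(nl - 1));     % = 20/600, approx. 0.033
stop = (Pmin <= 1e-2);                % false here: keep sampling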
Algorithm 1. CSFilter algorithm
Require: x and parameter values, αmin; set x̃ = x, x_F^inf = x, z = x̃;
1: Initialize the filter; set α = min{1, 0.05 (Σ_{i=1}^{n} (ui − li))/n};
2: repeat
3:   Compute the trial approximations z_a^i = x̃ + α ei, for all ei ∈ D⊕;
4:   repeat
5:     Check acceptability of trial points z_a^i;
6:     if there are some z_a^i acceptable by the filter then
7:       Update the filter;
8:       Choose z_a^best; set z = x̃, x̃ = z_a^best; update x_F^inf if appropriate;
9:     else
10:      Compute the trial approximations z_a^i = x_F^inf + α ei, for all ei ∈ D⊕;
11:      Check acceptability of trial points z_a^i;
12:      if there are some z_a^i acceptable by the filter then
13:        Update the filter;
14:        Choose z_a^best; set z = x̃, x̃ = z_a^best; update x_F^inf if appropriate;
15:      else
16:        Set α = α/2;
17:      end if
18:    end if
19:  until a new trial z_a^best is acceptable
20: until α < αmin
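The generation of the trial approximations in line 3 (the 2n positive and negative coordinate directions of D⊕) and the step-size handling can be sketched as follows, assuming a current point xt and bound vectors l and u (all n-by-1 columns):

% Minimal sketch of one CSFilter poll step around the current point xt.
n = numel(xt);
alpha = min(1, 0.05*sum(u - l)/n);   % initial step size (line 1)
D  = [eye(n), -eye(n)];              % the 2n coordinate directions of D_plus
Za = xt + alpha*D;                   % trial points z_a^i = xt + alpha*e_i
% Each column of Za is then tested for acceptability by the filter with
% respect to (theta(x), f(x)); when no trial point is accepted, the step
% size is halved, until alpha < alpha_min:
alpha = alpha/2;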
In this preliminary work, the main goal is to compare the performance of
MCSFilter when bound constraint problems are addressed, using different
implementations of the algorithm: the original implementation in MATLAB [10],
a follow-up implementation in Java (already used to solve problems from the
Chemical Engineering area [3,4,13]), and a new implementation in C (evaluated
for the first time in this paper).
3 Computational Results
In order to compare the performance of the three implementations of the
MCSFilter optimization algorithm, a set of problems was chosen. The definition of
each problem (a total of 15 bound constraint problems) is given below, along
with the experimental conditions under which they were evaluated, as well as
the obtained results (both numerical and performance-related).
3.1 Benchmark Problems
The collection of problems was taken from [9] (and the references therein) and
all the fifteen problems in study are listed below. The problems were chosen in
such a way that different characteristics were addressed: they are multimodal
problems with more than one minimizer (actually, the number of minimizers
varies from 2 to 1024); they can have just one global minimizer or more than
one global minimizer; the dimension of the problems varies between 2 and 10.
Algorithm 2. MCSFilter algorithm
Require: Parameter values; set M* = ∅, k = 1, t = 1;
1: Randomly generate x ∈ [l, u]; compute Bmin = min_{i=1,...,n}{ui − li};
2: Compute m1 = CSFilter(x), R1 = ||x − m1||; set r1 = 1, M* = M* ∪ {m1};
3: while the stopping rule is not satisfied do
4:   Randomly generate x ∈ [l, u];
5:   Set o = arg min_{j=1,...,k} dj ≡ ||x − mj||;
6:   if do < Ro then
7:     if the direction from x to mo is ascent then
8:       Set prob = 1;
9:     else
10:      Compute prob = φ(do/Ro, ro);
11:    end if
12:  else
13:    Set prob = 1;
14:  end if
15:  if ζ < prob then
16:    Compute m = CSFilter(x); set t = t + 1;
17:    if ||m − mj|| > γ* Bmin, for all j = 1, ..., k then
18:      Set k = k + 1, mk = m, rk = 1, M* = M* ∪ {mk}; compute Rk = ||x − mk||;
19:    else
20:      Set Rl = max{Rl, ||x − ml||}; rl = rl + 1;
21:    end if
22:  else
23:    Set Ro = max{Ro, ||x − mo||}; ro = ro + 1;
24:  end if
25: end while
– Problem (P1)

min f(x) ≡ (x2 − (5.1/(4π^2)) x1^2 + (5/π) x1 − 6)^2 + 10 (1 − 1/(8π)) cos(x1) + 10
s.t. −5 ≤ x1 ≤ 10, 0 ≤ x2 ≤ 15

• known global minimum: f* = 0.39789.

– Problem (P2)

min f(x) ≡ (4 − 2.1 x1^2 + x1^4/3) x1^2 + x1 x2 − 4 (1 − x2^2) x2^2
s.t. −2 ≤ xi ≤ 2, i = 1, 2

• known global minimum: f* = −1.03160.

– Problem (P3)

min f(x) ≡ Σ_{i=1}^{n} [ sin(xi) + sin(2xi/3) ]
s.t. 3 ≤ xi ≤ 13, i = 1, 2

• known global minimum: f* = −2.4319.
– Problem (P4)

min f(x) ≡ (1/2) Σ_{i=1}^{2} (xi^4 − 16 xi^2 + 5 xi)
s.t. −5 ≤ xi ≤ 5, i = 1, 2

• known global minimum: f* = −78.3323.

– Problem (P5)

min f(x) ≡ (1/2) Σ_{i=1}^{3} (xi^4 − 16 xi^2 + 5 xi)
s.t. −5 ≤ xi ≤ 5, i = 1, ..., 3

• known global minimum: f* = −117.4983.

– Problem (P6)

min f(x) ≡ (1/2) Σ_{i=1}^{4} (xi^4 − 16 xi^2 + 5 xi)
s.t. −5 ≤ xi ≤ 5, i = 1, ..., 4

• known global minimum: f* = −156.665.

– Problem (P7)

min f(x) ≡ (1/2) Σ_{i=1}^{5} (xi^4 − 16 xi^2 + 5 xi)
s.t. −5 ≤ xi ≤ 5, i = 1, ..., 5

• known global minimum: f* = −195.839.

– Problem (P8)

min f(x) ≡ (1/2) Σ_{i=1}^{6} (xi^4 − 16 xi^2 + 5 xi)
s.t. −5 ≤ xi ≤ 5, i = 1, ..., 6

• known global minimum: f* = −234.997.

– Problem (P9)

min f(x) ≡ (1/2) Σ_{i=1}^{8} (xi^4 − 16 xi^2 + 5 xi)
s.t. −5 ≤ xi ≤ 5, i = 1, ..., 8

• known global minimum: f* = −313.3287.

– Problem (P10)

min f(x) ≡ (1/2) Σ_{i=1}^{10} (xi^4 − 16 xi^2 + 5 xi)
s.t. −5 ≤ xi ≤ 5, i = 1, ..., 10

• known global minimum: f* = −391.658.
– Problem (P11)
min f(x) ≡ (1 + (x1 + x2 + 1)2
(19 − 14x1 + 3x2
1 − 14x2 + 6x1x2 + 3x2
2))
× (30 + (2x1 − 3x2)2
(18 − 32x1 + 12x2
1 + 48x2 − 36x1x2 + 27x2
2))
s.t. −2 ≤ xi ≤ 2, i = 1, 2
• known global minimum: f∗
= 3.
– Problem (P12)
min f(x) ≡
 5

i=1
i cos((i + 1)x1 + i)
  5

i=1
i cos((i + 1)x2 + i)

s.t. −10 ≤ xi ≤ 10, i = 1, 2
• known global minimum: f∗
= −186, 731.
– Problem (P13)
min f(x) ≡ cos(x1) sin(x2) −
x1
x2
2 + 1
s.t. −1 ≤ x1 ≤ 2, −1 ≤ x2 ≤ 1
• known global minimum: f∗
= −2, 0.
– Problem (P14)
min f(x) ≡ (1.5 − x1(1 − x2))2
+ (2.25 − x1(1 − x2
2))2
+ (2.625 − x1(1 − x3
2))2
s.t. −4.5 ≤ xi ≤ 4.5, i = 1, · · · , 2
• known global minimum: f∗
= 0.
– Problem (P15)
min f(x) ≡ 0.25x4
1 − 0.5x2
1 + 0.1x1 + 0.5x2
2
s.t. −2 ≤ xi ≤ 2, i = 1, · · · , 2
• known global minimum: f∗
= −0, 352386.
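As an illustration, a minimal NumPy sketch of two of the objectives above: the family P4–P10 (here written for any dimension) and P11. The bounds are assumed to be enforced by the optimizer, not by the functions themselves.

```python
import numpy as np

def f_p4(x):
    """P4-P10 family: 0.5 * sum(x_i^4 - 16 x_i^2 + 5 x_i), any dimension."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)

def f_p11(x):
    """P11 (Goldstein-Price), global minimum f* = 3 at (0, -1)."""
    x1, x2 = x
    a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b

print(f_p4([-2.903534, -2.903534]))  # ~ -78.3323, the known minimum of P4
print(f_p11([0.0, -1.0]))            # 3.0, the known minimum of P11
```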
3.2 Experimental Conditions
The problems were evaluated in a computational system with the following rel-
evant characteristics: CPU - 2.3/4.3 GHz 18-core Intel Xeon W-2195, RAM -
32 GB DDR4 2666 MHz ECCR, OS - Linux Ubuntu 20.04.2 LTS, MATLAB
- version R2020a, JAVA - OpenJDK 8, C Compiler - gcc version 9.3.0 (-O2
option).
Since the MCSFilter algorithm has a stochastic component, 10 runs were
performed for each problem. The execution time of the first run was ignored and
so the presented execution times are averages of the remaining 9 executions.
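In other words, the reported value for each problem is the mean of runs 2 to 10, e.g. (the times below are made-up placeholders):

```python
import numpy as np

# hypothetical wall-clock times (s) of the 10 runs for one problem
run_times = np.array([0.31, 0.22, 0.21, 0.22, 0.23,
                      0.21, 0.22, 0.21, 0.22, 0.21])

t_avg = run_times[1:].mean()  # the first (warm-up) run is discarded
print(f"t_avg over the remaining 9 runs: {t_avg:.3f} s")
```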
All three implementations (MATLAB, Java and C) ran with the same parameters, namely those related to the stopping conditions. For the local search CSFilter, αmin = 10⁻⁵ was used in condition (4), as in previous works. For the stopping condition of MCSFilter, ε = 10⁻² was used in condition (5).
3.3 Analysis of the Results
Tables 1, 2 and 3 show the results obtained with each implementation. In all the tables, the first column (Prob) identifies the problem; the second column (minavg) presents the average number of minimizers obtained over the 9 executions; the third column (nfavg) gives the corresponding average number of objective function evaluations; the fourth column (tavg) shows the average execution time (in seconds) of the 9 runs; and the last column (best f*) shows the best value achieved for the global minimum. One important feature, visible in the results of all three implementations, is that the global minimum is always achieved, in all problems.
Table 1. Results obtained using MATLAB.
Prob minavg nfavg tavg(s) best f∗
P1 3 5684,82 0,216 0,398
P2 6 8678,36 0,289 −1,032
P3 4 4265,55 0,178 −2,432
P4 4 4663,27 0,162 −78,332
P5 8 15877,09 0,480 −117,499
P6 16 51534,64 1,438 −156,665
P7 32 145749,64 3,898 −195,831
P8 64 391584,00 10,452 −234,997
P9 256 2646434,55 71,556 −313,329
P10 1023,63 15824590,64 551,614 −391,662
P11 4 39264,73 1,591 3,000
P12 64,36 239005,18 6,731 −186,731
P13 2,45 4653,18 0,160 −2,022
P14 2,27 52964,09 2,157 0,000
P15 2 2374,91 0,135 −0,352
For the MATLAB implementation, looking at the second column of Table 1, it is possible to state that all the minimizers are found for all the problems except P10, P12, P13 and P14. Nevertheless, it is important to note that P10 has 1024 minimizers, of which an average of 1023.63 were found, and P12 has 760 minimizers, of which an average of 64.36 were discovered. P13 is the problem where MCSFilter exhibits its worst behaviour, one shared by other algorithms. It is also worth remarking that problem P10 has 10 variables and, given the structure of the CSFilter algorithm, this leads to a large number of function evaluations. This, of course, impacts the execution time: problem P10 has the highest execution time of all the problems, taking an average of 551.614 s per run.
Table 2. Results obtained using Java.
Prob minavg nfavg tavg(s) best f∗
P1 3 9412,64 0,003 0,3980
P2 6 13461,82 0,005 −1,0320
P3 4 10118,73 0,006 −2,4320
P4 4 10011,91 0,005 −78,3320
P5 8 32990,73 0,013 −117,4980
P6 16 98368,73 0,038 −156,6650
P7 32 274812,36 0,118 −195,8310
P8 64 730754,82 0,363 −234,9970
P9 256 4701470,36 2,868 −313,3290
P10 1024 27608805,73 20,304 −391,6620
P11 4 59438,18 0,009 3,0000
P12 46,45 99022,91 0,035 −186,7310
P13 2,36 6189,09 0,003 −2,0220
P14 2,54 62806,64 0,019 0,0000
P15 2 5439,18 0,002 −0,3520
Considering now the results produced by the Java implementation (Table 2), a behaviour similar to the MATLAB version can be observed regarding the best value of the global minimum: this value is always achieved, in all runs, as is the known number of minimizers, except in problems P12, P13 and P14. It is noteworthy that, using Java, all the 1024 minimizers of P10 were obtained. If the fourth columns of Tables 1 and 2 are compared, it is clear that in Java the algorithm takes considerably less time to obtain the same solutions.
Finally, Table 3 shows the results obtained by the new C-based implementa-
tion. It can be observed that the numerical behaviour of the algorithm is similar
to that observed in the Java version: both implementations find, approximately,
the same number of minimizers. However, comparing the execution times of the
C version with those of the Java version (Table 2), the C version is clearly faster.
Table 3. Results obtained using C.
Prob minavg nfavg tavg(s) best f∗
P1 3 3434,64 0,001 0,3979
P2 6 7809,73 0,002 −1,0316
P3 4 5733,36 0,002 −2,4320
P4 4 6557,54 0,002 −78,3323
P5 8 17093,64 0,007 −117,4985
P6 16 51359,82 0,023 −156,6647
P7 32 162219,63 0,073 −195,8308
P8 64 403745,91 0,198 −234,9970
P9 255,18 2625482,73 1,600 −313,3293
P10 1020,81 15565608,53 14,724 −391,6617
P11 4 8723,34 0,004 3,0000
P12 38,81 43335,73 0,027 −186,7309
P13 2 2345,62 0,001 −2,0218
P14 2,36 24101,18 0,014 0,0000
P15 2 3334,53 0,002 −0,3524
In order to make the differences in the computational performance of the three MCSFilter implementations discernible, the execution times of all problems considered in this study are represented, side by side, in two charts, grouped according to the order of magnitude of the execution times in the MATLAB version (the slowest one). The average execution times for problems P1–P5, P13 and P15 are represented in Fig. 2; in MATLAB, all these problems executed in less than half a second. The execution times for the remaining problems are represented in Fig. 3: problems P6–P8, P11, P12 and P14, which took between ≈1.5 s and ≈10.5 s to execute in MATLAB, are plotted against the primary (left) vertical axis, while problems P9 and P10, whose execution times in MATLAB were ≈71 s and ≈551 s respectively, are plotted against the secondary (right) axis.
Fig. 2. Average execution time (s) for problems P1 − P5, P13, P15.
Fig. 3. Average execution time (s) for problems P6 − P12, P14.
A quick overview of Fig. 2 and Fig. 3 is enough to conclude that the new C implementation is faster than the previous Java implementation, and much faster than the original MATLAB version of the MCSFilter algorithm (note that logarithmic scales were used in both figures due to the different orders of magnitude of the execution times; also, in Fig. 3, different textures were used for problems P9 and P10, since their execution times are represented against the right vertical axis). It is also possible to observe that, in general, there is a direct proportionality between the execution times of the three code bases: when changing the optimization problem, if the execution time increases or decreases in one version, the same happens in the other versions.
To quantify the performance improvement of one version of the algorithm over a preceding implementation, the achieved speedups (accelerations) can be calculated. The speedup of version X of the algorithm against version Y of the same algorithm is simply given by S(X, Y) = T(Y)/T(X), where T(Y) and T(X) are the average execution times of the Y and X implementations. The relevant speedups in the context of this study are presented in Table 4.
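As a worked example of this formula in Python, using the P1 average times from Tables 1, 2 and 3 (these are rounded values, so the results differ slightly from the unrounded-time speedups reported in Table 4):

```python
def speedup(t_x, t_y):
    """S(X, Y) = T(Y) / T(X): how many times faster X is than Y."""
    return t_y / t_x

# average execution times (s) for problem P1, from Tables 1-3
t_matlab, t_java, t_c = 0.216, 0.003, 0.001

print(speedup(t_java, t_matlab))  # S(Java, MATLAB): 72.0 (Table 4: 71.8)
print(speedup(t_c, t_java))       # S(C, Java): 3.0 (Table 4: 2.7)
print(speedup(t_c, t_matlab))     # S(C, MATLAB): 216.0 (Table 4: 197.4)
```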
Another perspective on the capabilities of the three MCSFilter implementations considered here builds on the comparison of their efficiency (or effectiveness) in discovering all known optimizers of the optimization problems at stake. A simple metric that yields such efficiency is E(X, Y) = minavg(X)/minavg(Y). Table 5 shows the relative optima search efficiency for several pairs of MCSFilter implementations. For all but three problems, the MATLAB, Java and C implementations are able to find exactly the same number of optimizers (so their relative efficiency is 1, or 100%). For problems P12, P13 and P14, however, the search efficiency varies considerably. Compared to the MATLAB version, both the Java and C versions are unable to find as many optimizers for problems P12 and P13; for problem P14, however, the Java version is able to find 12% more optimizers than the MATLAB version, and the C version still lags behind
Table 4. Speedups of the execution time.
Problem S(Java, MATLAB) S(C, Java) S(C, MATLAB)
P1 71,8 2,7 197,4
P2 57,9 2,1 122,3
P3 29,6 3,2 94,5
P4 32,3 2,0 64,7
P5 36,9 1,7 64,5
P6 37,8 1,6 61,2
P7 33,0 1,6 53,2
P8 28,8 1,8 52,7
P9 24,9 1,8 44,7
P10 27,2 1,4 37,5
P11 176,8 2,4 417,9
P12 192,3 1,3 252,3
P13 53,3 2,4 126,3
P14 113,5 1,3 150,4
P15 67,6 1,2 84,4
Table 5. Optima search efficiency.
Problem E(Java, MATLAB) E(C, Java) E(C, MATLAB)
P1 1 1 1
P2 1 1 1
P3 1 1 1
P4 1 1 1
P5 1 1 1
P6 1 1 1
P7 1 1 1
P8 1 1 1
P9 1 1 1
P10 1 1 1
P11 1 1 1
P12 0,72 0,75 0,54
P13 0,96 0,85 0,81
P14 1,12 0,79 0,88
P15 1 1 1
(finding only 88% of the optimizers found by MATLAB). Also, compared to the
Java version, the C version currently shows an inferior search efficiency regarding
problems P12, P13 and P14, something to be tackled in future work.
A final analysis is provided based on the data of Table 6. This table presents, for each problem Y and each MCSFilter implementation X, the precision achieved by that implementation, P(X, Y) = |f*(Y) − best f*(X)|, that is, the absolute difference between the known global minimum of problem Y and the best value achieved for the global minimum by implementation X. The following conclusions may be drawn: in all the problems, the best f* is close to the global minimum known in the literature, since the measure used is close to zero; moreover, there are six problems for which the new C implementation surpasses the previous two, and five other problems in which all the implementations obtained the same precision for the best f*.
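Both metrics reduce to one-line computations; a sketch for problem P1, using the known minimum and the best f* values from Tables 1–3:

```python
def efficiency(minavg_x, minavg_y):
    """E(X, Y) = minavg(X) / minavg(Y): relative optima search efficiency."""
    return minavg_x / minavg_y

def precision(known_f_star, best_f_star):
    """P(X, Y) = |f*(Y) - best f*(X)|: distance to the known global minimum."""
    return abs(known_f_star - best_f_star)

# P1: known global minimum f* = 0.39789; best f* from Tables 1, 2 and 3
print(precision(0.39789, 0.398))   # MATLAB: ~1.1e-04
print(precision(0.39789, 0.3980))  # Java:   ~1.1e-04
print(precision(0.39789, 0.3979))  # C:      ~1.0e-05
```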
Table 6. Global optima precision.
Problem P(MATLAB) P(Java) P(C)
P1 1,1E − 04 1,1E − 04 1,0E − 05
P2 4,0E − 04 4,0E − 04 0,0E + 00
P3 1,0E − 04 1,0E − 04 1,0E − 04
P4 3,0E − 04 3,0E − 04 0,0E + 00
P5 7,0E − 04 3,0E − 04 2,0E − 04
P6 0,0E + 00 0,0E + 00 3,0E − 04
P7 8,0E − 03 8,0E − 03 8,2E − 03
P8 0,0E + 00 0,0E + 00 0,0E + 00
P9 3,0E − 04 3,0E − 04 6,0E − 04
P10 4,0E − 03 4,0E − 03 3,7E − 03
P11 0,0E + 00 0,0E + 00 0,0E + 00
P12 0,0E + 00 0,0E + 00 1,0E − 04
P13 2,2E − 02 2,2E − 02 2,2E − 02
P14 0,0E + 00 0,0E + 00 0,0E + 00
P15 3,9E − 04 3,9E − 04 1,4E − 05
4 Conclusions and Future Work
The MCSFilter algorithm was used to solve bound-constrained problems with different dimensions, from two to ten. The algorithm was originally implemented in MATLAB and the results initially obtained were considered very promising. A second implementation was later developed in Java, which increased the performance considerably. In this work, the MCSFilter algorithm was re-coded in the C language and a comparison was made between all three implementations, both performance-wise and regarding search efficiency and precision. The evaluation results show that, for the set of problems considered, the new C version, even though still preliminary, already surpasses the performance of the Java implementation. The search efficiency of the C version, however, must be improved. Regarding precision, the C version matched the previous versions in 6 problems and brought improvements in 5 others, out of a total of 15 problems.
Besides tackling the numerical efficiency and precision issues that still persist, future work will include testing the C code on other problems (including higher-dimensional and harder ones) and refining the code to improve its performance. In particular, and most relevantly for the problems that still take a considerable amount of execution time, parallelization strategies will be exploited as a way to further accelerate the execution of the MCSFilter algorithm.
Acknowledgements. This work has been supported by FCT - Fundação para a
Ciência e Tecnologia within the Project Scope: UIDB/05757/2020.
References

1. Abhishek, K., Leyffer, S., Linderoth, J.: FilMINT: an outer-approximation-based solver for convex mixed-integer nonlinear programs. INFORMS J. Comput. 22(4), 555–567 (2010)
2. Abramson, M., Audet, C., Chrissis, J., Walston, J.: Mesh adaptive direct search algorithms for mixed variable optimization. Optim. Lett. 3(1), 35–47 (2009). https://doi.org/10.1007/s11590-008-0089-2
3. Amador, A., Fernandes, F.P., Santos, L.O., Romanenko, A., Rocha, A.M.A.C.: Parameter estimation of the kinetic α-Pinene isomerization model using the MCSFilter algorithm. In: Gervasi, O., et al. (eds.) ICCSA 2018, Part II. LNCS, vol. 10961, pp. 624–636. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-95165-2_44
4. Amador, A., Fernandes, F.P., Santos, L.O., Romanenko, A.: Application of MCSFilter to estimate stiction control valve parameters. In: International Conference of Numerical Analysis and Applied Mathematics, AIP Conference Proceedings, vol. 1863, p. 270005 (2017)
5. Belotti, P., Kirches, C., Leyffer, S., Linderoth, J., Mahajan, A.: Mixed-Integer Nonlinear Optimization. Acta Numer. 22, 1–131 (2013)
6. Bonami, P., et al.: An algorithmic framework for convex mixed integer nonlinear programs. Discrete Optim. 5(2), 186–204 (2008)
7. Bonami, P., Gonçalves, J.: Heuristics for convex mixed integer nonlinear programs. Comput. Optim. Appl. 51(2), 729–747 (2012)
8. D'Ambrosio, C., Lodi, A.: Mixed integer nonlinear programming tools: an updated practical overview. Ann. Oper. Res. 24, 301–320 (2013). https://doi.org/10.1007/s10479-012-1272-5
9. Fernandes, F.P.: Programação não linear inteira mista e não convexa sem derivadas. PhD thesis, University of Minho, Braga (2014)
10. Fernandes, F.P., Costa, M.F.P., Fernandes, E.M.G.P.: Multilocal programming: a derivative-free filter multistart algorithm. In: Murgante, B. (ed.) ICCSA 2013, Part I. LNCS, vol. 7971, pp. 333–346. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39637-3_27
11. Floudas, C., et al.: Handbook of Test Problems in Local and Global Optimization. Kluwer Academic Publishers, Boston (1999)
12. Hendrix, E.M.T., Tóth, B.G.: Introduction to Nonlinear and Global Optimization. Springer, New York (2010). https://doi.org/10.1007/978-0-387-88670-1
13. Romanenko, A., Fernandes, F.P., Fernandes, N.C.P.: PID controllers tuning with MCSFilter. In: AIP Conference Proceedings, vol. 2116, p. 220003 (2019)
14. Yang, X.-S.: Optimization Techniques and Applications with Examples. Wiley, Hoboken (2018)
On the Performance of the ORTHOMADS Algorithm on Continuous and Mixed-Integer Optimization Problems

Marie-Ange Dahito¹,²(B), Laurent Genest¹, Alessandro Maddaloni², and José Neto²

¹ Stellantis, Route de Gisy, 78140 Vélizy-Villacoublay, France
{marieange.dahito,laurent.genest}@stellantis.com
² Samovar, Telecom SudParis, Institut Polytechnique de Paris, 19 place Marguerite Perey, 91120 Palaiseau, France
{alessandro.maddaloni,jose.neto}@telecom-sudparis.eu
Abstract. ORTHOMADS is an instantiation of the Mesh Adaptive
Direct Search (MADS) algorithm used in derivative-free and black-
box optimization. We investigate the performance of the variants of
ORTHOMADS on the bbob and bbob-mixint, respectively continuous
and mixed-integer, testbeds of the COmparing Continuous Optimizers
(COCO) platform and compare the considered best variants with heuris-
tic and non-heuristic techniques. The results show a favourable perfor-
mance of ORTHOMADS on the low-dimensional continuous problems
used and advantages on the considered mixed-integer problems. Besides,
a generally faster convergence is observed on all types of problems when
the search phase of ORTHOMADS is enabled.
Keywords: Derivative-free optimization · Blackbox optimization ·
Benchmarking · Mesh Adaptive Direct Search · Mixed-integer blackbox
1 Introduction
Derivative-free optimization (DFO) and blackbox optimization (BBO) are branches of numerical optimization that have known fast growth in recent years, driven by the need to solve real-world application problems and by the development of methods that can cope with unavailable or numerically costly derivatives. DFO focuses on optimization techniques that make no use of derivatives, while BBO deals with problems where the objective function is not analytically known, that is, it is a blackbox. A typical blackbox objective is the output of a computer simulation: for instance, at Stellantis, the crash or acoustic outputs computed by the finite element simulation of a vehicle. The problems addressed in this paper are of the form:
minimize_{x ∈ X} f(x),   (1)

where X is a bounded domain of either R^n or R^c × Z^i, with c and i respectively the numbers of continuous and integer variables; n = c + i is the dimension of the
problem and f is a blackbox function. Heuristic and non-heuristic techniques can tackle this kind of problem. Among the main approaches used in DFO are direct local search methods: iterative methods that, at each iteration, evaluate a set of points within a certain radius, which is increased if a better solution is found and decreased if the incumbent remains the best point at the current iteration.
The Mesh Adaptive Direct Search (MADS) [1,4,5] is a well-known direct local search method used in DFO and BBO, extending the Generalized Pattern Search (GPS) introduced in [28]. MADS evolves on a mesh: it first performs a global exploration, called the search phase, and then, if no better solution than the current iterate is found, a local poll. The points evaluated in the poll are defined by a finite set of poll directions that is updated at each iteration. The algorithm is derived in several instantiations available in the Nonlinear Optimization with the MADS algorithm (NOMAD) software [7,19], and its performance is evaluated in several papers. As examples, a broad comparison of DFO optimizers is performed on 502 problems in [25], and NOMAD is used in [24] with a DACE surrogate and compared with other local and global surrogate-based approaches, in the context of constrained blackbox optimization, on an automotive optimization problem and twenty-two test problems.
Given the growing number of algorithms for BBO problems, choosing the most suitable method for a specific problem remains complex. To help with this decision, tools have been developed to compare the performance of algorithms. In particular, data profiles [20] are frequently used in DFO and BBO to benchmark algorithms: they show, for a given precision or target value, the fraction of problems solved by an algorithm as a function of the number of function evaluations. There are also suites of academic test problems: although these are treated as blackbox functions, they are analytically known, which is an advantage for understanding the behaviour of an algorithm. Industrial applications are also available, but they are rare.
Twenty-two implementations of derivative-free algorithms for solving box-constrained optimization problems are benchmarked in [25] and compared with each other according to different criteria, on a set of 502 problems categorized according to their convexity (convex or nonconvex), smoothness (smooth or non-smooth) and dimension (between 1 and 300). The algorithms tested include local-search methods, such as MADS through NOMAD version 3.3, and global-search methods, such as the NEW Unconstrained Optimization Algorithm (NEWUOA) [23], which uses trust regions, and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [16], which is an evolutionary algorithm.
Simulation optimization deals with problems where at least some of the objectives or constraints come from stochastic simulations. A review of algorithms for simulation optimization, among them the NOMAD software, is presented in [2]. However, that paper does not compare them, due to a lack of standard comparison tools and large-enough testbeds in this optimization branch.
In [3], the MADS algorithm is used to optimize the treatment process of spent potliners in aluminium production. The problem is formalized as a 7-dimensional non-linear blackbox problem with 4 inequality constraints. In particular, three strategies are compared: absolute displacements, relative displacements, and the latter combined with a global Latin hypercube sampling search. The authors show that the use of scaling is particularly beneficial on the considered chemical application.
The instantiation ORTHOMADS is introduced in [1] and consists in using orthogonal directions in the poll step of MADS. It is compared to the initial LTMADS, where the poll directions are generated from a random lower triangular matrix, and to the GPS algorithm, on 45 problems from the literature. The authors show that MADS outperforms GPS and that ORTHOMADS competes with LTMADS, with the advantage that its poll directions cover the variable space better.
The ORTHOMADS algorithm, which is the default MADS instantiation used in NOMAD, presents variants in the poll directions of the method. To our knowledge, the performance of these different variants has not been discussed in the literature. The purpose of this paper is to explore this aspect by performing experiments with the ORTHOMADS variants. This work is part of a project conducted with the automotive group Stellantis to develop new approaches for solving their blackbox optimization problems. Our contributions are, first, the evaluation of the ORTHOMADS variants on continuous and mixed-integer optimization problems; besides, the contribution of the search phase is studied, showing a general deterioration of the performance when the search is turned off, an effect that decreases with increasing dimension. Two of the best variants of ORTHOMADS are identified on each of the used testbeds and their performance is compared with other algorithms, including heuristic and non-heuristic techniques. Our experiments exhibit particular variants of ORTHOMADS performing best depending on problem features. Plots for analysis are available at the following link: https://github.com/DahitoMA/ResultsOrthoMADS.
The paper is organized as follows. Section 2 gives an overview of the
MADS algorithm and its ORTHOMADS variants. In Sect. 3, the variants of
ORTHOMADS are evaluated on the bbob and bbob-mixint suites, which consist respectively of continuous and mixed-integer functions. Then, two of the best variants of ORTHOMADS are compared with other algorithms in Sect. 4. Finally, Sect. 5 discusses the results of the paper.
2 MADS and the Variants of ORTHOMADS
This section gives an overview of the MADS algorithm and explains the differ-
ences among the ORTHOMADS variants.
2.1 The MADS Algorithm
MADS is an iterative direct local search method used for DFO and BBO prob-
lems. The method relies on a mesh Mk updated at each iteration and determined
by the current iterate xk, a mesh size parameter δk > 0 and a matrix D whose columns consist of p positive spanning directions. The mesh is defined as follows:

Mk := {xk + δk D y : y ∈ N^p},   (2)

where the columns of D form a positive spanning set {D1, D2, . . . , Dp} and N stands for the natural numbers.
The algorithm proceeds in two phases at each iteration: the search and the poll. The search phase is optional and similar to a design of experiments: a finite set of points Sk, stemming generally from a surrogate model prediction and a Nelder-Mead (N-M) search [21], is evaluated anywhere on the mesh. If the search fails at finding a better point, then a poll is performed. During the poll phase, a finite set of points is evaluated on the mesh in the neighbourhood of
the incumbent. This neighbourhood is called the frame Fk and has radius Δk > 0, the poll size parameter. The frame is defined as follows:

Fk := {x ∈ Mk : ‖x − xk‖∞ ≤ Δk b},   (3)

where b = max{‖d′‖∞ : d′ ∈ D} and D ⊂ {D1, D2, . . . , Dp} is a finite set of poll directions. The latter are such that their union over iterations grows dense on the unit sphere.
The two size parameters satisfy δk ≤ Δk and evolve after each iteration: if a better solution is found they are increased, otherwise they are decreased. Since the mesh size decreases more drastically than the poll size after an unsuccessful iteration, the set of mesh points available for the poll grows with successive unsuccessful iterations. Usually, δk = min{Δk, Δk²}. The description of the MADS algorithm is given in Algorithm 1, inspired from [6].
Algorithm 1: Mesh Adaptive Direct Search (MADS)
Initialize k = 0, x0 ∈ R^n, D ∈ R^{n×p}, Δ0 > 0, τ ∈ (0, 1) ∩ Q, εstop > 0
1. Update δk = min{Δk, Δk²}
2. Search
   If f(x) < f(xk) for some x ∈ Sk then xk+1 ← x, Δk+1 ← τ⁻¹ Δk and go to 4
   Else go to 3
3. Poll
   Select Dk,Δk such that Pk := {xk + δk d : d ∈ Dk,Δk} ⊂ Fk
   If f(x) < f(xk) for some x ∈ Pk then xk+1 ← x, Δk+1 ← τ⁻¹ Δk and go to 4
   Else xk+1 ← xk and Δk+1 ← τ Δk
4. Termination
   If Δk+1 ≥ εstop then k ← k + 1 and go to 1
   Else stop
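A stripped-down Python sketch of the poll mechanics above, with a fixed coordinate poll set and no search step (strictly a GPS-style simplification for illustration: MADS additionally requires the union of poll directions over iterations to grow dense on the unit sphere):

```python
import numpy as np

def poll_step(f, x, Delta, tau=0.5):
    """One poll iteration in the spirit of Algorithm 1 (search step omitted).

    Delta: poll size parameter; the mesh size is delta = min(Delta, Delta^2).
    Returns the new incumbent and the updated poll size.
    """
    n = x.size
    delta = min(Delta, Delta**2)
    # coordinate poll set: +/- unit directions, scaled by the mesh size
    for d in np.vstack([np.eye(n), -np.eye(n)]):
        trial = x + delta * d
        if f(trial) < f(x):              # success: accept and expand (tau^-1)
            return trial, Delta / tau
    return x, tau * Delta                # failure: keep incumbent, shrink

# usage: minimize a convex quadratic from (2, 2)
f = lambda x: float(np.sum(x**2))
x, Delta = np.array([2.0, 2.0]), 1.0
for _ in range(60):
    x, Delta = poll_step(f, x, Delta)
print(x, Delta)  # x is driven towards the minimizer (0, 0)
```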
2.2 ORTHOMADS Variants
MADS has two main instantiations called ORTHOMADS and LTMADS, the
latter being the first developed. Both variants are implemented in the NOMAD
ORTHOMADS on Continuous and Mixed-Integer Optimization Problems 35
software but as ORTHOMADS is to be preferred for its coverage property in
the variable space, it was used for the experiments of this paper with NOMAD
version 3.9.1.
The NOMAD implementation of ORTHOMADS provides 6 variants of the
algorithm according to the number of directions used in the poll or according to
the way that the last poll direction is computed. They are listed below.
ORTHO N + 1 NEG computes n + 1 directions, among which n are orthogonal and the (n + 1)-th is the opposite of the sum of the first n.
ORTHO N + 1 UNI computes n + 1 directions, among which n are orthogonal and the (n + 1)-th is generated from a uniform distribution.
ORTHO N + 1 QUAD computes n + 1 directions, among which n are orthogonal and the (n + 1)-th is obtained from the minimization of a local quadratic model of the objective.
ORTHO 2N computes 2n directions that are orthogonal; more precisely, each direction is orthogonal to 2n − 2 of the others and collinear with the remaining one.
ORTHO 1 uses only one direction in the poll.
ORTHO 2 uses two opposite directions in the poll.
In the plots, the variants will respectively be denoted using Neg, Uni, Quad,
2N, 1 and 2.
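The orthogonal directions underlying these variants can be generated from a single normalized vector through a Householder transform, as in the ORTHOMADS construction [1]; a sketch is given below (Halton-seeded vectors, integer rounding to the mesh and the rules for the (n + 1)-th direction are omitted):

```python
import numpy as np

def ortho_2n_directions(v):
    """Build an ORTHO 2N-style poll set of 2n directions from a nonzero vector v.

    H = I - 2 q q^T with q = v / ||v|| is an orthogonal (Householder) matrix,
    so its n rows are mutually orthogonal; adding their negatives yields 2n
    directions, each orthogonal to 2n - 2 others and collinear with one.
    """
    q = v / np.linalg.norm(v)
    H = np.eye(v.size) - 2.0 * np.outer(q, q)
    return np.vstack([H, -H])

D = ortho_2n_directions(np.array([1.0, 2.0, 2.0]))
print(np.round(D @ D.T, 12))  # blocks [[I, -I], [-I, I]]: rows i and i+3 opposite
```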
3 Test of the Variants of ORTHOMADS
In this section, we try to identify potentially better direction types of
ORTHOMADS and investigate the contribution of the search phase.
3.1 The COCO Platform and the Used Testbeds
The COmparing Continuous Optimizers (COCO) platform [17] is a benchmarking framework for blackbox optimization. Several suites of standard test problems are provided, each available in variants, also called instances, obtained through transformations in variable and objective space that make the functions less regular.
In particular, the bbob testbed [13] provides 24 continuous problems for blackbox optimization, each available in 15 instances and in dimensions 2, 3, 5, 10, 20 and 40. The problems are categorized in five subgroups: separable functions, functions with low or moderate conditioning, ill-conditioned functions, multi-modal functions with global structure, and multi-modal weakly structured functions. All problems are known to have their global optima in [−5, 5]^n, where n is the dimension of the problem.
The mixed-integer suite bbob-mixint [29] derives from the bbob and bbob-largescale [30] problems by imposing integer constraints on some variables. It consists of the 24 bbob functions, available in 15 instances and in dimensions 5, 10, 20, 40, 80 and 160.
COCO also provides various tools for algorithm comparison, notably the Empirical Cumulative Distribution Function (ECDF) plots (or data profiles) used in this paper. They show the empirical runtimes, computed as the number of function evaluations needed to reach given function target values, divided by the
dimension. A function target value is defined as ft = f* + Δft, where f* is the minimum value of the function f and Δft is a target precision. For the bbob and bbob-mixint testbeds, the target precisions are 51 values between 10⁻⁸ and 10². Thus, if a method reaches 1 on the ordinate axis of an ECDF plot, 100% of the function target values have been reached, including the smallest one, f* + 10⁻⁸. The presence of a cross on an ECDF curve indicates
when the maximal budget of function evaluations is reached. After the cross,
COCO estimates the runtimes: it is called simulated restarts.
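Assuming the 51 precisions are log-uniformly spaced between the stated endpoints (an assumption consistent with the endpoints, not a statement of COCO internals), the target values for a problem with known minimum f* would be:

```python
import numpy as np

f_star = 0.0                       # known minimum of some benchmark f (example)
delta_ft = np.logspace(2, -8, 51)  # 51 precisions from 1e2 down to 1e-8
targets = f_star + delta_ft        # f_t = f* + Delta f_t
print(targets[0], targets[-1])     # 100.0 1e-08
```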
For bbob, an artificial reference solver called best 2009 is present on the plots; its data comes from the BBOB-2009 workshop (https://coco.gforge.inria.fr/doku.php?id=bbob-2009-results), which compared 31 solvers.
The statistical significance of the results is evaluated in COCO using the
rank-sum test.
3.2 Parameter Setting
In order to test the performance of the different variants of ORTHOMADS, the bbob and bbob-mixint suites of COCO were used, restricted to problems of dimension at most 20. This limit has two main reasons: the computational cost of the experiments and, with the perspective of solving real-world problems, the fact that 20 is already a high dimension in this expensive blackbox context. Only the first 5 instances of each function were used, for a total of 600 problems from bbob and 360 from bbob-mixint. A maximal function evaluation budget of 2 × 10³ × n was set, with n being the dimension of the considered problem.
To assess the contribution of the search phase, the experiments on the variants were divided in two subgroups: the first using the default search of ORTHOMADS and the second with the search phase disabled. The latter is obtained by setting the four NOMAD parameters NM_SEARCH, VNS_SEARCH, SPECULATIVE_SEARCH and MODEL_SEARCH to the value no. In the plots, the label NoSrch is used when the search is turned off. The search notably includes the use of a quadratic model and of the N-M method. The minimal mesh size was set to 10⁻¹¹.
Experiments were run with restarts allowed for unsolved problems whenever the evaluation budget had not been exhausted, which may happen due to internal stopping criteria of the solvers. The initial points used are those suggested by COCO through the method initial_solution_proposal().
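A sketch of such an experiment loop with COCO's Python module cocoex, with a trivial random-search stand-in for the solver (the suite filter strings follow COCO's documented interface; observer and logging options are omitted):

```python
import numpy as np
import cocoex  # COCO experimentation module

def toy_solver(problem, budget):
    """Stand-in solver: pure random search within the problem's box bounds."""
    for _ in range(budget):
        x = np.random.uniform(problem.lower_bounds, problem.upper_bounds)
        problem(x)  # one objective evaluation, counted by COCO

suite = cocoex.Suite("bbob", "instances: 1-5", "dimensions: 2,3,5,10,20")
for problem in suite:
    budget = 2 * 10**3 * problem.dimension    # the budget used in these experiments
    x0 = problem.initial_solution_proposal()  # starting point suggested by COCO
    problem(x0)
    toy_solver(problem, budget - 1)
```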
3.3 Results
Continuous Problems. As said previously, the contribution of the search
phase was studied. The results aggregated on all functions in dimensions 5, 10
and 20 on the bbob suite are depicted in Fig. 1. They show that enabling the search step in NOMAD generally leads to equivalent or higher performance of the variants, and the improvement can be substantial. Besides, using one or two directions, with or without search, is often far from competitive with the other variants. In particular, 1 NoSrch is often the worst or among the worst, except on Discus, an ill-conditioned quadratic function, where it competes with the variants that do not use the search. As mentioned in Sect. 1, the plots depicting the results described in the paper are available online.
Looking at the results aggregated over all functions for ORTHO 2N, ORTHO N + 1 NEG, ORTHO N + 1 QUAD and ORTHO N + 1 UNI, the search increases the success rate from nearly 70%, 55% and 40% up to 90%, 80% and 65%, respectively, in dimensions 2, 3 and 5, as shown in Fig. 1a for dimension 5. From dimension 10 onwards, the advantage of the search decreases, and the performance of ORTHO N + 1 UNI visibly falls behind the other three variants mentioned above, with or without the search, as illustrated in Figs. 1b and 1c.
Focusing on some families of functions, Neg NoSrch seems slightly less
impacted than the other NoSrch variants by the increase of the dimension.
On ill-conditioned problems, the variants using search are more sensitive to
the increase of the dimension.
Considering multi-modal functions with adequate global structure, 2N NoSrch
solves 15% more problems than the other NoSrch variants in 2D. In this dimen-
sion, the variants using search have a better success rate than the best 2009
up to a budget of 200 function evaluations. From 10D, all curves are rather flat:
all ORTHOMADS variants tend to a local optimum.
With increasing dimension, Neg is competitive or better than the others on
multi-modal problems without global structure, followed by 2N. In particular,
in dimension 20 both variants are competitive and outperform the remaining
variants that use search on the Gallagher’s Gaussian 101–me peaks function,
and Neg outperforms them with a gap of more than 20% in their success rate on
the Gallagher’s Gaussian 21–hi peaks function which is also ill-conditioned.
Since Neg and 2N are often among the best variants on the considered prob-
lems and have an advantage on some multi-modal weakly structured functions,
they are chosen for comparison with other solvers.
Mixed-Integer Problems. The experiments performed on the mixed-integer problems also show similar or improved performance of the ORTHOMADS variants when the search step is enabled in NOMAD, as illustrated in Fig. 2 for dimensions 5, 10 and 20. Looking at Fig. 2a for instance, within the given budget of 2 × 10³ × n, the variant denoted 2 solves 75% of the problems in dimension 5, against 42% for 2 NoSrch.
However, this is not always the case: using only the poll directions is sometimes favourable. This is notably the case on the Schwefel function in dimension 20, where Neg NoSrch solves 43% of the problems, the highest success rate when the search and non-search settings are compared together.
(a) 5D (b) 10D (c) 20D
Fig. 1. ECDF plots: the variants of ORTHOMADS with and without the search step
on the bbob problems. Results aggregated on all functions in dimensions 5, 10 and 20.
When the search is disabled, ORTHO 2N seems preferable in small dimension, namely here in 5D, as presented in Fig. 2a. In this dimension, it is sometimes the only variant that solves all the instances of a function within the given budget: this is the case for the step-ellipsoidal function, the two Rosenbrock functions (original and rotated), the Schaffer functions and the Schwefel function. It also solves all the separable functions in 5D and can therefore handle the different types of problems. Although the difference is less noticeable with the search step enabled, this variant is still a good choice, especially on multi-modal problems with adequate global structure.
On the whole, looking at Fig. 2, ORTHO 1 and ORTHO 2 solve fewer problems than the other variants, and the gap in performance with the other direction types increases with the dimension, whether the search phase is used or not. Although the search helps solving some functions in low dimension, such as the sphere or linear slope functions in 5D, both variants perform poorly in dimension 20 on second-order separable functions, even if the search enables the solution of linear slope, which is a linear function. Among these two variants, using 2 poll directions also seems better than only one, especially in dimension 10, where ORTHO 2 solves more than 23% and 40% of the problems respectively without and with the search, against 16% and 31% for ORTHO 1, as presented in Fig. 2b.
Among the four remaining variants, ORTHO N + 1 UNI reaches equivalent or fewer targets than the others, whether the search is enabled or only the poll directions are used, as depicted in Fig. 2. In particular, in dimension 5, the four variants using at least n + 1 poll directions solve more than 85% of the separable problems, with or without search. But when the dimension increases, ORTHO N + 1 UNI shows a disadvantage on the Rastrigin functions, where the use of the search does not noticeably help the convergence of the algorithm.
Focusing on the different function types, none of the variants ORTHO 2N, ORTHO N + 1 NEG and ORTHO N + 1 QUAD seems to particularly outperform the others in dimensions 10 and 20. A higher success rate is however noticeable on multimodal weakly structured problems, with the search available, for ORTHO N + 1 NEG in comparison with ORTHO N + 1 QUAD, and for the latter in comparison with ORTHO 2N. Besides, Neg reaches more targets on problems with low or moderate conditioning. For these reasons, ORTHO N + 1 NEG was chosen for comparison with other solvers. Moreover, the slight advantage of ORTHO N + 1 QUAD over ORTHO 2N just mentioned, together with its equivalent or better performance on separable and ill-conditioned functions, makes it a good second choice to represent ORTHOMADS.
(a) 5D (b) 10D (c) 20D
Fig. 2. ECDF plots: the variants of ORTHOMADS with and without the search step
on the bbob-mixint problems. Results aggregated on all functions in dimensions 5, 10
and 20.
4 Comparison of ORTHOMADS with Other Solvers
The previous experiments showed the advantage of using the search step in ORTHOMADS to speed up convergence. They also revealed the effectiveness of some variants, which are used here for comparisons with other algorithms on the continuous and mixed-integer suites.
4.1 Compared Algorithms
Apart from ORTHOMADS, the algorithms used for comparison on bbob are, first, three deterministic algorithms: the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [22], the quadratic model-based NEWUOA, and the adaptive N-M [14], which is a simplicial search. Stochastic methods are also used, among which a Random Search (RS) algorithm [10] and three population-based algorithms: a surrogate-assisted CMA-ES, Differential Evolution (DE) [27] and Particle Swarm Optimization (PSO) [11,18].
In order to perform algorithm comparisons on bbob-mixint, data from four
stochastic methods were collected: RS, the mixed-integer variant of CMA-ES,
DE and the Tree-structured Parzen Estimator (TPE) [8] that is a stochastic
model-based technique.
BFGS is an iterative quasi-Newton linesearch method that uses approximations of the Hessian matrix of the objective. At iteration k, the search direction pk solves the linear system Bk pk = −∇f(xk), where xk is the iterate, f the objective function and Bk ≈ ∇²f(xk); the matrix Bk is then updated according to a fixed formula. In the context of BBO, the derivatives are approximated with finite differences.
NEWUOA is the Powell’s model-based algorithm for DFO. It is a trust-region
method that uses sequential quadratic interpolation models to solve uncon-
strained derivative-free problems.
The N-M method is a heuristic DFO method that uses simplices. It begins with a nondegenerate simplex. The algorithm identifies the worst point among the vertices of the simplex and tries to replace it by reflection, expansion or contraction. If none of these geometric transformations of the worst point yields a better point, a contraction preserving the best point is performed. The adaptive N-M method uses the N-M technique with parameters adapted to the dimension, which is notably useful in high dimensions.
RS is a stochastic iterative method that performs a random selection of
candidates: at each iteration, a random point is sampled and the best between
this trial point and the incumbent is kept.
CMA-ES is a state-of-the-art evolutionary algorithm used in DFO. Let N(m, C) denote a normal distribution of mean m and covariance matrix C; it can be represented by the ellipsoid xᵀC⁻¹x = 1, whose main axes are the eigenvectors of C and where the square roots of the axis lengths correspond to the associated eigenvalues. CMA-ES iteratively samples its populations from multivariate normal distributions and updates the covariance matrices to learn a quadratic model of the objective.
DE is a meta-heuristic that creates a trial vector by combining the incumbent
with randomly chosen individuals from a population. The trial vector is then
sequentially filled with parameters from itself or the incumbent. Finally the best
vector between the incumbent and the created vector is chosen.
PSO is an archive-based evolutionary algorithm where candidate solutions
are called particles and the population is a swarm. The particles evolve according
to the global best solution encountered but also according to their local best
points.
TPE is an iterative model-based method for hyperparameter optimization. It sequentially builds a probabilistic model from already evaluated hyperparameter sets in order to suggest a new set of hyperparameters to evaluate on a score function that is to be minimized.
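Several of these baselines have standard off-the-shelf implementations; the sketch below runs SciPy's versions of three of them on a toy objective (these are not the exact implementations whose benchmark data is used in this paper):

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

f = lambda x: np.sum(x**2) + 0.1 * np.sum(np.cos(5 * x))  # toy blackbox
x0, bounds = np.array([3.0, -2.0]), [(-5, 5), (-5, 5)]

# BFGS with finite-difference gradients (jac=None triggers the approximation)
res_bfgs = minimize(f, x0, method="BFGS")

# Nelder-Mead simplex search; 'adaptive' scales parameters with the dimension
res_nm = minimize(f, x0, method="Nelder-Mead", options={"adaptive": True})

# Differential Evolution over the box
res_de = differential_evolution(f, bounds)

for r in (res_bfgs, res_nm, res_de):
    print(r.x, r.fun)
```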
4.2 Parameter Setting
To compare the considered best variants of ORTHOMADS with other methods,
the 15 instances of each function were used and the maximal function evaluation budget was increased to 10⁵ × n, with n being the dimension.
For the bbob problems, the data used for BFGS, DE and the adaptive N-M
method comes from the experiments of [31]. CMA-ES was tested in [15], the
data of NEWUOA is from [26], the one of PSO is from [12] and RS results
come from [9]. The comparison data of CMA-ES, DE, RS and TPE used on
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf
Optimization, Learning Algorithms and Applications.pdf

The conference also provided an opportunity to develop new research paths and collaborations. OL2A 2021 had more than 400 participants in an online environment throughout the three days of the conference (July 19–21, 2021), discussing topics associated with areas such as optimization and learning, and state-of-the-art applications related to multi-objective optimization, optimization for machine learning, robotics, health informatics, data analysis, optimization and learning under uncertainty, and the Fourth Industrial Revolution.

Four special sessions were organized under the following topics: Trends in Engineering Education, Optimization in Control Systems Design, Data Visualization and Virtual Reality, and Measurements with the Internet of Things.

The event had 52 accepted papers, among which 39 were full papers. All papers were carefully reviewed and selected from 134 submissions; the reviews were carried out by a Scientific Committee of 61 PhD researchers from 18 countries.

July 2021

Ana I. Pereira
Organization

General Chair

Ana Isabel Pereira, Polytechnic Institute of Bragança, Portugal

Organizing Committee Chairs

Florbela P. Fernandes, Polytechnic Institute of Bragança, Portugal
João Paulo Coelho, Polytechnic Institute of Bragança, Portugal
João Paulo Teixeira, Polytechnic Institute of Bragança, Portugal
M. Fátima Pacheco, Polytechnic Institute of Bragança, Portugal
Paulo Alves, Polytechnic Institute of Bragança, Portugal
Rui Pedro Lopes, Polytechnic Institute of Bragança, Portugal

Scientific Committee

Ana Maria A. C. Rocha, University of Minho, Portugal
Ana Paula Teixeira, University of Trás-os-Montes and Alto Douro, Portugal
André Pinz Borges, Federal University of Technology – Paraná, Brazil
Andrej Košir, University of Ljubljana, Slovenia
Arnaldo Cândido Júnior, Federal University of Technology – Paraná, Brazil
Bruno Bispo, Federal University of Santa Catarina, Brazil
Carmen Galé, University of Zaragoza, Spain
B. Rajesh Kanna, Vellore Institute of Technology, India
C. Sweetlin Hemalatha, Vellore Institute of Technology, India
Damir Vrančić, Jozef Stefan Institute, Slovenia
Daiva Petkeviciute, Kaunas University of Technology, Lithuania
Diamantino Silva Freitas, University of Porto, Portugal
Esteban Clua, Federal Fluminense University, Brazil
Eric Rogers, University of Southampton, UK
Felipe Nascimento Martins, Hanze University of Applied Sciences, The Netherlands
Gaukhar Muratova, Dulaty University, Kazakhstan
Gediminas Daukšys, Kauno Technikos Kolegija, Lithuania
Glaucia Maria Bressan, Federal University of Technology – Paraná, Brazil
Humberto Rocha, University of Coimbra, Portugal
José Boaventura-Cunha, University of Trás-os-Montes and Alto Douro, Portugal
José Lima, Polytechnic Institute of Bragança, Portugal
Joseane Pontes, Federal University of Technology – Ponta Grossa, Brazil
Juani Lopéz Redondo, University of Almeria, Spain
Jorge Ribeiro, Polytechnic Institute of Viana do Castelo, Portugal
José Ramos, NOVA University Lisbon, Portugal
Kristina Sutiene, Kaunas University of Technology, Lithuania
Lidia Sánchez, University of León, Spain
Lino Costa, University of Minho, Portugal
Luís Coelho, Polytechnic Institute of Porto, Portugal
Luca Spalazzi, Marche Polytechnic University, Italy
Manuel Castejón Limas, University of León, Spain
Marc Jungers, Université de Lorraine, France
Maria do Rosário de Pinho, University of Porto, Portugal
Marco Aurélio Wehrmeister, Federal University of Technology – Paraná, Brazil
Mikulas Huba, Slovak University of Technology in Bratislava, Slovakia
Michał Podpora, Opole University of Technology, Poland
Miguel Ángel Prada, University of León, Spain
Nicolae Cleju, Technical University of Iasi, Romania
Paulo Lopes dos Santos, University of Porto, Portugal
Paulo Moura Oliveira, University of Trás-os-Montes and Alto Douro, Portugal
Pavel Pakshin, Nizhny Novgorod State Technical University, Russia
Pedro Luiz de Paula Filho, Federal University of Technology – Paraná, Brazil
Pedro Miguel Rodrigues, Catholic University of Portugal, Portugal
Pedro Morais, Polytechnic Institute of Cávado e Ave, Portugal
Pedro Pinto, Polytechnic Institute of Viana do Castelo, Portugal
Rudolf Rabenstein, Friedrich-Alexander-University of Erlangen-Nürnberg, Germany
Sani Rutz da Silva, Federal University of Technology – Paraná, Brazil
Sara Paiva, Polytechnic Institute of Viana do Castelo, Portugal
Sofia Rodrigues, Polytechnic Institute of Viana do Castelo, Portugal
Sławomir Stępień, Poznan University of Technology, Poland
Teresa Paula Perdicoulis, University of Trás-os-Montes and Alto Douro, Portugal
Toma Roncevic, University of Split, Croatia
Vitor Duarte dos Santos, NOVA University Lisbon, Portugal
Wojciech Paszke, University of Zielona Gora, Poland
Wojciech Giernacki, Poznan University of Technology, Poland
Contents

Optimization Theory

Dynamic Response Surface Method Combined with Genetic Algorithm to Optimize Extraction Process Problem
Laires A. Lima, Ana I. Pereira, Clara B. Vaz, Olga Ferreira, Márcio Carocho, and Lillian Barros

Towards a High-Performance Implementation of the MCSFilter Optimization Algorithm
Leonardo Araújo, Maria F. Pacheco, José Rufino, and Florbela P. Fernandes

On the Performance of the OrthoMads Algorithm on Continuous and Mixed-Integer Optimization Problems
Marie-Ange Dahito, Laurent Genest, Alessandro Maddaloni, and José Neto

A Look-Ahead Based Meta-heuristics for Optimizing Continuous Optimization Problems
Thomas Nordli and Noureddine Bouhmala

Inverse Optimization for Warehouse Management
Hannu Rummukainen

Model-Agnostic Multi-objective Approach for the Evolutionary Discovery of Mathematical Models
Alexander Hvatov, Mikhail Maslyaev, Iana S. Polonskaya, Mikhail Sarafanov, Mark Merezhnikov, and Nikolay O. Nikitin

A Simple Clustering Algorithm Based on Weighted Expected Distances
Ana Maria A. C. Rocha, M. Fernanda P. Costa, and Edite M. G. P. Fernandes

Optimization of Wind Turbines Placement in Offshore Wind Farms: Wake Effects Concerns
José Baptista, Filipe Lima, and Adelaide Cerveira

A Simulation Tool for Optimizing a 3D Spray Painting System
João Casanova, José Lima, and Paulo Costa

Optimization of Glottal Onset Peak Detection Algorithm for Accurate Jitter Measurement
Joana Fernandes, Pedro Henrique Borghi, Diamantino Silva Freitas, and João Paulo Teixeira

Searching the Optimal Parameters of a 3D Scanner Through Particle Swarm Optimization
João Braun, José Lima, Ana I. Pereira, Cláudia Rocha, and Paulo Costa

Optimal Sizing of a Hybrid Energy System Based on Renewable Energy Using Evolutionary Optimization Algorithms
Yahia Amoura, Ângela P. Ferreira, José Lima, and Ana I. Pereira

Robotics

Human Detector Smart Sensor for Autonomous Disinfection Mobile Robot
Hugo Mendonça, José Lima, Paulo Costa, António Paulo Moreira, and Filipe Santos

Multiple Mobile Robots Scheduling Based on Simulated Annealing Algorithm
Diogo Matos, Pedro Costa, José Lima, and António Valente

Multi AGV Industrial Supervisory System
Ana Cruz, Diogo Matos, José Lima, Paulo Costa, and Pedro Costa

Dual Coulomb Counting Extended Kalman Filter for Battery SOC Determination
Arezki A. Chellal, José Lima, José Gonçalves, and Hicham Megnafi

Sensor Fusion for Mobile Robot Localization Using Extended Kalman Filter, UWB ToF and ArUco Markers
Sílvia Faria, José Lima, and Paulo Costa

Deep Reinforcement Learning Applied to a Robotic Pick-and-Place Application
Natanael Magno Gomes, Felipe N. Martins, José Lima, and Heinrich Wörtche

Measurements with the Internet of Things

An IoT Approach for Animals Tracking
Matheus Zorawski, Thadeu Brito, José Castro, João Paulo Castro, Marina Castro, and José Lima

Optimizing Data Transmission in a Wireless Sensor Network Based on LoRaWAN Protocol
Thadeu Brito, Matheus Zorawski, João Mendes, Beatriz Flamia Azevedo, Ana I. Pereira, José Lima, and Paulo Costa

Indoor Location Estimation Based on Diffused Beacon Network
André Mendes and Miguel Diaz-Cacho

SMACovid-19 – Autonomous Monitoring System for Covid-19
Rui Fernandes and José Barbosa

Optimization in Control Systems Design

Economic Burden of Personal Protective Strategies for Dengue Disease: an Optimal Control Approach
Artur M. C. Brito da Cruz and Helena Sofia Rodrigues

ERP Business Speed – A Measuring Framework
Zornitsa Yordanova

BELBIC Based Step-Down Controller Design Using PSO
João Paulo Coelho, Manuel Braz-César, and José Gonçalves

Robotic Welding Optimization Using A* Parallel Path Planning
Tiago Couto, Pedro Costa, Pedro Malaca, Daniel Marques, and Pedro Tavares

Deep Learning

Leaf-Based Species Recognition Using Convolutional Neural Networks
Willian Oliveira Pires, Ricardo Corso Fernandes Jr., Pedro Luiz de Paula Filho, Arnaldo Candido Junior, and João Paulo Teixeira

Deep Learning Recognition of a Large Number of Pollen Grain Types
Fernando C. Monteiro, Cristina M. Pinto, and José Rufino

Predicting Canine Hip Dysplasia in X-Ray Images Using Deep Learning
Daniel Adorno Gomes, Maria Sofia Alves-Pimenta, Mário Ginja, and Vitor Filipe

Convergence of the Reinforcement Learning Mechanism Applied to the Channel Detection Sequence Problem
André Mendes

Approaches to Classify Knee Osteoarthritis Using Biomechanical Data
Tiago Franco, P. R. Henriques, P. Alves, and M. J. Varanda Pereira

Artificial Intelligence Architecture Based on Planar LiDAR Scan Data to Detect Energy Pylon Structures in a UAV Autonomous Detailed Inspection Process
Matheus F. Ferraz, Luciano B. Júnior, Aroldo S. K. Komori, Lucas C. Rech, Guilherme H. T. Schneider, Guido S. Berger, Álvaro R. Cantieri, José Lima, and Marco A. Wehrmeister

Data Visualization and Virtual Reality

Machine Vision to Empower an Intelligent Personal Assistant for Assembly Tasks
Matheus Talacio, Gustavo Funchal, Victória Melo, Luis Piardi, Marcos Vallim, and Paulo Leitao

Smart River Platform - River Quality Monitoring and Environmental Awareness
Kenedy P. Cabanga, Edmilson V. Soares, Lucas C. Viveiros, Estefânia Gonçalves, Ivone Fachada, José Lima, and Ana I. Pereira

Health Informatics

Analysis of the Middle and Long Latency ERP Components in Schizophrenia
Miguel Rocha e Costa, Felipe Teixeira, and João Paulo Teixeira

Feature Selection Optimization for Breast Cancer Diagnosis
Ana Rita Antunes, Marina A. Matos, Lino A. Costa, Ana Maria A. C. Rocha, and Ana Cristina Braga

Cluster Analysis for Breast Cancer Patterns Identification
Beatriz Flamia Azevedo, Filipe Alves, Ana Maria A. C. Rocha, and Ana I. Pereira

Overview of Robotic Based System for Rehabilitation and Healthcare
Arezki A. Chellal, José Lima, Florbela P. Fernandes, José Gonçalves, Maria F. Pacheco, and Fernando C. Monteiro

Understanding Health Care Access in Higher Education Students
Filipe J. A. Vaz, Clara B. Vaz, and Luís C. D. Cadinha

Using Natural Language Processing for Phishing Detection
Richard Adolph Aires Jonker, Roshan Poudel, Tiago Pedrosa, and Rui Pedro Lopes

Data Analysis

A Panel Data Analysis of the Electric Mobility Deployment in the European Union
Sarah B. Gruetzmacher, Clara B. Vaz, and Ângela P. Ferreira

Data Analysis of Workplace Accidents - A Case Study
Inês P. Sena, João Braun, and Ana I. Pereira

Application of Benford's Law to the Tourism Demand: The Case of the Island of Sal, Cape Verde
Gilberto A. Neves, Catarina S. Nunes, and Paula Odete Fernandes

Volunteering Motivations in Humanitarian Logistics: A Case Study in the Food Bank of Viana do Castelo
Ana Rita Vasconcelos, Ângela Silva, and Helena Sofia Rodrigues

Occupational Behaviour Study in the Retail Sector
Inês P. Sena, Florbela P. Fernandes, Maria F. Pacheco, Abel A. C. Pires, Jaime P. Maia, and Ana I. Pereira

A Scalable, Real-Time Packet Capturing Solution
Rafael Oliveira, João P. Almeida, Isabel Praça, Rui Pedro Lopes, and Tiago Pedrosa

Trends in Engineering Education

Assessing Gamification Effectiveness in Education Through Analytics
Zornitsa Yordanova

Real Airplane Cockpit Development Applied to Engineering Education: A Project Based Learning Approach
José Carvalho, André Mendes, Thadeu Brito, and José Lima

Azbot-1C: An Educational Robot Prototype for Learning Mathematical Concepts
Francisco Pedro, José Cascalho, Paulo Medeiros, Paulo Novo, Matthias Funk, Albeto Ramos, Armando Mendes, and José Lima

Towards Distance Teaching: A Remote Laboratory Approach for Modbus and IoT Experiencing
José Carvalho, André Mendes, Thadeu Brito, and José Lima

Evaluation of Soft Skills Through Educational Testbed 4.0
Leonardo Breno Pessoa da Silva, Bernado Perrota Barreto, Joseane Pontes, Fernanda Tavares Treinta, Luis Mauricio Martins de Resende, and Rui Tadashi Yoshino

Collaborative Learning Platform Using Learning Optimized Algorithms
Beatriz Flamia Azevedo, Yahia Amoura, Gauhar Kantayeva, Maria F. Pacheco, Ana I. Pereira, and Florbela P. Fernandes

Author Index
Dynamic Response Surface Method Combined with Genetic Algorithm to Optimize Extraction Process Problem

Laires A. Lima (1,2), Ana I. Pereira (1), Clara B. Vaz (1), Olga Ferreira (2), Márcio Carocho (2), and Lillian Barros (2)

(1) Research Center in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal
{laireslima,apereira,clvaz}@ipb.pt
(2) Centro de Investigação de Montanha (CIMO), Instituto Politécnico de Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal
{oferreira,mcarocho,lillian}@ipb.pt

© Springer Nature Switzerland AG 2021. A. I. Pereira et al. (Eds.): OL2A 2021, CCIS 1488, pp. 3–14, 2021. https://doi.org/10.1007/978-3-030-91885-9_1

Abstract. This study aims to find and develop an appropriate optimization approach to reduce the time and labor employed throughout a given chemical process, which can be decisive for quality management. In this context, this work presents a comparative study of two optimization approaches using real experimental data from the chemical engineering area, reported in a previous study [4]. The first approach is based on the traditional response surface method; the second combines the response surface method with a genetic algorithm and data mining. The main objective is to optimize a surface function of three variables using hybrid genetic algorithms combined with cluster analysis, so as to reduce the number of experiments and find the value closest to the optimum within the established restrictions. The proposed strategy has proven promising: the optimal value was achieved without requiring derivative information, unlike conventional methods, and fewer experiments were needed to find the optimal solution than in the previous work using the traditional response surface method.

Keywords: Optimization · Genetic algorithm · Cluster analysis

1 Introduction

Search and optimization methods rest on several principles, the most relevant being: the search space, where the possibilities for solving the problem in question are considered; the objective function (or cost function); and the codification of the problem, that is, the way an objective is evaluated in the search space [1].
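As a concrete illustration of this codification, a minimal sketch follows that sets up a three-variable problem of the kind studied in this paper. The variable names, bounds, and toy response function are assumptions made only for the example (they are not the paper's own model), and the sketch is written in Python rather than the MATLAB environment used by the authors.

```python
import numpy as np

# Hypothetical search space for a three-variable extraction problem:
# time (min), temperature (degrees C) and solvent proportion (%).
# These names and ranges are illustrative assumptions, not the paper's.
BOUNDS = np.array([[5.0, 120.0],
                   [25.0, 95.0],
                   [0.0, 100.0]])

def response(x):
    """Toy stand-in for the experimental response (e.g. extraction yield);
    in the real problem each evaluation is a laboratory assay."""
    t, T, s = x
    return 30.0 - (t - 60.0)**2 / 500.0 - (T - 70.0)**2 / 300.0 - (s - 40.0)**2 / 400.0

def cost(x):
    # Maximizing the response is codified as minimizing its negative.
    return -response(x)
```

A candidate solution is then simply a point inside BOUNDS, and comparing two candidates amounts to comparing their cost values.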
directly associated with the problem to be solved. The great difficulty when solving a problem with a stochastic method is that the number of possible solutions grows at factorial speed, making it impossible to enumerate all possible solutions of the problem [12]. Evolutionary computing techniques operate on a population that changes in each iteration. Thus, they can search different regions of the feasible space, allocating an appropriate number of members to search in different areas [12].

Considering the importance of predicting the behavior of analytical processes and of avoiding expensive procedures, this study proposes an alternative for the optimization of multivariate problems, e.g. extraction processes of high-value compounds from plant matrices. In the standard analytical approach, the identification and quantification of phenolic compounds require expensive and complex laboratory assays [6]. An alternative approach can be applied using forecasting models from the Response Surface Method (RSM). This approach can maximize the extraction yield of the target compounds while decreasing the cost of the extraction process.

In this study, a comparative analysis is presented between two optimization methodologies (traditional RSM and dynamic RSM), developed in MATLAB® software (version R2019a 9.6), that aim to maximize the heat-assisted extraction yield and the phenolic compounds content in chestnut flower extracts.

This paper is organized as follows. Section 2 describes the methods used to evaluate multivariate problems involving optimization processes: Response Surface Method (RSM), Hybrid Genetic Algorithm, Cluster Analysis and Bootstrap Analysis. Sections 3 and 4 introduce the case study, consisting of the optimization of the extraction yield and of the content of phenolic compounds in extracts of chestnut flower by two different approaches: traditional RSM and dynamic RSM. Section 5 includes the numerical results obtained by both methods and their comparative evaluation. Finally, Sect. 6 presents the conclusions and future work.

2 Methods, Approaches and Techniques

Regarding optimization problems, some methods (traditional RSM, for example) are used more frequently due to their applicability and suitability to different cases. For the design of the dynamic RSM, conventional methods based on the Genetic Algorithm combined with clustering and bootstrap analysis were examined, to evaluate which aspects could be incorporated into the algorithm developed in this work. The key concepts for the dynamic RSM are presented below.

2.1 Response Surface Method

The Response Surface Method is a tool introduced in the early 1950s by Box and Wilson, which covers a collection of mathematical and statistical techniques useful for approximating and optimizing stochastic models [11]. It is a widely used optimization method, which applies statistical techniques based on special factorial designs [2,3]. Its scientific approach estimates the ideal conditions for
achieving the highest or lowest required response value, through the design of the response surface from a Taylor series expansion [8]. RSM extracts a large amount of information from the experiments, such as the duration of the experiments and the influence of each independent variable; one of its largest advantages is providing the general information needed for process planning and experimental design [8].

2.2 Hybrid Genetic Algorithm

The genetic algorithm (GA) is a stochastic optimization method based on the evolutionary process of natural selection and genetic dynamics. The method seeks to combine the survival of the fittest among the string structures with an exchange of random, but structured, information to form an ideal solution [7]. Although they are randomized, GA search strategies are able to explore several regions of the feasible search space at a time. In this way, along the iterations, a unique search path is built, as new solutions are obtained through the combination of previous solutions [1]. Optimization problems with restrictions can affect the sampling capacity of a genetic algorithm, due to the population limits considered. Incorporating a local optimization method into the GA can help overcome most of the obstacles that arise as a result of finite population sizes, for example the accumulation of stochastic errors that generates genetic drift problems [1,7].

2.3 Cluster Analysis

Cluster algorithms are often used to group large data sets and play an important role in pattern recognition and in mining large collections of data. The k-means and k-medoids strategies work by partitioning the data into a number k of mutually exclusive clusters, as illustrated in Fig. 1. These techniques assign each observation to a cluster, minimizing the distance from the data point to the mean (k-means) or median (k-medoids) location of its assigned cluster [10].

Fig. 1. Mean and medoid in 2D space representation. In both figures, the data are represented by blue dots, the rightmost point being an outlier, and the red point represents the center found by the k-means or k-medoids method. Adapted from Jin and Han (2011) (Color figure online).
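To make the difference pictured in Fig. 1 concrete, the short sketch below contrasts the two notions of cluster centre on a toy data set containing one outlier. It is purely illustrative: the paper's implementation relies on MATLAB toolbox functions, whereas Python/NumPy is used here only for compactness, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Ten points around (2, 2) plus one distant outlier, as in Fig. 1.
data = np.vstack([rng.normal(2.0, 0.3, size=(10, 2)), [[9.0, 2.0]]])

centroid = data.mean(axis=0)          # k-means centre for a single cluster

# Medoid: the data point minimising the total distance to all other points.
dists = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=2)
medoid = data[dists.sum(axis=1).argmin()]

print("mean  :", centroid)            # dragged towards the outlier
print("medoid:", medoid)              # remains a genuine cluster point
```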
2.4 Bootstrap Analysis

The idea of bootstrap analysis is to mimic the sampling distribution of the statistic of interest through the use of many resamples drawn with replacement from the original sample [5]. In this work, the bootstrap analysis enables the handling of the variability of the optimal solutions derived from the cluster analysis. Thus, the bootstrap analysis is used to estimate the confidence interval of the statistic of interest and, subsequently, to compare the results with those obtained by traditional methods.

3 Case Study

This work presents a comparative analysis between two methodologies for optimizing the total phenolic content in extracts of chestnut flower, developed in MATLAB® software. The natural values of the independent variables in the extraction - t, time in minutes; T, temperature in °C; and S, organic solvent content in %v/v of ethanol - were coded based on a Central Composite Circumscribed Design (CCCD), and the result was based on the extraction yield (Y, expressed in percentage of dry extract) and the total phenolic content (Phe, expressed in mg/g of dry extract), as shown in Table 1. The experimental data presented were cordially provided by the Mountain Research Center - CIMO (Bragança, Portugal) [4].

The CCCD design selected for the original experimental study [4] is based on a cube circumscribed to a sphere in which the vertices are at a distance α from the center, with 5 levels for each factor (t, T, and S). In this case, the α values vary between −1.68 and 1.68 and correspond to each factor level, as described in Table 2.

4 Data Analysis

In this section, the two RSM optimization methods (traditional and dynamic) are discussed in detail, along with the results obtained from both.

4.1 Traditional RSM

In the original experiment, a five-level Central Composite Circumscribed Design (CCCD) coupled with RSM was built to optimize the variables for the male chestnut flowers. For the optimization, a simplex method developed ad hoc was used to optimize the nonlinear solutions obtained by a regression model in order to maximize the response, as described in the flowchart in Fig. 2. Through the traditional RSM, the authors approximated the response surface by a second-order polynomial function [4]:

$$Y = b_0 + \sum_{i=1}^{n} b_i X_i + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} b_{ij} X_i X_j + \sum_{i=1}^{n} b_{ii} X_i^2 \qquad (1)$$
Table 1. Variables and natural values of the process parameters for the extraction of chestnut flowers [4].

t (min)   T (°C)   S (%EtOH)   Yield (%R)   Phenolic content (mg·g⁻¹ dry weight)
 40.30    37.20      20.30       38.12        36.18
 40.30    37.20      79.70       26.73        11.05
 40.30    72.80      20.30       42.83        36.66
 40.30    72.80      79.70       35.94        22.09
 99.70    37.20      20.30       32.77        35.55
 99.70    37.20      79.70       32.99         8.85
 99.70    72.80      20.30       42.55        29.61
 99.70    72.80      79.70       35.52        11.10
120.00    55.00      50.00       42.41        14.56
 20.00    55.00      50.00       35.45        24.08
 70.00    25.00      50.00       38.82        12.64
 70.00    85.00      50.00       42.06        17.41
 70.00    55.00       0.00       35.24        34.58
 70.00    55.00     100.00       15.61        12.01
 20.00    25.00       0.00       22.30        59.56
 20.00    25.00     100.00        8.02        15.57
 20.00    85.00       0.00       34.81        42.49
 20.00    85.00     100.00       18.71        50.93
120.00    25.00       0.00       31.44        40.82
120.00    25.00     100.00       15.33         8.79
120.00    85.00       0.00       34.96        45.61
120.00    85.00     100.00       32.70        21.89
 70.00    55.00      50.00       41.03        14.62

Table 2. Natural and coded values of the extraction variables [4].

t (min)   T (°C)   S (%)    Coded value
 20.0      25.0      0.0      −1.68
 40.3      37.2     20.0      −1.00
 70.0      55.0     50.0       0.00
 99.7      72.8     80.0       1.00
120.0      85.0    100.0       1.68
Fig. 2. Flowchart of the traditional RSM modeling approach for optimal design.

where, for i = 0, ..., n and j = 1, ..., n, the $b_i$ stand for the linear coefficients, the $b_{ij}$ correspond to the interaction coefficients, the $b_{ii}$ are the quadratic coefficients and, finally, the $X_i$ are the independent variables, associated to t, T and S, with n the total number of variables.

In the previous study using the traditional RSM, Eq. (1) represented coherently the behaviour of the extraction process of the target compounds from chestnut flowers [4]. In order to compare the optimization methods and to avoid data conflict, the cost function was estimated from a multivariate regression model.

4.2 Dynamic RSM

For the proposed optimization method, briefly described in the flowchart shown in Fig. 3, the structure of the design of experiments was maintained, as well as the restrictions imposed on the responses and variables to avoid meaningless solutions.

Fig. 3. Flowchart of the dynamic RSM, integrating the genetic algorithm and cluster analysis into the process.

The dynamic RSM method was built in MATLAB® using code developed by the authors, coupled with pre-existing functions from the statistical and optimization toolboxes of the software. The algorithm starts by generating a set of 15 random combinations of the factor levels. From this initial experimental data, a multivariate regression model is calculated, this model being the objective function of the problem. Thereafter, a built-in GA-based solver is used to solve the optimization problem. The optimal combination is identified and used to redefine the objective function. The process stops when no new optimal solution is identified.
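A minimal sketch of this loop is given below. It is not the authors' MATLAB implementation: Python/NumPy is used for compactness, the built-in GA solver is replaced by a deliberately small hand-rolled genetic algorithm, and a synthetic function stands in for the laboratory measurements. The 15-point random start and the stop-when-no-new-optimum rule follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
lo = np.array([20.0, 25.0, 0.0])        # bounds on t (min), T (°C), S (%EtOH)
hi = np.array([120.0, 85.0, 100.0])

def run_experiment(x):
    # Placeholder for a real extraction assay (synthetic, for illustration).
    t, T, S = x
    return 40 - 2e-3*(t - 100)**2 - 4e-3*(T - 80)**2 - 3e-3*(S - 45)**2

def quad_features(X):
    # Full second-order model of Eq. (1): constant, linear, interaction, quadratic.
    t, T, S = X.T
    return np.column_stack([np.ones(len(X)), t, T, S,
                            t*T, t*S, T*S, t*t, T*T, S*S])

def ga_maximize(beta, pop=60, gens=150):
    # Tiny GA: keep the better half, blend crossover, Gaussian mutation.
    P = rng.uniform(lo, hi, size=(pop, 3))
    for _ in range(gens):
        elite = P[np.argsort(quad_features(P) @ beta)[::-1][:pop // 2]]
        parents = elite[rng.integers(len(elite), size=(pop, 2))]
        P = (parents[:, 0] + parents[:, 1]) / 2
        P += rng.normal(0, 0.05, P.shape) * (hi - lo)
        P = np.clip(P, lo, hi)
    return P[np.argmax(quad_features(P) @ beta)]

X = rng.uniform(lo, hi, size=(15, 3))           # 15 random starting points
y = np.array([run_experiment(x) for x in X])
best = -np.inf
while True:                                      # the "dynamic" outer loop
    beta = np.linalg.lstsq(quad_features(X), y, rcond=None)[0]
    x_new = ga_maximize(beta)                    # optimum of the current model
    y_new = run_experiment(x_new)                # one new experiment
    X, y = np.vstack([X, x_new]), np.append(y, y_new)
    if y_new <= best + 1e-3:                     # no new optimum found: stop
        break
    best = y_new
print("estimated optimum:", X[np.argmax(y)], round(y.max(), 2))
```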
Considering the stochastic nature of this case study, clustering analysis is used to identify the best candidate optimal solution. In order to handle the variability of the achieved optimal solution, the bootstrap method is used to estimate the confidence interval at 95%.

5 Numerical Results

The study using the traditional RSM returned the following optimal conditions for maximum yield: 120.0 min, 85.0 °C and 44.5% of ethanol in the solvent, producing 48.87% of dry extract. For the total phenolic content, the optimal conditions were: 20.0 min, 25.0 °C and S = 0.0% of ethanol in the solvent, producing 55.37 mg/g of dry extract. These data are displayed in Table 3.

Table 3. Optimal responses and respective conditions using traditional and dynamic RSM, based on confidence intervals at 95%.

Method            t (min)        T (°C)        S (%)          Response
Extraction yield (%):
Traditional RSM   120.0 ± 12.4   85.0 ± 6.7    44.5 ± 9.7     48.87
Dynamic RSM       118.5 ± 1.4    84.07 ± 0.9   46.1 ± 0.85    45.87
Total phenolics (Phe):
Traditional RSM   20.0 ± 3.7     25.0 ± 5.7    0.0 ± 8.7      55.37
Dynamic RSM       20.4 ± 1.5     25.1 ± 1.97   0.05 ± 0.05    55.64

For the implementation of the dynamic RSM in this case study, 100 runs were carried out to evaluate the effectiveness of the method. For the yield, the estimated optimal conditions were: 118.5 min, 84.1 °C and 46.1% of ethanol in the solvent, producing 45.87% of dry extract. In this case, the obtained optimal conditions for time and temperature agreed in approximately 80% of the tests. For the total phenolic content, the optimal conditions were: 20.4 min, 25.1 °C and 0.05% of ethanol in the solvent, producing 55.64 mg/g of dry extract. The results were very similar to the previous report with the same data [4].

The clustering analysis for each response variable was performed considering the means (Figs. 4a and 5a) and the medoids (Figs. 4b and 5b) of the output population (optimal responses). The bootstrap analysis draws the inference concerning the results achieved; it is represented graphically in terms of means in Figs. 4c and 5c, and in terms of medoids in Figs. 4d and 5d.
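To make the confidence-interval step concrete, the sketch below illustrates a bootstrap on a synthetic sample standing in for the 100 dynamic-RSM outputs. The paper does not state which bootstrap CI variant was used; the percentile method is assumed here, and the numbers are stand-ins only.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the 100 optimal time values returned by the dynamic-RSM runs.
optima = rng.normal(118.5, 1.4, size=100)

# 1000 resamples with replacement, as used for the histograms in Figs. 7-8.
boot_means = np.array([rng.choice(optima, size=optima.size, replace=True).mean()
                       for _ in range(1000)])
low, high = np.percentile(boot_means, [2.5, 97.5])   # two-tailed 95% CI
print(f"mean {optima.mean():.1f} min, 95% CI [{low:.1f}, {high:.1f}]")
```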
The box plots of the groups of optimal responses from the dynamic RSM, displayed in Fig. 6, show that the variance within each group is small, given that the difference between the sets of responses is very narrow. The histograms concerning the set of dynamic RSM responses and the bootstrap distribution of the mean (1000 resamples) are shown in Figs. 7 and 8.

Fig. 4. Clustering analysis of the outputs from the extraction yield optimization using dynamic RSM; the responses are clustered in 3 distinct groups. Panels: (a) extraction yield responses and k-means; (b) extraction yield responses and k-medoids; (c) extraction yield bootstrap output and k-means; (d) extraction yield bootstrap output and k-medoids.
Fig. 5. Clustering analysis of the outputs from the total phenolic content optimization using dynamic RSM; the responses are clustered in 3 distinct groups. Panels: (a) total phenolic content responses and k-means; (b) total phenolic content responses and k-medoids; (c) total phenolic content bootstrap output and k-means; (d) total phenolic content bootstrap output and k-medoids.

Fig. 6. Box plots of the dynamic RSM outputs for the extraction yield and the total phenolic content, respectively, before bootstrap analysis.
Fig. 7. Histograms of the extraction data (extraction yield) and of the bootstrap means, respectively.

Fig. 8. Histograms of the extraction data (total phenolic content) and of the bootstrap means, respectively.

The results obtained in this work are satisfactory, since they were analogous for both methods, while the dynamic RSM took only 15 to 18 experimental points to find the optimal coordinates. Some authors use designs of experiments involving traditional RSM with 20 different combinations, including the repetition of the centroid [4]. However, in studies involving new data or lacking complementary data, evaluations of the influence of the parameters and of their ranges are essential to obtain consistent results, making it necessary to run about 30 experimental points for the optimization. Considering these cases, the dynamic RSM method offers a different, competitive and economical approach, in which fewer points are evaluated to obtain the maximum response.

Genetic algorithms have been proving their efficiency in the search for optimal solutions in a wide variety of problems, given that they do not share some limitations found in traditional search methodologies, such as the requirement of a derivative function [9]. The GA is attractive for identifying the global solution of the problem. Considering the stochastic problem presented in this work, the association of the genetic algorithm with the k-methods as clustering algorithms obtained satisfactory results. This solution can be used for problems involving small-scale data, since the GA manages to gather the best data for optimization
through its evolutionary method, while k-means or k-medoids perform the grouping of the optimum points.

In addition to the clustering analysis, bootstrapping was also applied, in which the sampling distribution of the statistic of interest is simulated through the use of many resamples drawn with replacement from the original sample, thus enabling statistical inference. Bootstrapping was used to calculate the confidence intervals in order to obtain unbiased estimates from the proposed method. In this case, the confidence interval was calculated at the 95% level (two-tailed), since the same percentage was adopted by Caleja et al. (2019). It was observed that the dynamic RSM approach also enables the estimation of confidence intervals with a smaller margin of error than the traditional RSM approach, leading to a more precise definition of the optimum conditions for the experiment.

6 Conclusion and Future Work

For the presented case study, applying the dynamic RSM using a Genetic Algorithm coupled with clustering analysis returned positive results, in accordance with previously published data [4]. Both methods seem attractive for the resolution of this particular case concerning the optimization of the extraction of target compounds from plant matrices. Therefore, the smaller number of experiments required by the dynamic RSM can make it an interesting approach for future studies. In brief, a smaller set of points was obtained that represents the best domain of optimization, thus eliminating the need for a large number of costly laboratory experiments. The next steps involve the improvement of the dynamic RSM algorithm and the application of the proposed method in other areas of study.

Acknowledgments. The authors are grateful to FCT for financial support through national funds FCT/MCTES UIDB/00690/2020 to CIMO and UIDB/05757/2020. M. Carocho also thanks FCT for the individual scientific employment program-contract (CEECIND/00831/2018).

References

1. Beasley, D., Bull, D.R., Martin, R.R.: An overview of genetic algorithms: Part 1, fundamentals. Univ. Comput. 2(15), 1–16 (1993)
2. Box, G.E.P., Behnken, D.W.: Simplex-sum designs: a class of second order rotatable designs derivable from those of first order. Ann. Math. Stat. 31(4), 838–864 (1960)
3. Box, G.E.P., Wilson, K.B.: On the experimental attainment of optimum conditions. J. Roy. Stat. Soc. Ser. B (Methodol.) 13(1), 1–38 (1951)
4. Caleja, C., Barros, L., Prieto, M.A., Bento, A., Oliveira, M.B.P., Ferreira, I.C.F.R.: Development of a natural preservative obtained from male chestnut flowers: optimization of a heat-assisted extraction technique. Food Funct. 10, 1352–1363 (2019)
5. Efron, B., Tibshirani, R.J.: An Introduction to the Bootstrap, 1st edn. Wiley, New York (1994)
6. Eftekhari, M., Yadollahi, A., Ahmadi, H., Shojaeiyan, A., Ayyari, M.: Development of an artificial neural network as a tool for predicting the targeted phenolic profile of grapevine (Vitis vinifera) foliar wastes. Front. Plant Sci. 9, 837 (2018)
7. El-Mihoub, T.A., Hopgood, A.A., Nolle, L., Battersby, A.: Hybrid genetic algorithms: a review. Eng. Lett. 11, 124–137 (2006)
8. Geiger, E.: Statistical methods for fermentation optimization. In: Vogel, H.C., Todaro, C.M. (eds.) Fermentation and Biochemical Engineering Handbook: Principles, Process Design, and Equipment, 3rd edn., pp. 415–422. Elsevier Inc. (2014)
9. Härdle, W.K., Simar, L.: Applied Multivariate Statistical Analysis, 4th edn. Springer, Heidelberg (2019)
10. Jin, X., Han, J.: K-medoids clustering. In: Sammut, C., Webb, G.I. (eds.) Encyclopedia of Machine Learning, pp. 564–565. Springer, Boston (2011)
11. Şenaras, A.E.: Parameter optimization using the surface response technique in automated guided vehicles. In: Sustainable Engineering Products and Manufacturing Technologies, pp. 187–197. Academic Press (2019)
12. Schneider, J., Kirkpatrick, S.: Genetic algorithms and evolution strategies. In: Stochastic Optimization, vol. 1, pp. 157–168. Springer-Verlag, Heidelberg (2006)
Towards a High-Performance Implementation of the MCSFilter Optimization Algorithm

Leonardo Araújo, Maria F. Pacheco, José Rufino, and Florbela P. Fernandes

1 Universidade Tecnológica Federal do Paraná, Campus de Ponta Grossa, Ponta Grossa 84017-220, Brazil
2 Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
a46677@alunos.ipb.pt, {pacheco,rufino,fflor}@ipb.pt

Abstract. Multistart Coordinate Search Filter (MCSFilter) is an optimization method suitable to find all minimizers - both local and global - of a nonconvex problem, with simple bounds or more generic constraints. Like many other optimization algorithms, it may be used in industrial contexts, where execution time may be critical in order to keep a production process within safe and expected bounds. MCSFilter was first implemented in MATLAB and later in Java (which introduced a significant performance gain). In this work, a comparison is made between these two implementations and a novel one in C that aims at further performance improvements. For the comparison, the problems addressed are bound constrained, with small dimension (between 2 and 10) and multiple local and global solutions. It is possible to conclude that the average execution time for each problem is considerably smaller when using the Java and C implementations, and that the current C implementation, though not yet fully optimized, already exhibits a significant speedup.

Keywords: Optimization · MCSFilter method · MATLAB · C · Java · Performance

1 Introduction

The set of techniques and principles for solving quantitative problems known as optimization has become increasingly important in a broad range of applications, in areas of research as diverse as engineering, biology, economics, statistics or physics. The application of the techniques and laws of optimization in these (and other) areas not only provides resources to describe and solve the specific problems that appear within the framework of each area, but also provides the opportunity for new advances and achievements in optimization theory and its techniques [1,2,6,7].
In order to apply optimization techniques, problems can be formulated in terms of an objective function that is to be maximized or minimized, a set of variables and a set of constraints (restrictions on the values that the variables can assume). The structure of these three items - objective function, variables and constraints - determines different subfields of optimization theory: linear, integer, stochastic, etc.; within each of these subfields, several lines of research can be pursued. The size and complexity of the optimization problems that can be dealt with have increased enormously with the improvement of the overall performance of computers. As such, advances in optimization techniques have been following progress in computer science as well as in combinatorics, operations research, control theory, approximation theory, routing in telecommunication networks, image reconstruction and facility location, among other areas [14].

The need to keep up with the challenges of our rapidly changing society and its digitalization means a continuing need to increase innovation and productivity and to improve the performance of the industry sector, and it places very high expectations on the progress and adaptation of sophisticated optimization techniques applied in the industrial context. Many of those problems can be modelled as nonlinear programming problems [10-12] or mixed integer nonlinear programming problems [5,8]. The urgency to quickly output solutions to difficult multivariable problems leads to an increasing need to develop robust and fast optimization algorithms. Considering that, for many problems, reliable information about the derivative of the objective function is unavailable, it is important to use a method that allows to solve the problem without this information. Algorithms that do not use derivatives are called derivative-free.

The MCSFilter method is such a method, being able to deal with the discontinuous or non-differentiable functions that often appear in many applications. It is also a multilocal method, meaning that it finds all the minimizers, both local and global, and it exhibits good results [9,10]. Moreover, a Java implementation was already used to solve process engineering problems [4]. Considering that, from an industrial point of view, execution time is of utmost importance, a novel C reimplementation, aimed at increased performance, is currently under way, having reached a stage at which it is already able to solve a broad set of problems with measurable performance gains over the previous Java version. This paper presents the results of a preliminary evaluation of the new C implementation of the MCSFilter method against the previously developed versions (in MATLAB and Java).

The rest of this paper is organized as follows: in Sect. 2, the MCSFilter algorithm is briefly described; in Sect. 3, the set of problems used to compare the three implementations and the corresponding results are presented and analyzed. Finally, in Sect. 4, conclusions and future work are addressed.

2 The Multistart Coordinate Search Filter Method

The MCSFilter algorithm was initially developed in [10], with the aim of finding multiple solutions of nonconvex and nonlinear constrained optimization problems of the following type:
$$\min f(x) \quad \text{s.t.} \quad g_j(x) \le 0,\ j = 1, \dots, m; \qquad l_i \le x_i \le u_i,\ i = 1, \dots, n \qquad (1)$$

where f is the objective function, the $g_j(x)$, j = 1, ..., m, are the constraint functions, and at least one of the functions $f, g_j : \mathbb{R}^n \to \mathbb{R}$ is nonlinear; also, l and u are the bounds, and $\Omega = \{x \in \mathbb{R}^n : g(x) \le 0,\ l \le x \le u\}$ is the feasible region.

This method has two main parts: i) the multistart part, related with the exploration feature of the method, and ii) the coordinate search filter local search, related with the exploitation of promising regions. The MCSFilter method does not require any information about the derivatives and is able to obtain all the solutions, both local and global, of a given nonconvex optimization problem. This is an important asset of the method since, in industry problems, it is often not possible to know the derivative functions; moreover, a large number of real-life problems are nonlinear and nonconvex.

As already stated, the MCSFilter algorithm relies on a multistart strategy with a local search repeatedly called inside the multistart. Briefly, the multistart strategy is a stochastic algorithm that repeatedly applies a local search to sampled points, aiming to converge to all the minimizers, local and global, of a multimodal problem. When the local search is repeatedly applied, some of the minimizers can be reached more than once. This leads to a waste of time, since these minimizers have already been determined. To avoid these situations, a clustering technique based on computing the regions of attraction of previously identified minimizers is used. Thus, if the initial point belongs to the region of attraction of a previously detected minimizer, the local search procedure may not be performed, since it would converge to this known minimizer.

Figure 1 illustrates the influence of the regions of attraction. The red/magenta lines between the initial approximation and the minimizer represent a local search that has been performed; the red line represents the first local search that converged to a given minimizer; the white dashed line between the two points represents a local search discarded using the regions of attraction. Therefore, this representation intends to show the regions of attraction of each minimizer and the corresponding points around each one. These regions are dynamic, in the sense that they may change every time a new initial point is used [3].

The local search uses a derivative-free strategy that consists of a coordinate search combined with a filter methodology, in order to generate a sequence of approximate solutions that improve either the constraint violation or the objective function relatively to the previous approximation; this strategy is called the Coordinate Search Filter algorithm (CSFilter). In this way, the initial problem is previously rewritten as the bi-objective problem (2):

$$\min\ (\theta(x), f(x)), \quad x \in \Omega \qquad (2)$$
Fig. 1. Illustration of the multistart strategy with regions of attraction [3].

aiming to minimize, simultaneously, the objective function f(x) and the nonnegative continuous aggregate constraint violation function θ(x) defined in (3):

$$\theta(x) = \|g(x)^+\|^2 + \|(l - x)^+\|^2 + \|(x - u)^+\|^2 \qquad (3)$$

where $v^+ = \max\{0, v\}$, componentwise. For more details about this method see [9,10].

Algorithm 1 displays the steps of the CSFilter method. The stopping condition of CSFilter is related with the step size α of the method (see condition (4)):

$$\alpha < \alpha_{\min} \qquad (4)$$

with $\alpha_{\min} \ll 1$ and close to zero.

The main steps of the MCSFilter algorithm for finding global (as well as local) solutions to problem (1) are shown in Algorithm 2. The stopping condition of the MCSFilter algorithm is related to the number of minimizers found and to the number of local searches applied in the multistart strategy. Considering $n_l$ as the number of local searches used and $n_m$ as the number of minimizers obtained, then

$$P_{\min} = \frac{n_m(n_m + 1)}{n_l(n_l - 1)}.$$

The MCSFilter algorithm stops when condition (5) is reached:

$$P_{\min} < \epsilon \qquad (5)$$

where $\epsilon \ll 1$.

In this preliminary work, the main goal is to compare the performance of MCSFilter when bound constrained problems are addressed, using different
Algorithm 1. CSFilter algorithm

Require: x and parameter values, αmin; set x̃ = x, x_F^inf = x, z = x̃;
1: Initialize the filter; set α = min{1, 0.05 (Σ_{i=1}^{n} (u_i − l_i))/n};
2: repeat
3:   Compute the trial approximations z_a^i = x̃ + α e_i, for all e_i ∈ D⊕;
4:   repeat
5:     Check acceptability of the trial points z_a^i;
6:     if there are some z_a^i acceptable by the filter then
7:       Update the filter;
8:       Choose z_a^best; set z = x̃, x̃ = z_a^best; update x_F^inf if appropriate;
9:     else
10:      Compute the trial approximations z_a^i = x_F^inf + α e_i, for all e_i ∈ D⊕;
11:      Check acceptability of the trial points z_a^i;
12:      if there are some z_a^i acceptable by the filter then
13:        Update the filter;
14:        Choose z_a^best; set z = x̃, x̃ = z_a^best; update x_F^inf if appropriate;
15:      else
16:        Set α = α/2;
17:      end if
18:    end if
19:  until a new trial z_a^best is acceptable
20: until α < αmin

implementations of the algorithm: the original implementation in MATLAB [10], a follow-up implementation in Java (already used to solve problems from the chemical engineering area [3,4,13]), and a new implementation in C (evaluated for the first time in this paper).

3 Computational Results

In order to compare the performance of the three implementations of the MCSFilter optimization algorithm, a set of problems was chosen. The definition of each problem (a total of 15 bound constrained problems) is given below, along with the experimental conditions under which they were evaluated, as well as the obtained results (both numerical and performance-related).

3.1 Benchmark Problems

The collection of problems was taken from [9] (and the references therein); all fifteen problems in this study are listed below. The problems were chosen in such a way that different characteristics are covered: they are multimodal problems with more than one minimizer (the number of minimizers varies from 2 to 1024); they can have just one global minimizer or several; and the dimension of the problems varies between 2 and 10.
Algorithm 2. MCSFilter algorithm

Require: Parameter values; set M* = ∅, k = 1, t = 1;
1: Randomly generate x ∈ [l, u]; compute B_min = min_{i=1,...,n}{u_i − l_i};
2: Compute m1 = CSFilter(x), R1 = ‖x − m1‖; set r1 = 1, M* = M* ∪ m1;
3: while the stopping rule is not satisfied do
4:   Randomly generate x ∈ [l, u];
5:   Set o = arg min_{j=1,...,k} d_j, where d_j ≡ ‖x − m_j‖;
6:   if d_o < R_o then
7:     if the direction from x to m_o is ascent then
8:       Set prob = 1;
9:     else
10:      Compute prob = φ(d_o/R_o, r_o);
11:    end if
12:  else
13:    Set prob = 1;
14:  end if
15:  if ζ < prob then
16:    Compute m = CSFilter(x); set t = t + 1;
17:    if ‖m − m_j‖ > γ* B_min, for all j = 1, ..., k then
18:      Set k = k + 1, m_k = m, r_k = 1, M* = M* ∪ m_k; compute R_k = ‖x − m_k‖;
19:    else
20:      Set R_l = max{R_l, ‖x − m_l‖}; r_l = r_l + 1;
21:    end if
22:  else
23:    Set R_o = max{R_o, ‖x − m_o‖}; r_o = r_o + 1;
24:  end if
25: end while

- Problem (P1)

$$\min f(x) \equiv \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10 \quad \text{s.t.}\ -5 \le x_1 \le 10,\ 0 \le x_2 \le 15$$

  known global minimum: f* = 0.39789.

- Problem (P2)

$$\min f(x) \equiv \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 - 4(1 - x_2^2)x_2^2 \quad \text{s.t.}\ -2 \le x_i \le 2,\ i = 1, 2$$

  known global minimum: f* = −1.03160.

- Problem (P3)

$$\min f(x) \equiv \sum_{i=1}^{n}\left[\sin(x_i) + \sin\left(\frac{2x_i}{3}\right)\right] \quad \text{s.t.}\ 3 \le x_i \le 13,\ i = 1, 2$$
  known global minimum: f* = −2.4319.

- Problem (P4)

$$\min f(x) \equiv \frac{1}{2}\sum_{i=1}^{2}\left(x_i^4 - 16x_i^2 + 5x_i\right) \quad \text{s.t.}\ -5 \le x_i \le 5,\ i = 1, 2$$

  known global minimum: f* = −78.3323.

- Problem (P5)

$$\min f(x) \equiv \frac{1}{2}\sum_{i=1}^{3}\left(x_i^4 - 16x_i^2 + 5x_i\right) \quad \text{s.t.}\ -5 \le x_i \le 5,\ i = 1, \dots, 3$$

  known global minimum: f* = −117.4983.

- Problem (P6)

$$\min f(x) \equiv \frac{1}{2}\sum_{i=1}^{4}\left(x_i^4 - 16x_i^2 + 5x_i\right) \quad \text{s.t.}\ -5 \le x_i \le 5,\ i = 1, \dots, 4$$

  known global minimum: f* = −156.665.

- Problem (P7)

$$\min f(x) \equiv \frac{1}{2}\sum_{i=1}^{5}\left(x_i^4 - 16x_i^2 + 5x_i\right) \quad \text{s.t.}\ -5 \le x_i \le 5,\ i = 1, \dots, 5$$

  known global minimum: f* = −195.839.

- Problem (P8)

$$\min f(x) \equiv \frac{1}{2}\sum_{i=1}^{6}\left(x_i^4 - 16x_i^2 + 5x_i\right) \quad \text{s.t.}\ -5 \le x_i \le 5,\ i = 1, \dots, 6$$

  known global minimum: f* = −234.997.

- Problem (P9)

$$\min f(x) \equiv \frac{1}{2}\sum_{i=1}^{8}\left(x_i^4 - 16x_i^2 + 5x_i\right) \quad \text{s.t.}\ -5 \le x_i \le 5,\ i = 1, \dots, 8$$
  known global minimum: f* = −313.3287.

- Problem (P10)

$$\min f(x) \equiv \frac{1}{2}\sum_{i=1}^{10}\left(x_i^4 - 16x_i^2 + 5x_i\right) \quad \text{s.t.}\ -5 \le x_i \le 5,\ i = 1, \dots, 10$$

  known global minimum: f* = −391.658.

- Problem (P11)

$$\min f(x) \equiv \left[1 + (x_1 + x_2 + 1)^2\,(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2\,(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right] \quad \text{s.t.}\ -2 \le x_i \le 2,\ i = 1, 2$$

  known global minimum: f* = 3.

- Problem (P12)

$$\min f(x) \equiv \left(\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right)\left(\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right) \quad \text{s.t.}\ -10 \le x_i \le 10,\ i = 1, 2$$

  known global minimum: f* = −186.731.

- Problem (P13)

$$\min f(x) \equiv \cos(x_1)\sin(x_2) - \frac{x_1}{x_2^2 + 1} \quad \text{s.t.}\ -1 \le x_1 \le 2,\ -1 \le x_2 \le 1$$

  known global minimum: f* = −2.0.

- Problem (P14)

$$\min f(x) \equiv (1.5 - x_1(1 - x_2))^2 + (2.25 - x_1(1 - x_2^2))^2 + (2.625 - x_1(1 - x_2^3))^2 \quad \text{s.t.}\ -4.5 \le x_i \le 4.5,\ i = 1, 2$$

  known global minimum: f* = 0.

- Problem (P15)

$$\min f(x) \equiv 0.25x_1^4 - 0.5x_1^2 + 0.1x_1 + 0.5x_2^2 \quad \text{s.t.}\ -2 \le x_i \le 2,\ i = 1, 2$$

  known global minimum: f* = −0.352386.
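To make the poll-and-halve mechanism of the local search concrete, the following is a minimal sketch of a plain coordinate search on problem P4. It is illustrative only (Python is used here, whereas the paper's implementations are in MATLAB, Java and C): it polls the 2n positive and negative coordinate directions, halves the step on failure, and uses the stopping rule α < αmin = 10⁻⁵ of condition (4), but it omits the filter and the multistart machinery of the full MCSFilter.

```python
import numpy as np

def f(x):
    # Problem P4 (n = 2), as defined above.
    return 0.5 * np.sum(x**4 - 16 * x**2 + 5 * x)

lo, hi = -5.0, 5.0
alpha, alpha_min = 1.0, 1e-5
x = np.array([0.0, 0.0])                         # an arbitrary feasible start
while alpha >= alpha_min:
    improved = False
    for d in np.vstack([np.eye(2), -np.eye(2)]):  # the 2n poll directions
        trial = np.clip(x + alpha * d, lo, hi)    # stay inside the bounds
        if f(trial) < f(x):
            x, improved = trial, True
            break
    if not improved:
        alpha /= 2                                # unsuccessful poll: halve step
print(x, f(x))  # converges to one (local or global) minimizer of P4
```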
3.2 Experimental Conditions

The problems were evaluated in a computational system with the following relevant characteristics: CPU - 2.3/4.3 GHz 18-core Intel Xeon W-2195; RAM - 32 GB DDR4 2666 MHz ECC; OS - Linux Ubuntu 20.04.2 LTS; MATLAB - version R2020a; Java - OpenJDK 8; C compiler - gcc version 9.3.0 (-O2 option).

Since the MCSFilter algorithm has a stochastic component, 10 runs were performed for each problem. The execution time of the first run was ignored, and so the presented execution times are averages of the remaining 9 executions. All three implementations (MATLAB, Java and C) ran with the same parameters, namely the ones related to the stopping conditions. For the local search CSFilter, αmin = 10⁻⁵ was considered in condition (4), as in previous works. For the stopping condition of MCSFilter, ε = 10⁻² was considered in condition (5).

3.3 Analysis of the Results

Tables 1, 2 and 3 show the results obtained for each implementation. In all the tables, the first column (Prob) holds the name of each problem; the second column (min_avg) presents the average number of minimizers obtained in the 9 executions; the third column (nf_avg) gives the average number of objective function evaluations; the fourth column (t_avg) shows the average execution time (in seconds) of the 9 runs; and the last column (best f*) shows the best value achieved for the global minimum. One important feature visible in the results of all three implementations is that the global minimum is always achieved, in all problems.

Table 1. Results obtained using MATLAB.

Prob   min_avg    nf_avg         t_avg (s)   best f*
P1     3             5684.82       0.216        0.398
P2     6             8678.36       0.289       −1.032
P3     4             4265.55       0.178       −2.432
P4     4             4663.27       0.162      −78.332
P5     8            15877.09       0.480     −117.499
P6     16           51534.64       1.438     −156.665
P7     32          145749.64       3.898     −195.831
P8     64          391584.00      10.452     −234.997
P9     256        2646434.55      71.556     −313.329
P10    1023.63   15824590.64     551.614     −391.662
P11    4            39264.73       1.591        3.000
P12    64.36       239005.18       6.731     −186.731
P13    2.45          4653.18       0.160       −2.022
P14    2.27         52964.09       2.157        0.000
P15    2             2374.91       0.135       −0.352
For the MATLAB implementation, looking at the second column of Table 1, it is possible to state that all the minimizers are found for all the problems except P10, P12, P13 and P14. Nevertheless, it is important to note that P10 has 1024 minimizers and an average of 1023.63 were found, and that P12 has 760 minimizers and an average of 64.36 minimizers were discovered. P13 is the problem where MCSFilter exhibits the worst behaviour, which is shared by other algorithms. It is also worth remarking that problem P10 has 10 variables and that, taking into account the structure of the CSFilter algorithm, this leads to a large number of function evaluations. This, of course, impacts the execution time and, therefore, problem P10 has the highest execution time of all the problems, taking an average of 551.614 s per run.

Table 2. Results obtained using Java.

Prob   min_avg    nf_avg         t_avg (s)   best f*
P1     3             9412.64       0.003        0.3980
P2     6            13461.82       0.005       −1.0320
P3     4            10118.73       0.006       −2.4320
P4     4            10011.91       0.005      −78.3320
P5     8            32990.73       0.013     −117.4980
P6     16           98368.73       0.038     −156.6650
P7     32          274812.36       0.118     −195.8310
P8     64          730754.82       0.363     −234.9970
P9     256        4701470.36       2.868     −313.3290
P10    1024      27608805.73      20.304     −391.6620
P11    4            59438.18       0.009        3.0000
P12    46.45        99022.91       0.035     −186.7310
P13    2.36          6189.09       0.003       −2.0220
P14    2.54         62806.64       0.019        0.0000
P15    2             5439.18       0.002       −0.3520

Considering now the results produced by the Java implementation (Table 2), it is possible to observe a behaviour similar to that of the MATLAB version regarding the best value of the global minimum. This value is always achieved, in all runs, as is the known number of minimizers - except in problems P12, P13 and P14. It is noteworthy that, using Java, all the 1024 minimizers of P10 were obtained. If the fourth columns of Tables 1 and 2 are compared, it is possible to point out that in Java the algorithm clearly takes less time to obtain the same solutions.
Finally, Table 3 shows the results obtained by the new C-based implementation. It can be observed that the numerical behaviour of the algorithm is similar to that observed in the Java version: both implementations find, approximately, the same number of minimizers. However, comparing the execution times of the C version with those of the Java version (Table 2), the C version is clearly faster.

Table 3. Results obtained using C.

Prob   min_avg    nf_avg         t_avg (s)   best f*
P1     3             3434.64       0.001        0.3979
P2     6             7809.73       0.002       −1.0316
P3     4             5733.36       0.002       −2.4320
P4     4             6557.54       0.002      −78.3323
P5     8            17093.64       0.007     −117.4985
P6     16           51359.82       0.023     −156.6647
P7     32          162219.63       0.073     −195.8308
P8     64          403745.91       0.198     −234.9970
P9     255.18      2625482.73      1.600     −313.3293
P10    1020.81    15565608.53     14.724     −391.6617
P11    4             8723.34       0.004        3.0000
P12    38.81        43335.73       0.027     −186.7309
P13    2             2345.62       0.001       −2.0218
P14    2.36         24101.18       0.014        0.0000
P15    2             3334.53       0.002       −0.3524

In order to make the differences in the computational performance of the three MCSFilter implementations discernible, the execution times of all problems considered in this study are represented, side by side, in two different charts, according to the order of magnitude of the execution times in the MATLAB version (the slowest one). The average execution times for problems P1-P5, P13 and P15 are represented in Fig. 2; in MATLAB, all these problems executed in less than half a second. For the remaining problems, the execution times are represented in Fig. 3: these include problems P6-P8, P11, P12 and P14, which took between ≈1.5 s and ≈10.5 s to execute in MATLAB and whose execution times are represented against the primary (left) vertical axis, and also problems P9 and P10, whose execution times in MATLAB were ≈71 s and ≈551 s, respectively, and are represented against the secondary (right) axis.
Fig. 2. Average execution time (s) for problems P1-P5, P13, P15.

Fig. 3. Average execution time (s) for problems P6-P12, P14.

A quick overview of Fig. 2 and Fig. 3 is enough to conclude that the new C implementation is faster than the previous Java implementation, and much faster than the original MATLAB version of the MCSFilter algorithm (note that logarithmic scales were used in both figures, due to the different orders of magnitude of the various execution times; also, in Fig. 3, different textures were used for
problems P9 and P10, since their execution times are represented against the right vertical axis). It is also possible to observe that, in general, there is a direct proportionality between the execution times of the three code bases: when changing the optimization problem, if the execution time increases or decreases in one version, the same happens in the other versions.

To quantify the performance improvement of a version of the algorithm over a preceding implementation, one can calculate the speedups (accelerations) achieved. The speedup of version X of the algorithm against version Y of the same algorithm is simply given by S(X, Y) = T(Y)/T(X), where T(Y) and T(X) are the average execution times of the Y and X implementations. The relevant speedups in the context of this study are presented in Table 4.

Another perspective on the capabilities of the three MCSFilter implementations herein considered builds on the comparison of their efficiency (or effectiveness) in discovering all known optimizers of the optimization problems at stake. A simple metric that yields such efficiency is E(X, Y) = min_avg(X)/min_avg(Y). Table 5 shows the relative optima search efficiency for several pairs of MCSFilter implementations. For all but three problems, the MATLAB, Java and C implementations are able to find exactly the same number of optimizers (and so their relative efficiency is 1, i.e. 100%). For problems P12, P13 and P14, however, the search efficiency may vary considerably. Compared to the MATLAB version, both the Java and C versions are unable to find as many optimizers for problems P12 and P13; for problem P14, however, the Java version is able to find 12% more optimizers than the MATLAB version, and the C version still lags behind

Table 4. Speedups of the execution time.

Problem   S(Java, MATLAB)   S(C, Java)   S(C, MATLAB)
P1           71.8              2.7          197.4
P2           57.9              2.1          122.3
P3           29.6              3.2           94.5
P4           32.3              2.0           64.7
P5           36.9              1.7           64.5
P6           37.8              1.6           61.2
P7           33.0              1.6           53.2
P8           28.8              1.8           52.7
P9           24.9              1.8           44.7
P10          27.2              1.4           37.5
P11         176.8              2.4          417.9
P12         192.3              1.3          252.3
P13          53.3              2.4          126.3
P14         113.5              1.3          150.4
P15          67.6              1.2           84.4
Table 5. Optima search efficiency.

Problem   E(Java, MATLAB)   E(C, Java)   E(C, MATLAB)
P1           1                 1            1
P2           1                 1            1
P3           1                 1            1
P4           1                 1            1
P5           1                 1            1
P6           1                 1            1
P7           1                 1            1
P8           1                 1            1
P9           1                 1            1
P10          1                 1            1
P11          1                 1            1
P12          0.72              0.75         0.54
P13          0.96              0.85         0.81
P14          1.12              0.79         0.88
P15          1                 1            1

(finding only 88% of the optimizers found by MATLAB). Also, compared to the Java version, the C version currently shows an inferior search efficiency regarding problems P12, P13 and P14, something to be tackled in future work.

A final analysis is provided based on the data of Table 6. This table presents, for each problem Y and for each MCSFilter implementation X, the precision achieved by that implementation as P(X, Y) = |f*(Y) − best f*(X)|, that is, the modulus of the distance between the known global minimum of problem Y and the best value achieved for the global minimum by implementation X. The following conclusions may be derived: in all the problems, the best f* is close to the global minimum known in the literature, since the measure used is close to zero; moreover, there are six problems for which the new implementation (in C) overcomes the previous two; in five other problems, all the implementations obtained the same precision for the best f*.
Table 6. Global optima precision.

Problem   P(MATLAB)   P(Java)    P(C)
P1        1.1E−04     1.1E−04    1.0E−05
P2        4.0E−04     4.0E−04    0.0E+00
P3        1.0E−04     1.0E−04    1.0E−04
P4        3.0E−04     3.0E−04    0.0E+00
P5        7.0E−04     3.0E−04    2.0E−04
P6        0.0E+00     0.0E+00    3.0E−04
P7        8.0E−03     8.0E−03    8.2E−03
P8        0.0E+00     0.0E+00    0.0E+00
P9        3.0E−04     3.0E−04    6.0E−04
P10       4.0E−03     4.0E−03    3.7E−03
P11       0.0E+00     0.0E+00    0.0E+00
P12       0.0E+00     0.0E+00    1.0E−04
P13       2.2E−02     2.2E−02    2.2E−02
P14       0.0E+00     0.0E+00    0.0E+00
P15       3.9E−04     3.9E−04    1.4E−05

4 Conclusions and Future Work

The MCSFilter algorithm was used to solve bound constrained problems with different dimensions, from two to ten. The algorithm was originally implemented in MATLAB and the results initially obtained were considered very promising. A second implementation was later developed in Java, which increased the performance considerably. In this work, the MCSFilter algorithm was re-coded in the C language and a comparison was made between all three implementations, both performance-wise and regarding search efficiency and precision.

The evaluation results show that, for the set of problems considered, the novel C version, even though it is still a preliminary version, already surpasses the performance of the Java implementation. The search efficiency of the C version, however, must be improved. Regarding precision, the C version matched the previous versions in five problems and brought improvements in six others, out of a total of 15 problems.

Besides tackling the numerical efficiency and precision issues that still persist, future work will include testing the C code with other problems (including higher-dimension and harder problems) and refining the code in order to improve its performance. In particular, and most relevant for the problems that still take a considerable amount of execution time, parallelization strategies will be exploited as a way to further accelerate the execution of the MCSFilter algorithm.

Acknowledgements. This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the Project Scope: UIDB/05757/2020.
References

1. Abhishek, K., Leyffer, S., Linderoth, J.: FilMINT: an outer-approximation-based solver for convex mixed-integer nonlinear programs. INFORMS J. Comput. 22(4), 555–567 (2010)
2. Abramson, M., Audet, C., Chrissis, J., Walston, J.: Mesh adaptive direct search algorithms for mixed variable optimization. Optim. Lett. 3(1), 35–47 (2009). https://doi.org/10.1007/s11590-008-0089-2
3. Amador, A., Fernandes, F.P., Santos, L.O., Romanenko, A., Rocha, A.M.A.C.: Parameter estimation of the kinetic α-Pinene isomerization model using the MCSFilter algorithm. In: Gervasi, O., et al. (eds.) ICCSA 2018, Part II. LNCS, vol. 10961, pp. 624–636. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-95165-2_44
4. Amador, A., Fernandes, F.P., Santos, L.O., Romanenko, A.: Application of MCSFilter to estimate stiction control valve parameters. In: International Conference of Numerical Analysis and Applied Mathematics, AIP Conference Proceedings, vol. 1863, p. 270005 (2017)
5. Belotti, P., Kirches, C., Leyffer, S., Linderoth, J., Mahajan, A.: Mixed-integer nonlinear optimization. Acta Numer. 22, 1–131 (2013)
6. Bonami, P., et al.: An algorithmic framework for convex mixed integer nonlinear programs. Discrete Optim. 5(2), 186–204 (2008)
7. Bonami, P., Gonçalves, J.: Heuristics for convex mixed integer nonlinear programs. Comput. Optim. Appl. 51(2), 729–747 (2012)
8. D'Ambrosio, C., Lodi, A.: Mixed integer nonlinear programming tools: an updated practical overview. Ann. Oper. Res. 24, 301–320 (2013). https://doi.org/10.1007/s10479-012-1272-5
9. Fernandes, F.P.: Programação não linear inteira mista e não convexa sem derivadas. PhD thesis, University of Minho, Braga (2014)
10. Fernandes, F.P., Costa, M.F.P., Fernandes, E.M.G.P.: Multilocal programming: a derivative-free filter multistart algorithm. In: Murgante, B., et al. (eds.) ICCSA 2013, Part I. LNCS, vol. 7971, pp. 333–346. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39637-3_27
11. Floudas, C., et al.: Handbook of Test Problems in Local and Global Optimization. Kluwer Academic Publishers, Boston (1999)
12. Hendrix, E.M.T., Tóth, B.G.: Introduction to Nonlinear and Global Optimization. Springer, New York (2010). https://doi.org/10.1007/978-0-387-88670-1
13. Romanenko, A., Fernandes, F.P., Fernandes, N.C.P.: PID controllers tuning with MCSFilter. In: AIP Conference Proceedings, vol. 2116, p. 220003 (2019)
14. Yang, X.-S.: Optimization Techniques and Applications with Examples. Wiley, Hoboken (2018)
On the Performance of the ORTHOMADS Algorithm on Continuous and Mixed-Integer Optimization Problems

Marie-Ange Dahito, Laurent Genest, Alessandro Maddaloni, and José Neto

1 Stellantis, Route de Gisy, 78140 Vélizy-Villacoublay, France
{marieange.dahito,laurent.genest}@stellantis.com
2 Samovar, Telecom SudParis, Institut Polytechnique de Paris, 19 place Marguerite Perey, 91120 Palaiseau, France
{alessandro.maddaloni,jose.neto}@telecom-sudparis.eu

Abstract. ORTHOMADS is an instantiation of the Mesh Adaptive Direct Search (MADS) algorithm used in derivative-free and blackbox optimization. We investigate the performance of the variants of ORTHOMADS on the bbob and bbob-mixint, respectively continuous and mixed-integer, testbeds of the COmparing Continuous Optimizers (COCO) platform, and compare the considered best variants with heuristic and non-heuristic techniques. The results show a favourable performance of ORTHOMADS on the low-dimensional continuous problems used and advantages on the considered mixed-integer problems. Besides, a generally faster convergence is observed on all types of problems when the search phase of ORTHOMADS is enabled.

Keywords: Derivative-free optimization · Blackbox optimization · Benchmarking · Mesh Adaptive Direct Search · Mixed-integer blackbox

1 Introduction

Derivative-free optimization (DFO) and blackbox optimization (BBO) are branches of numerical optimization that have known fast growth in the past years, especially with the growing need to solve real-world application problems, but also with the development of methods to deal with unavailable or numerically costly derivatives. DFO focuses on optimization techniques that make no use of derivatives, while BBO deals with problems where the objective function is not analytically known, that is, it is a blackbox. A typical blackbox objective is the output of a computer simulation: for instance, at Stellantis, the crash or acoustic outputs computed by the finite element simulation of a vehicle. The problems addressed in this paper are of the form:
$$\min_{x \in X} f(x), \qquad (1)$$

where X is a bounded domain of either $\mathbb{R}^n$ or $\mathbb{R}^c \times \mathbb{Z}^i$, with c and i respectively the numbers of continuous and integer variables, n = c + i is the dimension of the problem, and f is a blackbox function. Heuristic and non-heuristic techniques can tackle this kind of problem.

Among the main approaches used in DFO are direct local search methods. The latter are iterative methods that, at each iteration, evaluate a set of points within a certain radius that can be increased if a better solution is found, or decreased if the incumbent remains the best point at the current iteration. The Mesh Adaptive Direct Search (MADS) [1,4,5] is a famous direct local search method used in DFO and BBO that is an extension of the Generalized Pattern Search (GPS) introduced in [28]. MADS evolves on a mesh by first doing a global exploration called the search phase; then, if a better solution than the current iterate is not found, a local poll is performed. The points evaluated in the poll are defined by a finite set of poll directions that is updated at each iteration. The algorithm is derived in several instantiations available in the Nonlinear Optimization with the MADS algorithm (NOMAD) software [7,19], and its performance is evaluated in several papers. As examples, a broad comparison of DFO optimizers is performed on 502 problems in [25], and NOMAD is used in [24] with a DACE surrogate and compared with other local and global surrogate-based approaches, in the context of constrained blackbox optimization, on an automotive optimization problem and twenty-two test problems.

Given the growing number of algorithms available to deal with BBO problems, the choice of the most adapted method for solving a specific problem remains complex. In order to help with this decision, some tools have been developed to compare the performance of algorithms. In particular, data profiles [20] are frequently used in DFO and BBO to benchmark algorithms: they show, given some precision or target value, the fraction of problems solved by an algorithm according to the number of function evaluations. There also exist suites of academic test problems: although the latter are treated as blackbox functions, they are analytically known, which is an advantage when trying to understand the behaviour of an algorithm. Industrial applications are also available, but they are rare.

Twenty-two implementations of derivative-free algorithms for solving box-constrained optimization problems are benchmarked in [25] and compared with each other according to different criteria. They use a set of 502 problems that are categorized according to their convexity (convex or nonconvex), smoothness (smooth or non-smooth) and dimensions, between 1 and 300. The algorithms tested include local-search methods, such as MADS through NOMAD version 3.3, and global-search methods, such as the NEW Unconstrained Optimization Algorithm (NEWUOA) [23], which uses trust regions, and the Covariance Matrix Adaptation - Evolution Strategy (CMA-ES) [16], which is an evolutionary algorithm.

Simulation optimization deals with problems where at least some of the objective or constraints come from stochastic simulations. A review of algorithms to solve simulation optimization is presented in [2], among which the NOMAD software. However, that paper does not compare them, due to a lack of standard comparison tools and large-enough testbeds in this optimization branch.
In [3], the MADS algorithm is used to optimize the treatment process of spent potliners in the production of aluminium. The problem is formalized as a 7-dimensional nonlinear blackbox problem with 4 inequality constraints. In particular, three strategies are compared, using absolute displacements, relative displacements, and the latter with a global Latin hypercube sampling search. They show that the use of scaling is particularly beneficial on the considered chemical application.

The instantiation ORTHOMADS is introduced in [1] and consists in using orthogonal directions in the poll step of MADS. It is compared to the initial LTMADS, where the poll directions are generated from a random lower triangular matrix, and to the GPS algorithm on 45 problems from the literature. The authors show that MADS outperforms GPS and that the instantiation ORTHOMADS competes with LTMADS, with the advantage that its poll directions better cover the variable space.

The ORTHOMADS algorithm, which is the default MADS instantiation used in NOMAD, presents variants in the poll directions of the method. To our knowledge, the performance of these different variants has not been discussed in the literature. The purpose of this paper is to explore this aspect by performing experiments with the ORTHOMADS variants. This work is part of a project conducted with the automotive group Stellantis to develop new approaches for solving their blackbox optimization problems. Our contributions are, first, the evaluations of the ORTHOMADS variants on continuous and mixed-integer optimization problems. Besides, the contribution of the search phase is studied: a general deterioration of the performance is observed when the search is turned off, although the effect decreases with increasing dimension. Two of the best variants of ORTHOMADS are identified on each of the used testbeds and their performance is compared with other algorithms, including heuristic and non-heuristic techniques. Our experiments exhibit particular variants of ORTHOMADS performing best depending on problem features. Plots for analyses are available at the following link: https://github.com/DahitoMA/ResultsOrthoMADS.

The paper is organized as follows. Section 2 gives an overview of the MADS algorithm and its ORTHOMADS variants. In Sect. 3, the variants of ORTHOMADS are evaluated on the bbob and bbob-mixint suites, which consist respectively of continuous and mixed-integer functions. Then, two of the best variants of ORTHOMADS are compared with other algorithms in Sect. 4. Finally, Sect. 5 discusses the results of the paper.

2 MADS and the Variants of ORTHOMADS

This section gives an overview of the MADS algorithm and explains the differences among the ORTHOMADS variants.

2.1 The MADS Algorithm

MADS is an iterative direct local search method used for DFO and BBO problems. The method relies on a mesh $M_k$, updated at each iteration and determined
MADS is an iterative direct local search method for DFO and BBO problems. The method relies on a mesh $M_k$, updated at each iteration and determined by the current iterate $x_k$, a mesh size parameter $\delta_k > 0$ and a matrix $D$ whose columns consist of $p$ positive spanning directions. The mesh is defined as follows:

$$M_k := \{x_k + \delta_k D y : y \in \mathbb{N}^p\}, \qquad (2)$$

where the columns of $D$ form a positive spanning set $\{D_1, D_2, \ldots, D_p\}$ and $\mathbb{N}$ stands for the natural numbers.

The algorithm proceeds in two phases at each iteration: the search and the poll. The search phase is optional and similar to a design of experiments: a finite set $S_k$ of points, generally stemming from a surrogate model prediction and a Nelder-Mead (N-M) search [21], is evaluated anywhere on the mesh. If the search fails to find a better point, a poll is performed. During the poll phase, a finite set of points is evaluated on the mesh in the neighbourhood of the incumbent. This neighbourhood is called the frame $F_k$ and has a radius $\Delta_k > 0$, called the poll size parameter. The frame is defined as follows:

$$F_k := \{x \in M_k : \|x - x_k\|_\infty \le \Delta_k b\}, \qquad (3)$$

where $b = \max\{\|d'\|_\infty : d' \in D\}$ and the poll uses a finite set $D_{k,\Delta_k}$ of poll directions, chosen so that their union over the iterations grows dense on the unit sphere.

The two size parameters satisfy $\delta_k \le \Delta_k$ and evolve after each iteration: if a better solution is found they are increased, and otherwise they are decreased. Since the mesh size decreases more drastically than the poll size after an unsuccessful iteration, the set of candidate points for the poll becomes richer as unsuccessful iterations accumulate. Usually, $\delta_k = \min\{\Delta_k, \Delta_k^2\}$. A description of MADS, adapted from [6], is given in Algorithm 1.

Algorithm 1: Mesh Adaptive Direct Search (MADS)

    Initialization: k = 0, x_0 ∈ R^n, D ∈ R^{n×p}, Δ_0 > 0, τ ∈ (0, 1) ∩ Q, ε_stop > 0
    1. Update: δ_k = min{Δ_k, Δ_k^2}
    2. Search: if f(x) < f(x_k) for some x ∈ S_k,
         then x_{k+1} ← x, Δ_{k+1} ← τ^{-1} Δ_k and go to 4; else go to 3
    3. Poll: select D_{k,Δ_k} such that P_k := {x_k + δ_k d : d ∈ D_{k,Δ_k}} ⊂ F_k;
         if f(x) < f(x_k) for some x ∈ P_k,
         then x_{k+1} ← x, Δ_{k+1} ← τ^{-1} Δ_k and go to 4;
         else x_{k+1} ← x_k and Δ_{k+1} ← τ Δ_k
    4. Termination: if Δ_{k+1} ≥ ε_stop, then k ← k + 1 and go to 1; else stop
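As a reading aid for Algorithm 1, the following is a deliberately minimal Python sketch of the poll-and-update loop, assuming a fixed ±coordinate poll set and no search phase; it is therefore closer to a simple pattern search than to full MADS, whose poll directions become dense on the unit sphere. All names and default values are illustrative.

    import numpy as np

    def poll_loop(f, x0, Delta0=1.0, tau=0.5, eps_stop=1e-6, max_evals=2000):
        """Poll-only loop in the spirit of Algorithm 1 (search phase omitted).
        Polls along +/- coordinate directions, a fixed positive spanning set,
        rather than the dense direction sets of true MADS."""
        n = len(x0)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        Delta, evals = Delta0, 1
        D = np.vstack([np.eye(n), -np.eye(n)])     # 2n fixed poll directions
        while Delta >= eps_stop and evals < max_evals:
            delta = min(Delta, Delta ** 2)          # mesh size from poll size
            success = False
            for d in D:                             # poll step
                trial = x + delta * d
                f_trial = f(trial)
                evals += 1
                if f_trial < fx:
                    x, fx, success = trial, f_trial, True
                    break
            Delta = Delta / tau if success else tau * Delta  # expand or shrink
        return x, fx

    best_x, best_f = poll_loop(lambda x: float(np.sum(x ** 2)), [2.0, -1.5, 0.5])

The expansion on success and contraction on failure mirror steps 2-4 of Algorithm 1; what the sketch leaves out is precisely what distinguishes MADS, namely the search step and the changing poll direction sets.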
2.2 ORTHOMADS Variants

MADS has two main instantiations, ORTHOMADS and LTMADS, the latter being the first developed. Both variants are implemented in the NOMAD software, but since ORTHOMADS is to be preferred for its coverage property in the variable space, it was used for the experiments of this paper, with NOMAD version 3.9.1.

The NOMAD implementation of ORTHOMADS provides six variants of the algorithm, differing in the number of directions used in the poll or in the way the last poll direction is computed. They are listed below.

ORTHO N + 1 NEG computes n + 1 directions, among which n are orthogonal and the (n + 1)-th is the negative of the sum of the first n.
ORTHO N + 1 UNI computes n + 1 directions, among which n are orthogonal and the (n + 1)-th is generated from a uniform distribution.
ORTHO N + 1 QUAD computes n + 1 directions, among which n are orthogonal and the (n + 1)-th is obtained from the minimization of a local quadratic model of the objective.
ORTHO 2N computes 2n directions: each direction is orthogonal to 2n − 2 of the others and collinear with the remaining one.
ORTHO 1 uses only one direction in the poll.
ORTHO 2 uses two opposite directions in the poll.

In the plots, these variants are denoted Neg, Uni, Quad, 2N, 1 and 2, respectively.

3 Test of the Variants of ORTHOMADS

In this section, we try to identify potentially better direction types of ORTHOMADS and investigate the contribution of the search phase.

3.1 The COCO Platform and the Used Testbeds

The COmparing Continuous Optimizers (COCO) platform [17] is a benchmarking framework for blackbox optimization. It provides several suites of standard test problems, each available in several variants, also called instances, obtained from transformations in variable and objective space that make the functions less regular. In particular, the bbob testbed [13] provides 24 continuous problems for blackbox optimization, each available in 15 instances and in dimensions 2, 3, 5, 10, 20 and 40. The problems are categorized into five subgroups: separable functions, functions with low or moderate conditioning, ill-conditioned functions, multi-modal functions with global structure, and multi-modal weakly structured functions. All problems are known to have their global optima in $[-5, 5]^n$, where $n$ is the dimension of the problem.

The mixed-integer suite bbob-mixint [29] derives from the bbob and bbob-largescale [30] problems by imposing integer constraints on some variables. It consists of the 24 functions of bbob, available in 15 instances and in dimensions 5, 10, 20, 40, 80 and 160.
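To illustrate where 2n directions with the ORTHO 2N property can come from, the sketch below builds a poll basis from a Householder transformation, the construction underlying ORTHOMADS [1]; here the seed vector is drawn at random rather than from the Halton sequence used by the actual algorithm, and the rounding and scaling to the mesh are omitted. A minimal sketch under those assumptions:

    import numpy as np

    def ortho_2n_directions(n, rng=None):
        """2n poll directions from a Householder matrix H = I - 2 v v^T.
        The columns of H are mutually orthogonal, so the pairs {h, -h}
        satisfy the ORTHO 2N property: every direction is orthogonal to
        2n - 2 others and collinear with the remaining one."""
        rng = rng or np.random.default_rng()
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)                    # unit seed vector
        H = np.eye(n) - 2.0 * np.outer(v, v)      # orthogonal Householder matrix
        return np.vstack([H.T, -H.T])             # rows are the 2n directions

    dirs = ortho_2n_directions(5)
    # Check: the Gram matrix of the first n directions is the identity.
    assert np.allclose(dirs[:5] @ dirs[:5].T, np.eye(5))

The n + 1 variants listed above keep the first n orthogonal directions from such a basis and differ only in how the single extra direction is chosen.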
COCO also provides various tools for algorithm comparison, notably the Empirical Cumulative Distribution Function (ECDF) plots (or data profiles) used in this paper. They show empirical runtimes, computed as the number of function evaluations needed to reach given function target values, divided by the dimension. A function target value is defined as $f_t = f^* + \Delta f_t$, where $f^*$ is the minimum value of a function $f$ and $\Delta f_t$ is a target precision. For the bbob and bbob-mixint testbeds, the target precisions are 51 values between $10^{-8}$ and $10^2$. Thus, if a method reaches 1 on the ordinate axis of an ECDF plot, 100% of the function target values have been reached, including the smallest one, $f^* + 10^{-8}$. A cross on an ECDF curve indicates when the maximal budget of function evaluations is reached; after the cross, COCO estimates the runtimes, a procedure called simulated restarts.

For bbob, an artificial solver called best 2009 is present on the plots and used as a reference solver. Its data comes from the BBOB-2009 workshop comparing 31 solvers (https://coco.gforge.inria.fr/doku.php?id=bbob-2009-results). The statistical significance of the results is evaluated in COCO using the rank-sum test.

3.2 Parameter Setting

To test the performance of the different variants of ORTHOMADS, the bbob and bbob-mixint suites of COCO were used, restricted to the problems of dimension at most 20. This limit on the dimensions has two main reasons: the first is the computational cost of the experiments and, from the perspective of solving real-world problems, 20 is already a high dimension in this expensive blackbox context. Only the first 5 instances of each function were used, that is, a total of 600 problems from bbob and 360 from bbob-mixint. A maximal budget of $2 \times 10^3 \times n$ function evaluations was set, with $n$ the dimension of the considered problem.

To assess the contribution of the search phase, the experiments on the variants were divided into two subgroups: the first using the default search of ORTHOMADS, and the second with the search phase disabled. The latter is obtained by setting the four NOMAD parameters NM SEARCH, VNS SEARCH, SPECULATIVE SEARCH and MODEL SEARCH to the value no. In the plots, the label NoSrch is used when the search is turned off. The search notably includes the use of a quadratic model and of the N-M method. The minimal mesh size was set to $10^{-11}$. Experiments were run with restarts allowed for unsolved problems when the evaluation budget had not been reached, which may happen due to internal stopping criteria of the solvers. The initial points are those suggested by COCO through the method initial solution proposal().
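For readers who wish to reproduce this kind of setup, the following is a minimal sketch of a COCO benchmarking loop using the cocoex Python module. The suite filtering strings follow COCO's documented format, the result-folder name is arbitrary, run_solver is a hypothetical placeholder for the actual solver call, and initial_solution_proposal is assumed to be available as in COCO's example experiment scripts; the NOMAD search parameters quoted above would be passed to NOMAD itself, not to COCO.

    import cocoex  # COCO's experimentation module

    def run_solver(problem, x0, budget):
        # Hypothetical placeholder for a real solver call (e.g. NOMAD, with
        # NM_SEARCH, VNS_SEARCH, SPECULATIVE_SEARCH and MODEL_SEARCH set to
        # "no" for the NoSrch setting); here it only evaluates the start point.
        problem(x0)

    # First 5 instances and dimensions up to 20, as in the paper's setting.
    suite = cocoex.Suite("bbob", "instances: 1-5", "dimensions: 2,3,5,10,20")
    observer = cocoex.Observer("bbob", "result_folder: orthomads-variants")

    for problem in suite:
        problem.observe_with(observer)              # log runtimes for ECDF plots
        budget = 2000 * problem.dimension           # 2 x 10^3 x n evaluations
        x0 = problem.initial_solution_proposal()    # COCO-suggested start point
        run_solver(problem, x0, budget)

The observer writes the logged runtimes to disk in the format expected by COCO's post-processing, which produces the ECDF plots discussed in the next section.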
3.3 Results

Continuous Problems. As stated previously, the contribution of the search phase was studied. The results aggregated over all functions in dimensions 5, 10 and 20 of the bbob suite are depicted in Fig. 1. They show that enabling the search step in NOMAD generally leads to equivalent or higher performance of the variants, and the improvement can be substantial. Moreover, using one or two poll directions, with or without search, is rarely competitive with the other variants. In particular, 1 NoSrch is often the worst variant or among the worst, except on Discus, an ill-conditioned quadratic function, where it competes with the variants that do not use the search. As mentioned in Sect. 1, the plots depicting the results described in the paper are available online.

Looking at the results aggregated over all functions for ORTHO 2N, ORTHO N + 1 NEG, ORTHO N + 1 QUAD and ORTHO N + 1 UNI, the search increases the success rate from nearly 70%, 55% and 40% up to 90%, 80% and 65% in dimensions 2, 3 and 5 respectively, as shown in Fig. 1a for dimension 5. From dimension 10 on, the advantage of the search decreases, and the performance of ORTHO N + 1 UNI visibly falls behind the other three variants mentioned above, with or without the search, as illustrated in Figs. 1b and 1c.

Focusing on some families of functions, Neg NoSrch seems slightly less affected than the other NoSrch variants by the increase of the dimension. On ill-conditioned problems, the variants using the search are more sensitive to the increase of the dimension. Considering multi-modal functions with adequate global structure, 2N NoSrch solves 15% more problems than the other NoSrch variants in 2D. In this dimension, the variants using the search have a better success rate than best 2009 up to a budget of 200 function evaluations. From 10D on, all curves are rather flat: all ORTHOMADS variants tend to a local optimum. With increasing dimension, Neg is competitive with or better than the others on multi-modal problems without global structure, followed by 2N. In particular, in dimension 20 both variants are competitive and outperform the remaining search-enabled variants on the Gallagher's Gaussian 101-me peaks function, and Neg outperforms them by a gap of more than 20% in success rate on the Gallagher's Gaussian 21-hi peaks function, which is also ill-conditioned.

Since Neg and 2N are often among the best variants on the considered problems and have an advantage on some multi-modal weakly structured functions, they are chosen for the comparison with other solvers.

Mixed-Integer Problems. The experiments performed on the mixed-integer problems also show a similar or improved performance of the ORTHOMADS variants when the search step is enabled in NOMAD, as illustrated in Fig. 2 for dimensions 5, 10 and 20. Looking at Fig. 2a for instance, within the given budget of $2 \times 10^3 \times n$ evaluations, the variant denoted 2 solves 75% of the problems in dimension 5, against 42% for 2 NoSrch. This is not always the case, however: using only the poll directions is sometimes favourable. This happens notably on the Schwefel function in dimension 20, where Neg NoSrch solves 43% of the problems, the highest success rate when the search and non-search settings are compared together.
Fig. 1. ECDF plots: the variants of ORTHOMADS with and without the search step on the bbob problems. Results aggregated over all functions in dimensions 5 (a), 10 (b) and 20 (c).

When the search is disabled, ORTHO 2N seems preferable in small dimension, namely here in 5D, as presented in Fig. 2a. In this dimension, it is sometimes the only variant that solves all the instances of a function within the given budget: this is the case for the step-ellipsoidal function, the two Rosenbrock functions (original and rotated), the Schaffer functions, and the Schwefel function. It also solves all the separable functions in 5D and can therefore handle the different types of problems. Although the difference is less noticeable with the search step enabled, this variant is still a good choice, especially on multi-modal problems with adequate global structure.

On the whole, looking at Fig. 2, ORTHO 1 and ORTHO 2 solve fewer problems than the other variants, and the performance gap with the other direction types increases with the dimension, whether the search phase is used or not. Although the search helps to solve some functions in low dimension, such as the sphere or linear slope functions in 5D, both variants perform poorly in dimension 20 on second-order separable functions, even if the search enables the solution of linear slope, which is a linear function. Between these two variants, using 2 poll directions also seems better than only one, especially in dimension 10, where ORTHO 2 solves more than 23% and 40% of the problems respectively without and with the search, against 16% and 31% for ORTHO 1, as presented in Fig. 2b.

Among the four remaining variants, ORTHO N + 1 UNI reaches equivalent or fewer targets than the others, whether in the setting where the search is available or when only the poll directions are used, as depicted in Fig. 2. In particular, in dimension 5, the four variants using at least n + 1 poll directions solve more than 85% of the separable problems, with or without search. But when the dimension increases, ORTHO N + 1 UNI has a disadvantage on the Rastrigin functions, where the use of the search does not noticeably help the convergence of the algorithm.

Focusing on the different function types, none of the variants ORTHO 2N, ORTHO N + 1 NEG and ORTHO N + 1 QUAD seems to particularly outperform the others in dimensions 10 and 20. A higher success rate is however noticeable on multi-modal weakly structured problems, with the search available, for ORTHO N + 1 NEG in comparison with ORTHO N + 1 QUAD, and for the latter in comparison with ORTHO 2N.
Besides, Neg reaches more targets on problems with low or moderate conditioning. For these reasons, ORTHO N + 1 NEG was chosen for the comparison with other solvers. Moreover, the slight advantage of ORTHO N + 1 QUAD over ORTHO 2N mentioned above, together with its equivalent or better performance on separable and ill-conditioned functions compared with the latter variant, makes it a good second choice to represent ORTHOMADS.

Fig. 2. ECDF plots: the variants of ORTHOMADS with and without the search step on the bbob-mixint problems. Results aggregated over all functions in dimensions 5 (a), 10 (b) and 20 (c).

4 Comparison of ORTHOMADS with Other Solvers

The previous experiments showed the advantage of using the search step in ORTHOMADS to speed up convergence. They also revealed the effectiveness of some variants, which are used here for comparisons with other algorithms on the continuous and mixed-integer suites.

4.1 Compared Algorithms

Apart from ORTHOMADS, the other algorithms used for comparison on bbob are, first, three deterministic algorithms: the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [22], the quadratic model-based NEWUOA, and the adaptive N-M [14], which is a simplicial search. Stochastic methods are also used, among which a Random Search (RS) algorithm [10] and three population-based algorithms: a surrogate-assisted CMA-ES, Differential Evolution (DE) [27] and Particle Swarm Optimization (PSO) [11,18]. To perform the algorithm comparisons on bbob-mixint, data from four stochastic methods were collected: RS, the mixed-integer variant of CMA-ES, DE, and the Tree-structured Parzen Estimator (TPE) [8], a stochastic model-based technique.

BFGS is an iterative quasi-Newton line-search method that uses approximations of the Hessian matrix of the objective. At iteration $k$, the search direction $p_k$ solves the linear system $B_k p_k = -\nabla f(x_k)$, where $x_k$ is the iterate, $f$ the objective function and $B_k \approx \nabla^2 f(x_k)$. The matrix $B_k$ is then updated according to a formula. In the context of BBO, the derivatives are approximated with finite differences.
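Baselines of this kind are readily available in SciPy; the snippet below is a hedged illustration, not the exact setup of the cited experiments, showing BFGS with finite-difference gradients (SciPy's default when no analytical gradient is supplied) and the adaptive Nelder-Mead of [14] applied to a stand-in blackbox function. The objective and the starting point are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def blackbox(x):
        """Stand-in blackbox objective (a Rosenbrock-like function)."""
        x = np.asarray(x)
        return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

    x0 = np.full(5, 3.0)

    # BFGS: the gradient is approximated by finite differences, since no
    # jac argument is provided, matching the BBO setting described above.
    res_bfgs = minimize(blackbox, x0, method="BFGS")

    # Adaptive Nelder-Mead: simplex parameters adapted to the dimension.
    res_nm = minimize(blackbox, x0, method="Nelder-Mead",
                      options={"adaptive": True, "maxfev": 2000 * len(x0)})

    print(res_bfgs.fun, res_nm.fun)

Note that each finite-difference gradient costs on the order of n extra function evaluations per iteration, which is why such methods are counted against the same evaluation budget as the direct search methods in the comparisons below.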
NEWUOA is Powell's model-based algorithm for DFO. It is a trust-region method that uses sequential quadratic interpolation models to solve unconstrained derivative-free problems.

The N-M method is a heuristic DFO method based on simplices. It begins with a non-degenerate simplex. The algorithm identifies the worst point among the vertices of the simplex and tries to replace it by reflection, expansion or contraction. If none of these geometric transformations of the worst point yields a better point, a contraction preserving the best point is performed. The adaptive N-M method uses the N-M technique with parameters adapted to the dimension, which is notably useful in high dimensions.

RS is a stochastic iterative method that performs a random selection of candidates: at each iteration, a random point is sampled and the better of this trial point and the incumbent is kept.

CMA-ES is a state-of-the-art evolutionary algorithm used in DFO. Let $N(m, C)$ denote a normal distribution with mean $m$ and covariance matrix $C$; it can be represented by the ellipsoid $x^\top C^{-1} x = 1$. The principal axes of the ellipsoid are the eigenvectors of $C$, and their lengths correspond to the square roots of the associated eigenvalues. CMA-ES iteratively samples its populations from multivariate normal distributions. The method updates the covariance matrices so as to learn a quadratic model of the objective.

DE is a meta-heuristic that creates a trial vector by combining the incumbent with randomly chosen individuals from a population. The trial vector is then filled, parameter by parameter, with values from either itself or the incumbent. Finally, the better of the incumbent and the created vector is kept.

PSO is an archive-based evolutionary algorithm in which candidate solutions are called particles and the population is a swarm. The particles evolve according to the global best solution encountered, but also according to their own local best points.

TPE is an iterative model-based method for hyperparameter optimization. It sequentially builds a probabilistic model from already evaluated hyperparameter sets in order to suggest a new set of hyperparameters to evaluate on a score function that is to be minimized.

4.2 Parameter Setting

To compare the considered best variants of ORTHOMADS with the other methods, all 15 instances of each function were used and the maximal function evaluation budget was increased to $10^5 \times n$, with $n$ the dimension. For the bbob problems, the data used for BFGS, DE and the adaptive N-M method comes from the experiments of [31]. CMA-ES was tested in [15], the data of NEWUOA comes from [26], that of PSO from [12], and the RS results come from [9]. The comparison data of CMA-ES, DE, RS and TPE used on