05 History of CV: A Machine Learning (Theory) Perspective on Computer Vision (zukun)
This document provides an overview of machine learning algorithms used in computer vision from the perspective of a machine learning theorist. It discusses how the theorist got involved in a computer vision project in 2002 and summarizes key algorithms at that time like boosting, support vector machines, and their developments. It also provides historical context and comparisons of algorithms like perceptron and Winnow. The document uses examples to explain concepts like kernels and the kernel trick in support vector machines.
Learning to Discover Monte Carlo Algorithm on Spin Ice Manifold (Kai-Wen Zhao)
The global-update Monte Carlo sampler can be discovered naturally by a trained machine using the policy gradient method in a topologically constrained environment.
This document provides an introduction to concepts and applications of Global Navigation Satellite Systems (GNSS). It outlines topics to be covered, including basic concepts, collecting geospatial data, introducing GNSS, applications and software, resources, and acknowledgements. The introduction discusses the long history of human navigation from ancient to modern times. It will cover mathematical concepts required to understand GNSS such as Taylor series expansion, Jacobian, and least squares adjustment. Older surveying techniques for collecting geospatial data involved chains, tapes, sextants, theodolites and autolevels, while modern methods include GNSS.
This document discusses Bayesian dark knowledge and matrix factorization using stochastic gradient MCMC methods. It applies various SG-MCMC methods like SGLD, SG-HMC, and SG-NHT to Bayesian dark knowledge. It also combines GANs with Bayesian dark knowledge to generate unlabeled data. Finally, it applies SG-MCMC and neural networks to probabilistic matrix factorization. Results on MNIST and movie recommendation datasets are presented.
The document discusses inertial algorithms for minimizing convex functions. It begins by introducing the gradient method and accelerated/inertial gradient method. It then reviews several classic approaches for analyzing the convergence of inertial algorithms, such as algebraic proofs, estimate sequences, and viewing the algorithm as a discretization of an ordinary differential equation (ODE). More recent approaches discussed include analyzing inertial algorithms as a combination of primal and mirror descent steps or using Bregman estimate sequences. The document raises questions about interpreting the difference between inertial algorithms and the heavy ball method from an ODE perspective. It also discusses a new direction of analyzing inertial algorithms by viewing them as numerical integration schemes approximating the solution to an ODE.
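To make the contrast between the plain gradient method and an inertial method concrete, here is a minimal sketch (illustrative code, not from the document) comparing the two on an ill-conditioned quadratic; the step size, momentum coefficient, and test function are arbitrary choices:

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    # Plain gradient method: x_{k+1} = x_k - step * grad(x_k)
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def inertial_gradient(grad, x0, step, beta, iters):
    # Heavy-ball-style inertial step:
    # x_{k+1} = x_k - step * grad(x_k) + beta * (x_k - x_{k-1})
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - step * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Minimize f(x) = 0.5 * x^T A x with condition number 100; minimizer is 0.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
x0 = np.array([1.0, 1.0])
plain = gradient_descent(grad, x0, step=0.018, iters=200)
inertial = inertial_gradient(grad, x0, step=0.018, beta=0.9, iters=200)
```

On this problem the inertial iterate ends up far closer to the minimizer than the plain one, which is the speedup the ODE and estimate-sequence analyses above try to explain.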
Higher-order factorization machines (HOFMs) provide a framework for modeling feature interactions of arbitrary order in recommendation systems and link prediction tasks. The key ideas are:
(1) HOFMs express the prediction function as a weighted sum of ANOVA kernels of varying orders, capturing interactions between features.
(2) Computing the ANOVA kernel and its gradient can be done in linear time using dynamic programming, enabling efficient learning and prediction.
(3) Experiments on link prediction tasks show HOFMs can effectively model higher-order interactions to improve predictions compared to lower-order models like FM.
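The linear-time dynamic program in point (2) can be sketched as follows (an illustrative implementation, not the authors' code): the order-m ANOVA kernel A^m(p, x) = sum over i1 < ... < im of (p_i1 x_i1)...(p_im x_im) is built up one feature at a time in O(dm) operations.

```python
def anova_kernel(p, x, m):
    # a[t] holds the order-t ANOVA kernel over the features seen so far:
    #   A^t(p, x) = sum_{i1<...<it} (p_i1 x_i1) * ... * (p_it x_it)
    a = [1.0] + [0.0] * m              # a[0] = 1 (empty product)
    for j in range(len(x)):
        # sweep t downwards so each feature enters each term at most once
        for t in range(min(j + 1, m), 0, -1):
            a[t] += p[j] * x[j] * a[t - 1]
    return a[m]

# Order-2 kernel with unit weights is the sum over feature pairs:
# for x = (1, 2, 3): 1*2 + 1*3 + 2*3 = 11
print(anova_kernel([1.0, 1.0, 1.0], [1.0, 2.0, 3.0], 2))  # 11.0
```

The same table also yields all lower-order kernels for free, which is what lets HOFMs sum ANOVA kernels of several orders at no extra asymptotic cost.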
Pattern learning and recognition on statistical manifolds: An information-geo... (Frank Nielsen)
This document provides an overview of Frank Nielsen's talk on pattern learning and recognition using information geometry and statistical manifolds. The talk focuses on departing from vector space representations and dealing with (dis)similarities that do not have Euclidean or metric properties. This poses new theoretical and computational challenges for pattern recognition. The talk describes using exponential family mixture models defined on dually flat statistical manifolds induced by convex functions. On these manifolds, dual coordinate systems and dual affine geodesics allow for computing-friendly representations of divergences and similarities between probabilistic patterns. The techniques aim to achieve statistical invariance and enable algorithmic approaches to problems like Gaussian mixture modeling, shape retrieval, and diffusion tensor imaging analysis.
Projectors and Projection Onto Subspaces (Isaac Yowetu)
The document discusses projections onto subspaces. It provides examples of projecting vectors onto lines and subspaces. For projecting a vector v onto the line spanned by a vector u, it shows that the projection matrix is P = uu^T/(u^T u). It also shows how to project vectors onto subspaces defined by matrices and how to decompose a vector into components within and orthogonal to a subspace.
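As a quick sanity check of these formulas, a short NumPy sketch (illustrative, not from the document): the projection matrix is idempotent, and the error vector is orthogonal to the line.

```python
import numpy as np

# Projection of v onto the line spanned by u: P = u u^T / (u^T u)
u = np.array([[1.0], [2.0]])
P = (u @ u.T) / (u.T @ u)
v = np.array([[3.0], [4.0]])
p = P @ v                      # component of v along u
e = v - p                      # component orthogonal to u

# Projection onto the column space of a full-rank matrix A:
# P_A = A (A^T A)^{-1} A^T
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
P_A = A @ np.linalg.inv(A.T @ A) @ A.T
```

Both matrices satisfy P @ P = P (projecting twice changes nothing), and u.T @ e = 0 confirms the decomposition into parallel and orthogonal parts.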
This document summarizes different approaches for structure learning in graph neural networks. It discusses three main classes of methods: 1) metric-based learning which learns a similarity matrix between nodes, 2) probabilistic models which learn the parameters of a distribution over graphs, and 3) direct optimization which directly optimizes the graph adjacency matrix. The document provides examples of methods within each class and notes challenges such as the simplicity of probabilistic models and computational difficulties of direct optimization.
Subgradient Methods for Huge-Scale Optimization Problems - Yurii Nesterov, Cat... (Yandex)
We consider a new class of huge-scale problems: problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations, whose total cost depends only logarithmically on the dimension. This technique is based on a recursive update of the results of matrix/vector products and the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions.
We show that the updating technique can be efficiently coupled with the simplest subgradient methods. Similar results can be obtained for a new non-smooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments.
This document discusses various numerical methods for finding the roots of functions, including graphical methods, Newton's method, secant method, and bisection method. It provides examples of applying these methods to find the roots of example functions. Newton's and secant methods are shown to converge rapidly, finding roots within 6 iterations for one example problem. The bisection method is also demonstrated on the function x^2 - 4x + 3, finding roots that agree with the analytical solutions. The document provides information on roots, functions, and iterative algorithms for root-finding.
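A compact sketch of two of these methods (illustrative code, not from the document), applied to the example function x^2 - 4x + 3, whose analytical roots are 1 and 3:

```python
def bisection(f, a, b, tol=1e-10):
    # Requires f(a) and f(b) to have opposite signs (a bracketing interval).
    while b - a > tol:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid                    # root lies in [a, mid]
        else:
            a = mid                    # root lies in [mid, b]
    return (a + b) / 2

def newton(f, df, x0, iters=20):
    # Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)
    x = x0
    for _ in range(iters):
        x = x - f(x) / df(x)
    return x

f = lambda x: x**2 - 4*x + 3           # roots at x = 1 and x = 3
df = lambda x: 2*x - 4
r1 = bisection(f, 0.0, 2.0)            # bracket around the root at 1
r2 = newton(f, df, x0=5.0)             # converges to the root at 3
```

Bisection halves the bracket each step (linear convergence), while Newton's method roughly doubles the number of correct digits per iteration near a simple root, matching the rapid convergence reported above.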
The document provides an overview of backpropagation for neural networks. It begins by defining the loss function and discussing gradient descent. It then walks through the computational graph of a simple perceptron and derives the gradients for each operation using the chain rule. This allows computing the gradient of the loss with respect to the weights and biases, which are then updated using gradient descent. It discusses computing gradients for different activation functions like sigmoid, ReLU, and max pooling. Finally, it notes that backpropagation allows estimating parameters across stacked neural network layers.
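The chain-rule bookkeeping described above can be made concrete with a single sigmoid neuron and squared loss (a minimal sketch, not the document's code); each backward line multiplies by one local derivative of the computational graph:

```python
import math

def forward_backward(w, b, x, y):
    # Forward pass: z = w*x + b, a = sigmoid(z), L = 0.5*(a - y)^2
    z = w * x + b
    a = 1.0 / (1.0 + math.exp(-z))
    loss = 0.5 * (a - y) ** 2
    # Backward pass (chain rule), one local derivative per node:
    dL_da = a - y
    da_dz = a * (1.0 - a)              # derivative of the sigmoid
    dL_dz = dL_da * da_dz
    dL_dw = dL_dz * x                  # dz/dw = x
    dL_db = dL_dz                      # dz/db = 1
    return loss, dL_dw, dL_db

# One gradient-descent update on the weight and bias
w, b, lr = 0.5, 0.0, 0.1
loss, gw, gb = forward_backward(w, b, x=1.0, y=1.0)
w, b = w - lr * gw, b - lr * gb
```

A useful habit is to check such hand-derived gradients against central finite differences; for smooth activations the two agree to many digits.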
Animashree Anandkumar, Electrical Engineering and CS Dept, UC Irvine at MLcon... (MLconf)
Anima Anandkumar has been a faculty member in the EECS Department at UC Irvine since August 2010. Her research interests are in large-scale machine learning and high-dimensional statistics. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a visiting faculty member at Microsoft Research New England in 2012 and a postdoctoral researcher in the Stochastic Systems Group at MIT from 2009 to 2010. She is the recipient of the Microsoft Faculty Fellowship, the ARO Young Investigator Award, the NSF CAREER Award, and the IBM Fran Allen PhD Fellowship.
C. Guyon, T. Bouwmans, E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Factorization including Spatial Constraint with Iterative Reweighted Regression”, International Conference on Pattern Recognition, ICPR 2012, Tsukuba, Japan, November 2012.
https://telecombcn-dl.github.io/dlai-2019/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Overview of the course. Introduction to image sciences, image processing and computer vision. Basics of machine learning, terminologies, paradigms. No-free lunch theorem. Supervised versus unsupervised learning. Clustering and K-Means. Classification and regression. Linear least squares and polynomial curve fitting. Model complexity and overfitting. Curse of dimensionality. Dimensionality reduction and principal component analysis. Image representation, semantic gap, image features, and classical computer vision pipelines.
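For the "linear least squares and polynomial curve fitting" and "model complexity and overfitting" items, a small illustrative sketch (the data and degrees are arbitrary choices): fitting polynomials by least squares on a Vandermonde matrix, where training error can only shrink as the degree grows, even when generalization suffers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)  # noisy samples

def fit_poly(x, y, degree):
    # Linear least squares on the polynomial feature (Vandermonde) matrix:
    # minimize ||X w - y||^2 over the coefficient vector w
    X = np.vander(x, degree + 1)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def train_mse(x, y, w):
    return float(np.mean((np.vander(x, len(w)) @ w - y) ** 2))

w3, w9 = fit_poly(x, y, 3), fit_poly(x, y, 9)
# Degree 9 fits the training data at least as well as degree 3,
# but overfitting shows up on held-out data, not on the training set.
```

Because the degree-3 polynomials are a subset of the degree-9 ones, the higher-degree training error is never larger; overfitting is only visible on data the model has not seen.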
The document discusses building robust machine learning systems that can handle concept drift. It introduces the challenges of concept drift when the underlying data distribution changes over time. It proposes using Gaussian process classifiers with an adaptive training window approach. The approach monitors for concept drift and retrains the model if detected. It tests the approach on artificial data streams with different drift scenarios and finds the adaptive approach performs better than a static model at handling concept drift. Future work could explore other drift detection methods and ensembles of adaptive Gaussian process classifiers.
Image generation. Gaussian models for human faces, limits and relations with linear neural networks. Generative adversarial networks (GANs), generators, discriminators, adversarial loss and two-player games. Convolutional GAN and image arithmetic. Super-resolution. Nearest-neighbor, bilinear and bicubic interpolation. Image sharpening. Linear inverse problems, Tikhonov and Total-Variation regularization. Super-Resolution CNN, VDSR, Fast SRCNN, SRGAN, perceptual, adversarial and content losses. Style transfer: Gatys model, content loss and style loss.
Sampling strategies for Sequential Monte Carlo (SMC) methods (Stephane Senecal)
Sequential Monte Carlo methods use importance sampling and resampling to estimate distributions in state space models recursively over time. This document discusses strategies for sampling in sequential Monte Carlo methods, including:
- Using the optimal proposal distribution, based on the one-step-ahead predictive distribution, to minimize the variance of the importance weights.
- Approximating the predictive distribution using mixtures, expansions, auxiliary variables, or Markov chain Monte Carlo methods.
- Considering blocks of variables over time rather than individual time steps to better diffuse particles, such as using a lagged block, reweighting particles before resampling, or sampling an extended block with an augmented state space.
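A minimal bootstrap-filter step illustrating the propagate/weight/resample cycle (an illustrative sketch with an assumed Gaussian random-walk state-space model, not one of the document's strategies):

```python
import numpy as np

rng = np.random.default_rng(1)

def smc_step(particles, y, sigma_x=1.0, sigma_y=0.5):
    # One bootstrap-filter step for the state-space model
    #   x_t = x_{t-1} + N(0, sigma_x^2),   y_t = x_t + N(0, sigma_y^2).
    # 1) Propagate particles through the transition (the proposal here).
    particles = particles + sigma_x * rng.standard_normal(len(particles))
    # 2) Importance weights from the likelihood p(y_t | x_t).
    w = np.exp(-0.5 * ((y - particles) / sigma_y) ** 2)
    w /= w.sum()
    # 3) Multinomial resampling to combat weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.standard_normal(500)        # prior sample
for y in [0.2, 0.4, 0.5]:                   # synthetic observations
    particles = smc_step(particles, y)
estimate = particles.mean()                 # posterior mean estimate
```

The strategies above all refine some part of this cycle: a better proposal in step 1, or block-wise treatment of several time steps instead of the one-step update shown here.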
Binary classification and linear separators. Perceptron, ADALINE, artificial neurons. Artificial neural networks (ANNs), activation functions, and the universal approximation theorem. Linear versus non-linear classification problems. Typical tasks, architectures and loss functions. Gradient descent and back-propagation. Support Vector Machines (SVMs), soft margins and the kernel trick. Connections between ANNs and SVMs.
This document discusses basic image transformations including translation, rotation, and scaling. Translation moves an image by adding offsets to x and y coordinates. Rotation transforms an image by applying a rotation matrix. Scaling enlarges or shrinks an image by multiplying x and y values. These transformations can be represented by matrices and concatenated to perform multiple operations. The inverse of a transformation matrix undoes the effects of the transformation and recovers the original image coordinates.
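In homogeneous coordinates all three transformations become 3x3 matrix products, so they concatenate by multiplication and are undone with the matrix inverse (a small illustrative sketch, not the document's code):

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scaling(sx, sy):
    return np.diag([sx, sy, 1.0])

# Homogeneous coordinates make translation a matrix product too, so
# transforms concatenate by multiplication (rightmost applies first).
T = translation(5, 2) @ rotation(np.pi / 2) @ scaling(2, 2)
p = np.array([1.0, 0.0, 1.0])          # the point (1, 0)
q = T @ p                              # scale, then rotate, then translate
back = np.linalg.inv(T) @ q            # the inverse recovers p
```

Here (1, 0) is scaled to (2, 0), rotated to (0, 2), and translated to (5, 4); applying inv(T) maps it back to the original coordinates, as the summary states.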
Localization and classification. Overfeat: class-agnostic versus class-specific localization, fully convolutional neural networks, greedy merge strategy. Multi-object detection. Region proposals and selective search. R-CNN, Fast R-CNN, Faster R-CNN and YOLO. Image segmentation. Semantic segmentation and transposed convolutions. Instance segmentation and Mask R-CNN. Image captioning. Recurrent Neural Networks (RNNs). Language generation. Long Short-Term Memory (LSTM). DeepImageSent, Show and Tell, and Show, Attend and Tell algorithms.
The document discusses recommender systems and sequential recommendation problems. It covers several key points:
1) Matrix factorization and collaborative filtering techniques are commonly used to build recommender systems, but they have limitations such as cold-start problems and difficulty incorporating additional constraints.
2) Sequential recommendation problems can be framed as multi-armed bandit problems, where past recommendations influence future recommendations.
3) Various bandit algorithms like UCB, Thompson sampling, and LinUCB can be applied, but extending guarantees to models like matrix factorization is challenging. Offline evaluation on real-world datasets is important.
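A minimal sketch of the UCB1 rule mentioned in point 3 on synthetic Bernoulli arms (illustrative code; the arm means and horizon are made up):

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    # UCB1: play the arm maximizing empirical mean + sqrt(2 ln t / n_pulls).
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                # initialize: play each arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    return counts

random.seed(0)
means = [0.2, 0.5, 0.8]                # hypothetical Bernoulli arms
pull = lambda a: 1.0 if random.random() < means[a] else 0.0
counts = ucb1(pull, n_arms=3, horizon=2000)
# The best arm (index 2) receives the bulk of the pulls.
```

The confidence bonus shrinks as an arm is sampled, so exploration concentrates on arms that might still be optimal; in a sequential recommender, "pulling an arm" corresponds to showing an item and observing feedback.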
Lec-17: Sparse Signal Processing & Applications [notes]
Sparse signal processing and the recovery of sparse signals via L1 minimization. Applications include face recognition and coupled dictionary learning for image super-resolution.
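One standard route to the L1-minimization recovery mentioned here is iterative soft-thresholding (ISTA); the sketch below is illustrative (synthetic data, arbitrary regularization weight), recovering a 3-sparse vector from 30 random measurements of a 100-dimensional signal:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=3000):
    # Iterative shrinkage-thresholding for the lasso problem
    #   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))         # underdetermined system
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]     # 3-sparse signal
y = A @ x_true                             # noiseless measurements
x_hat = ista(A, y, lam=1.0)
```

Although the system has infinitely many solutions, the L1 penalty selects the sparse one; the same mechanism underlies the sparse-coding steps in face recognition and dictionary-learning super-resolution.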
The document summarizes key concepts in social network analysis including metrics like degree distribution, path lengths, transitivity, and clustering coefficients. It also discusses models of network growth and structure like random graphs, small-world networks, and preferential attachment. Computational aspects of analyzing large networks like calculating shortest paths and the diameter are also covered.
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Nelly Litvak – Asymptotic behaviour of ranking algorithms in directed random ... (Yandex)
There is a vast body of empirical research on the behaviour of ranking algorithms, e.g. Google PageRank, in scale-free networks. In this talk, we address this problem by analytical probabilistic methods. In particular, it is well known that PageRank in scale-free networks follows a power law with the same exponent as the in-degree. Recent probabilistic analysis has provided an explanation for this phenomenon by obtaining a natural approximation for PageRank based on stochastic fixed-point equations. For these equations, explicit solutions can be constructed on weighted branching trees, and their tail behavior can be described in great detail.
In this talk we present a model for generating directed random graphs with prescribed degree distributions where we can prove that the PageRank of a randomly chosen node does indeed converge to the solution of the corresponding fixed-point equation as the number of nodes in the graph grows to infinity. The proof of this result is based on classical random graph coupling techniques combined with the now extensive literature on the behavior of branching recursions on trees.
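For readers who want the finite-graph object behind these asymptotics, here is a minimal power-iteration sketch of PageRank (illustrative code; the tiny graph is made up):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    # Power iteration for r = (1 - d)/n + d * M r, where M spreads each
    # node's rank evenly over its out-links (column-stochastic matrix).
    n = adj.shape[0]
    out = adj.sum(axis=1)
    out[out == 0] = 1.0                # guard against division by zero
    M = (adj / out[:, None]).T
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M @ r)
    return r

# Tiny directed graph: nodes 1 and 2 link to node 0; node 0 links to node 1.
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
r = pagerank(adj)                      # node 0 ranks highest
```

The talk's fixed-point equation is the distributional analogue of the update inside the loop, with the graph replaced by a random graph with prescribed degrees.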
Measuring the benefits of climate forecasts (matteodefelice)
This document discusses measuring the benefits of seasonal climate forecasts for predicting photovoltaic (PV) power production in Europe. It analyzes the statistical skill of seasonal forecasts compared to observations, as well as how installed PV capacity and variability in solar radiation affect the potential value of forecasts. By considering skill, capacity, and variability together, the authors aim to better evaluate how climate forecasts could help the solar power sector improve decision-making.
Gecco 2011 - Effects of Topology on the diversity of spatially-structured evo... (matteodefelice)
This document summarizes research on the effect of topology on diversity in spatially-structured evolutionary algorithms (SSEAs). The researchers modeled SSEAs as spreading processes and investigated how network topology influences diversity. They found that lattice networks maintained more diversity than random networks, leading to finding more optima. Rewiring the lattice to have small-world properties showed how network structure can control the spreading of solutions and thus the algorithm's dynamics. This research was a first step toward understanding how to design network topologies to optimize problem-solving in SSEAs.
Projectors and Projection Onto SubspacesIsaac Yowetu
The document discusses projections onto subspaces. It provides examples of projecting vectors onto lines and subspaces. For projecting a vector v onto a line defined by a vector u, it shows that the projection matrix is P=uuT/uTu. It also shows how to project vectors onto subspaces defined by matrices and how to decompose a vector into components within and orthogonal to a subspace.
This document summarizes different approaches for structure learning in graph neural networks. It discusses three main classes of methods: 1) metric-based learning which learns a similarity matrix between nodes, 2) probabilistic models which learn the parameters of a distribution over graphs, and 3) direct optimization which directly optimizes the graph adjacency matrix. The document provides examples of methods within each class and notes challenges such as the simplicity of probabilistic models and computational difficulties of direct optimization.
Subgradient Methods for Huge-Scale Optimization Problems - Юрий Нестеров, Cat...Yandex
We consider a new class of huge-scale problems, the problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of corresponding linear operators, we suggest a very efficient implementation of subgradient iterations, the total cost of which depends logarithmically in the dimension. This technique is based on a recursive update of the results of matrix/vector products and the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions.
We show that the updating technique can be efficiently coupled with the simplest subgradient methods. Similar results can be obtained for a new non-smooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments.
This document discusses various numerical methods for finding the roots of functions, including graphical methods, Newton's method, secant method, and bisection method. It provides examples of applying these methods to find the roots of example functions. Newton's and secant methods are shown to converge rapidly, finding roots within 6 iterations for one example problem. The bisection method is also demonstrated on the function x^2 - 4x + 3, finding roots that agree with the analytical solutions. The document provides information on roots, functions, and iterative algorithms for root-finding.
The document provides an overview of backpropagation for neural networks. It begins by defining the loss function and discussing gradient descent. It then walks through the computational graph of a simple perceptron and derives the gradients for each operation using the chain rule. This allows computing the gradient of the loss with respect to the weights and biases, which are then updated using gradient descent. It discusses computing gradients for different activation functions like sigmoid, ReLU, and max pooling. Finally, it notes that backpropagation allows estimating parameters across stacked neural network layers.
Animashree Anandkumar, Electrical Engineering and CS Dept, UC Irvine at MLcon...MLconf
Anima Anandkumar is a faculty at the EECS Dept. at U.C.Irvine since August 2010. Her research interests are in the area of large-scale machine learning and high-dimensional statistics. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She has been a visiting faculty at Microsoft Research New England in 2012 and a postdoctoral researcher at the Stochastic Systems Group at MIT between 2009-2010. She is the recipient of the Microsoft Faculty Fellowship, ARO Young Investigator Award, NSF CAREER Award, and IBM Fran Allen PhD fellowship.
C. Guyon, T. Bouwmans. E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Factorization including Spatial Constraint with Iterative Reweighted Regression”, International Conference on Pattern Recognition, ICPR 2012, Tsukuba, Japan, November 2012.
https://telecombcn-dl.github.io/dlai-2019/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both an algorithmic and computational perspectives.
Overview of the course. Introduction to image sciences, image processing and computer vision. Basics of machine learning, terminologies, paradigms. No-free lunch theorem. Supervised versus unsupervised learning. Clustering and K-Means. Classification and regression. Linear least squares and polynomial curve fitting. Model complexity and overfitting. Curse of dimensionality. Dimensionality reduction and principal component analysis. Image representation, semantic gap, image features, and classical computer vision pipelines.
The document discusses building robust machine learning systems that can handle concept drift. It introduces the challenges of concept drift when the underlying data distribution changes over time. It proposes using Gaussian process classifiers with an adaptive training window approach. The approach monitors for concept drift and retrains the model if detected. It tests the approach on artificial data streams with different drift scenarios and finds the adaptive approach performs better than a static model at handling concept drift. Future work could explore other drift detection methods and ensembles of adaptive Gaussian process classifiers.
Image generation. Gaussian models for human faces, limits and relations with linear neural networks. Generative adversarial networks (GANs), generators, discrinators, adversarial loss and two player games. Convolutional GAN and image arithmetic. Super-resolution. Nearest-neighbor, bilinear and bicubic interpolation. Image sharpening. Linear inverse problems, Tikhonov and Total-Variation regularization. Super-Resolution CNN, VDSR, Fast SRCNN, SRGAN, perceptual, adversarial and content losses. Style transfer: Gatys model, content loss and style loss.
Sampling strategies for Sequential Monte Carlo (SMC) methodsStephane Senecal
Sequential Monte Carlo methods use importance sampling and resampling to estimate distributions in state space models recursively over time. This document discusses strategies for sampling in sequential Monte Carlo methods, including:
- Using the optimal proposal distribution of the one-step ahead predictive distribution to minimize weight variance.
- Approximating the predictive distribution using mixtures, expansions, auxiliary variables, or Markov chain Monte Carlo methods.
- Considering blocks of variables over time rather than individual time steps to better diffuse particles, such as using a lagged block, reweighting particles before resampling, or sampling an extended block with an augmented state space.
Binary classification and linear separators. Perceptron, ADALINE, artifical neurons. Artificial neural networks (ANNs), activation functions, and universal approximation theorem. Linear versus non-linear classification problems. Typical tasks, architectures and loss functions. Gradient descent and back-propagation. Support Vector Machines (SVMs), soft-margins and kernel trick. Connexions between ANNs and SVMs.
This document discusses basic image transformations including translation, rotation, and scaling. Translation moves an image by adding offsets to x and y coordinates. Rotation transforms an image by applying a rotation matrix. Scaling enlarges or shrinks an image by multiplying x and y values. These transformations can be represented by matrices and concatenated to perform multiple operations. The inverse of a transformation matrix undoes the effects of the transformation and recovers the original image coordinates.
Localization and classification. Overfeat: class agnostic versu class specific localization, fully convolutional neural networks, greedy merge strategy. Multiobject detection. Region proposal and selective search. R-CNN, Fast R-CNN, Faster R-CNN and YOLO. Image segmentation. Semantic segmentation and transposed convolutions. Instance segmentation and Mask R-CNN. Image captioning. Recurrent Neural Networks (RNNs). Language generation. Long Short Term Memory (LSTMs). DeepImageSent, Show and Tell, and Show, Attend and Tell algorithms.
The document discusses recommender systems and sequential recommendation problems. It covers several key points:
1) Matrix factorization and collaborative filtering techniques are commonly used to build recommender systems, but have limitations like cold start problems and how to incorporate additional constraints.
2) Sequential recommendation problems can be framed as multi-armed bandit problems, where past recommendations influence future recommendations.
3) Various bandit algorithms like UCB, Thompson sampling, and LinUCB can be applied, but extending guarantees to models like matrix factorization is challenging. Offline evaluation on real-world datasets is important.
Lec-17: Sparse Signal Processing & Applications [notes]
Sparse signal processing, recovery of sparse signal via L1 minimization. Applications including face recognition, coupled dictionary learning for image super-resolution.
The document summarizes key concepts in social network analysis including metrics like degree distribution, path lengths, transitivity, and clustering coefficients. It also discusses models of network growth and structure like random graphs, small-world networks, and preferential attachment. Computational aspects of analyzing large networks like calculating shortest paths and the diameter are also covered.
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Nelly Litvak – Asymptotic behaviour of ranking algorithms in directed random ...Yandex
There is a vast empirical research on the behaviour of ranking algorithms, e.g. Google PageRank, in scale-free networks. In this talk, we address this problem by analytical probabilistic methods. In particular, it is well-known that the PageRank in scale-free networks follows a power law with the same exponent as in-degree. Recent probabilistic analysis has provided an explanation for this phenomenon by obtaining a natural approximation for PageRank based on stochastic fixed-point equations. For these equations, explicit solutions can be constructed on weighted branching trees, and their tail behavior can be described in great detail.
In this talk we present a model for generating directed random graphs with prescribed degree distributions where we can prove that the PageRank of a randomly chosen node does indeed converge to the solution of the corresponding fixed-point equation as the number of nodes in the graph grows to infinity. The proof of this result is based on classical random graph coupling techniques combined with the now extensive literature on the behavior of branching recursions on trees.
Measuring the benefits of climate forecastsmatteodefelice
This document discusses measuring the benefits of seasonal climate forecasts for predicting photovoltaic (PV) power production in Europe. It analyzes the statistical skill of seasonal forecasts compared to observations, as well as how installed PV capacity and variability in solar radiation affect the potential value of forecasts. By considering skill, capacity, and variability together, the authors aim to better evaluate how climate forecasts could help the solar power sector improve decision-making.
Gecco 2011 - Effects of Topology on the diversity of spatially-structured evo...matteodefelice
This document summarizes research on the effect of topology on diversity in spatially-structured evolutionary algorithms (SSEAs). The researchers modeled SSEAs as spreading processes and investigated how network topology influences diversity. They found that lattice networks maintained more diversity than random networks, leading to finding more optima. Rewiring the lattice to have small-world properties showed how network structure can control the spreading of solutions and thus the algorithm's dynamics. This research was a first step toward understanding how to design network topologies to optimize problem-solving in SSEAs.
Learning by Redundancy: how climate multi-model ensembles can help to fight t...matteodefelice
This is my talk for the Severo Ochoa Research Seminar Lecture Series at the Barcelona Supercomputing Center held the 23/09/2015 (http://www.bsc.es/marenostrum-support-services/hpc-education-and-training/severo-ochoa-research-seminar/2309-sors)
The abstract is the following:
Climate models are sophisticated tools able to simulate the interactions among the various components of the Earth system (atmosphere, oceans, biosphere, etc.). These tools are nowadays used for many purposes: to improve our knowledge of the planet, to analyse projections of the future climate, and to forecast the climate at multiple time-scales for a wide range of applications. In the last decade the use of climate ensembles (and multi-model ensembles) has become very common, and the dimensionality of climate datasets has increased drastically (thanks also to a general increase in the temporal and spatial resolution of models). Unfortunately, this rise in dimensionality has not coincided with the development of techniques designed to cope effectively with this massive amount of information.
The slides of the talk I gave on April 2011 in Paris at the IEEE Symposium on Computational Intelligence Applications in Smart Grid (http://ieee-ssci.org/2011/ciasg-2011).
This document discusses ENEA, the Italian Energy, New Technologies and Environment Agency. ENEA's mission is to support Italy's competitiveness and sustainable development. The document discusses ENEA's focus areas including environment, biotechnology, nuclear energy, new materials, and energy efficiency/renewables. It then discusses using soft computing approaches for modeling ambient temperature and humidity, optimizing eco-building design, and forecasting regional energy consumption in Italy. Neural networks, genetic algorithms, and hybrid models are evaluated for developing accurate models with limited historical data.
Application of seasonal climate forecasts for electricity demand forecasting:...matteodefelice
The document discusses using seasonal climate forecasts to improve electricity demand forecasting. Currently, only climatological data is used for forecasts over 14 days. The authors aim to assess using seasonal forecasts for Italy, as electricity demand is sensitive to climate. They analyze temperature patterns that influence demand and compare forecasts from reanalysis data and seasonal predictions. Preliminary results show potential for seasonal forecasts to enhance demand modeling. Further work includes analyzing additional years and locations.
1. Machine learning techniques can be applied to 21cm cosmology studies in various ways such as image reconstruction, signal detection, data analysis, simulation, and foreground subtraction.
2. Neural networks can be used to estimate cosmological parameters from 21cm power spectra or directly recover statistics like bubble size distributions from power spectra.
3. Studies have shown neural networks can accurately recover bubble size distributions from 21cm power spectra, even when including thermal noise at SKA sensitivity levels. This avoids information loss from incomplete image reconstruction.
4. Other work has used neural networks to reconstruct hydrogen distribution maps from galaxy surveys, demonstrating the potential of machine learning to connect 21cm signals to astrophysical sources and properties.
The document describes using a 1-D engine simulation model in GT-POWER to develop and test a model predictive control strategy for an internal combustion engine. Predictive models of the engine were identified using the LOLIMOT algorithm and incorporated into a model-based predictive controller. This control strategy was first tested in a model-in-the-loop simulation and later validated through hardware-in-the-loop experiments on a real engine testbed.
This document discusses fast algorithms for computing the discrete cosine transform (DCT) and inverse discrete cosine transform (IDCT) using Winograd's method.
The conventional DCT and IDCT algorithms have high computational complexity due to cosine functions. Winograd's algorithm reduces the number of multiplications required for matrix multiplication by rearranging terms.
The document proposes applying Winograd's algorithm to DCT and IDCT computation by representing the transforms as matrix multiplications. This approach reduces the number of multiplications required for an 8x8 block from over 16,000 to just 736 multiplications, with fewer additions and subtractions as well. This leads to faster DCT and IDCT computation compared with the conventional algorithms.
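The matrix-multiplication form of the transform that such rearrangements start from can be written down directly. This sketch shows the plain 8-point DCT-II as a matrix product (not the reduced-multiplication Winograd version), checked against SciPy's reference implementation:

```python
import numpy as np
from scipy.fft import dct

# Orthonormal 8-point DCT-II matrix: C[k, n] = a_k * cos(pi*(2n+1)*k / 16),
# with a_0 = sqrt(1/8) and a_k = sqrt(2/8) for k > 0.
N = 8
idx = np.arange(N)
C = np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * N))
C[0, :] *= np.sqrt(1 / N)
C[1:, :] *= np.sqrt(2 / N)

x = np.arange(N, dtype=float)
y_matrix = C @ x                          # DCT as a matrix multiplication
y_scipy = dct(x, type=2, norm='ortho')    # SciPy reference
# For an 8x8 image block, the 2-D transform is C @ block @ C.T.
```

Counting operations in `C @ x` (64 multiplications per 8-point transform) is what motivates the rearrangements the document describes.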
Presentation European Actuarial Journal conference 2016Thierry Moudiki
We introduce a model for the swap curve, whose static discount factors rely on the closed-form formulas for zero coupons available in exogenous (also known as no arbitrage) short rate models. After their calibration, the spot rates can be extrapolated to unobserved maturities by converging to a fixed ultimate forward rate.
If one is interested in no arbitrage pricing, then she can use simulations under the risk neutral probability of the corresponding exogenous short rates model.
Otherwise, yield curve forecasts can be obtained under the real world probability, by applying Functional Principal Components Analysis to the model's parameters.
Anomaly Detection in Sequences of Short Text Using Iterative Language ModelsCynthia Freeman
The document discusses various methods for anomaly detection in time series data. It begins by defining time series and anomalies, noting that anomaly detection is challenging due to issues like lack of labeled data and data imbalance. It then covers characteristics of time series like seasonality, trends, and concept drift, and how to detect them. Various anomaly detection methods are outlined, including STL, SARIMA, Prophet, Gaussian processes, and RNNs. Evaluation methods and factors to consider in choosing a detection method are also discussed. The document provides an overview of approaches to determining the optimal anomaly detection model for a given time series and application.
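As a simple baseline from the statistical family of detectors listed above, a rolling z-score flags points that deviate from a local mean by more than k local standard deviations. A sketch with arbitrary window and threshold choices (not from the document):

```python
import numpy as np

# Flag points more than k rolling standard deviations from the rolling
# mean of the preceding window (window and k are tuning choices).
def rolling_zscore_anomalies(series, window=24, k=3.0):
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        past = series[t - window:t]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(series[t] - mu) > k * sigma:
            flags[t] = True
    return flags

# Seasonal toy signal (period 24) with one injected spike at t = 100.
t = np.arange(200)
x = np.sin(2 * np.pi * t / 24)
x[100] += 5.0
flags = rolling_zscore_anomalies(x)
```

This baseline handles the seasonality here only because the window matches the period; the STL, SARIMA, and Prophet approaches mentioned above model seasonality explicitly instead.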
This document provides an overview of signals and systems in digital signal processing. It defines what a signal and system are, provides examples of common discrete-time signals like impulse functions and exponential functions. It also discusses signal operations such as addition, delaying, time reversing and rate changing. The document classifies signals as periodic/aperiodic, even/odd, energy/power signals. It also classifies systems as continuous/discrete-time, time-variant/invariant, linear/non-linear, stable/unstable systems. In addition, it provides representations of systems using impulse response, difference equations and transfer functions.
Mining of time series data base using fuzzy neural information systemsDr.MAYA NAYAK
This document discusses techniques for time series data mining and clustering. It introduces data mining and knowledge discovery in databases (KDD). Key techniques discussed include wavelet transforms, S-transforms, and Fourier transforms for feature extraction from time series data. Algorithms like K-means clustering and particle swarm optimization (PSO) are presented for clustering time series data based on extracted features. Hybrid approaches that combine K-means and PSO are also summarized for improved time series clustering.
The document presents a multi-frame marked point process model for extracting targets from ISAR (Inverse Synthetic Aperture Radar) image sequences. The model integrates information across frames using priors on target shape persistency and smooth motion. Experiments show the model achieves better target line and center extraction compared to frame-by-frame detection. Future work involves generalizing the model to identify other objects like airplanes and using extracted features for target classification.
The document describes techniques for image texture analysis and segmentation. It proposes a methodology using constraint satisfaction neural networks to integrate region-based and edge-based texture segmentation. The methodology initializes a CSNN using fuzzy c-means clustering, then iteratively updates the neuron probabilities and edge maps to refine the segmentation. Experimental results demonstrate improved segmentation by combining region and edge information.
This paper discusses the prediction of maneuvering target tracks. First, a Kalman filter based on the current statistical model is used to describe the state of the maneuvering target's motion and to analyze the time range in which the maneuver occurred. The target trajectory is then predicted in real time using an improved grey prediction model. Finally, residual tests and posterior variance tests confirm the accuracy of the model.
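For readers unfamiliar with the Kalman step, here is a minimal 1-D constant-velocity Kalman filter, a simplified stand-in for the current-statistical-model filter used in the paper (all parameters and the toy trajectory are illustrative):

```python
import numpy as np

# Constant-velocity Kalman filter: state (position, velocity),
# position-only measurements, standard predict/update recursion.
def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    F = np.array([[1, dt], [0, 1]])        # state transition
    H = np.array([[1.0, 0.0]])             # observe position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

true_pos = np.arange(50) * 2.0             # target moving at velocity 2
rng = np.random.default_rng(1)
est = kalman_track(true_pos + rng.normal(0, 1.0, 50))
```

The paper's current statistical model adds an adaptive acceleration state on top of this basic recursion.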
Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lens...inside-BigData.com
In this deck from the 2018 Swiss HPC Conference, Gilles Fourestey from EPFL presents: Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software.
"LENSTOOL is a gravitational lensing software that models mass distribution of galaxies and clusters. It was developed by Prof. Kneib, head of the LASTRO lab at EPFL, et al., starting from 1996. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature.
However, LENSTOOL lacks efficient vectorization and only uses OpenMP, which limits its execution to one node and can lead to execution times that exceed several months. Therefore, the LASTRO and the EPFL HPC group decided to rewrite the code from scratch and in order to minimize risk and maximize performance, a bottom-up approach that focuses on exposing parallelism at hardware and instruction levels was used. The result is a high performance code, fully vectorized on Xeon, Xeon Phis and GPUs that currently scales up to hundreds of nodes on CSCS’ Piz Daint, one of the fastest supercomputers in the world."
Watch the video: https://wp.me/p3RLHQ-ili
Learn more: https://infoscience.epfl.ch/record/234382/files/EPFL_TH8338.pdf?subformat=pdfa
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This document provides an overview of convolutional neural networks (CNNs) for image and video recognition. It discusses that CNNs have greatly improved image classification accuracy on ImageNet over the years. CNNs consist of convolutional layers that apply filters to extract features, pooling layers that reduce the spatial size, and fully connected layers for classification. Training involves tuning parameters through backpropagation, while inference uses a trained model for classification. Example networks discussed include AlexNet, VGG16, GoogLeNet and ResNet, which contain increasing numbers of parameters and computational operations.
The document discusses modeling nonlinear digital integrated circuits (ICs) using system identification techniques. It explores using parametric models with different representations including local linear state-space models. Models are estimated from input-output port measurements and validated. Local linear state-space models provided the best results with good accuracy, a unique solution, and verified local stability, while also allowing efficient simulation. The models were successfully applied to simulate a mobile data link system.
A new directional weighted median filter is proposed for removing random-valued impulse noise. It uses differences between pixel values and neighbors in four directions to detect impulse noise. Then, a weighted median filter is applied, which can preserve edges while removing noise. However, it works poorly for highly corrupted images.
A decision-based unsymmetrical trimmed median filter is introduced to remove high-density salt-and-pepper noise. It uses a 3x3 window and checks for corrupted pixel values of 0 or 255. Pixels are sorted and trimmed means or medians are applied, depending on the number of corrupted pixels, to denoise while preserving textures and edges.
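The decision-based idea can be sketched as follows: only pixels equal to 0 or 255 are treated as corrupted, and each is replaced by the median of the uncorrupted values in its 3x3 window. This is a simplification of the paper's filter (which additionally switches to a trimmed mean when the whole window is corrupted):

```python
import numpy as np

# Replace only the pixels detected as salt (255) or pepper (0) noise
# with the median of the uncorrupted neighbours in a 3x3 window.
def remove_salt_pepper(img):
    out = img.astype(float).copy()
    padded = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] in (0, 255):
                window = padded[i:i + 3, j:j + 3].ravel()
                good = window[(window != 0) & (window != 255)]
                # Fall back to the full window if every neighbour is corrupted.
                out[i, j] = np.median(good) if good.size else np.median(window)
    return out

clean = np.full((5, 5), 100, dtype=np.uint8)
noisy = clean.copy()
noisy[2, 2] = 255                      # inject one salt pixel
restored = remove_salt_pepper(noisy)
```

The decision step is what preserves edges: pixels not matching the impulse values pass through untouched.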
Fixed-Point Code Synthesis for Neural Networksgerogepatton
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous driving cars, etc. A problem is that these critical systems often have limited computing resources. Often, they use fixed-point arithmetic for its many advantages (speed, compatibility with small memory devices). In this article, a new technique is introduced to tune the formats (precision) of already-trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers without modifying the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we do a preliminary analysis of our floating-point neural network to determine the worst cases, then we generate a system of linear constraints among integer variables that we can solve by linear programming. The solution of this system is the new fixed-point format of each neuron. The experimental results obtained show the efficiency of our method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
A Novel Methodology for Designing Linear Phase IIR FiltersIDES Editor
This paper presents a novel technique for designing an Infinite Impulse Response (IIR) filter with a linear phase response. The design of an IIR filter is always a challenging task because a linear phase response is not realizable in this kind of filter. Conventional techniques involve a large number of samples and a higher-order filter for better approximation, resulting in complex hardware for implementation; in addition, extensive computational resources are required to invert huge matrices. We propose a technique that uses frequency-domain sampling along with linear programming to achieve a filter design that gives the best approximation to a linear phase response. The proposed method gives the closest response with fewer samples (only 10) and is computationally simple. We present the filter design along with its formulation and solving methodology, and numerical results substantiate the efficiency of the proposed method.
1) ICA can extract sparse and independent features from medical imaging data to build predictive models of conditions like brain trauma.
2) MCCA identifies joint patterns across multiple datasets, like functional MRI scans from a simulated driving experiment.
3) Dimension reduction with PCA improves ICA by addressing issues with high dimensionality, enhancing reproducibility of extracted patterns.
PCA is an unsupervised learning technique used to reduce the dimensionality of large data sets by transforming the data to a new set of variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. PCA is commonly used for applications like dimensionality reduction, data compression, and visualization. The document discusses PCA algorithms and applications of PCA in domains like face recognition, image compression, and noise filtering.
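The variance-ordering described above can be sketched in a few lines: center the data, compute the principal directions via SVD, and project. A minimal sketch on made-up anisotropic data:

```python
import numpy as np

# PCA via SVD of the centered data matrix: the rows of Vt are the
# principal directions, ordered by decreasing explained variance.
def pca(X, n_components):
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]           # leading principal directions
    explained_var = S**2 / (len(X) - 1)      # variance per component
    return X_centered @ components.T, components, explained_var

rng = np.random.default_rng(0)
# 2-D data stretched along one axis, so one component dominates.
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
scores, comps, var = pca(X, n_components=1)
```

The same projection onto a handful of leading components is what underlies the face recognition (eigenfaces), compression, and noise-filtering applications mentioned above.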
Supporting the Energy Union with data & knowledgematteodefelice
Slides presented at the 3rd General Assembly of the H2020 S2S4E project explaining the role of the Joint Research Centre of the European Commission in supporting a fair and effective Energy Union
How Barcodes Can Be Leveraged Within Odoo 17Celine George
In this presentation, we will explore how barcodes can be leveraged within Odoo 17 to streamline our manufacturing processes. We will cover the configuration steps, how to utilize barcodes in different manufacturing scenarios, and the overall benefits of implementing this technology.
Elevate Your Nonprofit's Online Presence_ A Guide to Effective SEO Strategies...TechSoup
Whether you're new to SEO or looking to refine your existing strategies, this webinar will provide you with actionable insights and practical tips to elevate your nonprofit's online presence.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
THE SACRIFICE HOW PRO-PALESTINE PROTESTS STUDENTS ARE SACRIFICING TO CHANGE T...indexPub
The recent surge in pro-Palestine student activism has prompted significant responses from universities, ranging from negotiations and divestment commitments to increased transparency about investments in companies supporting the war on Gaza. This activism has led to the cessation of student encampments but also highlighted the substantial sacrifices made by students, including academic disruptions and personal risks. The primary drivers of these protests are poor university administration, lack of transparency, and inadequate communication between officials and students. This study examines the profound emotional, psychological, and professional impacts on students engaged in pro-Palestine protests, focusing on Generation Z's (Gen-Z) activism dynamics. This paper explores the significant sacrifices made by these students and even the professors supporting the pro-Palestine movement, with a focus on recent global movements. Through an in-depth analysis of printed and electronic media, the study examines the impacts of these sacrifices on the academic and personal lives of those involved. The paper highlights examples from various universities, demonstrating student activism's long-term and short-term effects, including disciplinary actions, social backlash, and career implications. The researchers also explore the broader implications of student sacrifices. The findings reveal that these sacrifices are driven by a profound commitment to justice and human rights, and are influenced by the increasing availability of information, peer interactions, and personal convictions. The study also discusses the broader implications of this activism, comparing it to historical precedents and assessing its potential to influence policy and public opinion. The emotional and psychological toll on student activists is significant, but their sense of purpose and community support mitigates some of these challenges. 
However, the researchers call for acknowledging the broader impact of these sacrifices on the future of the global FreePalestine movement.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
1. Application of Computational Intelligence to Energy Systems
Matteo De Felice
Scuola Dottorale di Ingegneria
Sezione di Informatica e Automazione
XXIII° Ciclo
8. CI and scientific literature
[Chart: publications per year, 1994-2010 (y-axis scale ×10⁻³), for Evolutionary Computation, Swarm Intelligence, and Artificial Neural Networks]
Data from Thomson Reuters ISI considering Computer Science &
Technology (January 2010)
Two CI journals on the CS top 10 (IF 2009)
9. Is CI gaining interest?
Problems more and more complex
More computational power available
10. but...
Lack of well-established theory
Algorithms fragmentation
Tendency to unsystematic approach and comparison
PSO APSO CPSO DPSO EPSO FPSO GPSO HPSO IPSO
LPSO MPSO NPSO OPSO PPSO QPSO RPSO SPSO TPSO
UPSO VPSO WPSO GA AGA BGA CGA DGA EGA FGA
HGA IGA KGA LGA MGA OGA PGA QGA RGA SGA
VGA ...
19. Time Series Forecasting
We can forecast future data using known past data, and other useful (!) information as well
20.-21. NN approaches
[Diagram, repeated on both slides: in the direct method, one neural network takes the input at time t and outputs the forecasts y(t+1), y(t+2), ..., y(t+N) at once; in the iterative method, a single network produces the output at t+1 from the input at time t, and that output is fed back through a delay to forecast further steps.]
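The direct and iterative schemes from these slides can be illustrated with a linear autoregressive model standing in for the neural network (the toy series and all parameters are illustrative):

```python
import numpy as np

# Toy noisy seasonal series to forecast.
rng = np.random.default_rng(0)
t = np.arange(300, dtype=float)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(300)

lags, horizon = 10, 5
X = np.array([series[i:i + lags]
              for i in range(len(series) - lags - horizon)])

# Direct method: one model per lead time h, each trained on its own target.
direct = []
for h in range(1, horizon + 1):
    y = series[lags + h - 1:len(series) - horizon + h - 1]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    direct.append(series[-lags:] @ w)

# Iterative method: a single one-step model, fed back on its own output.
y1 = series[lags:len(series) - horizon]
w1, *_ = np.linalg.lstsq(X, y1, rcond=None)
window = list(series[-lags:])
iterative = []
for _ in range(horizon):
    pred = np.array(window[-lags:]) @ w1
    iterative.append(pred)
    window.append(pred)
```

The trade-off the slides point at: the direct method trains N separate models, while the iterative method trains one but compounds its own prediction errors over the horizon.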
29.-30. Ensembling
1. Model creation with data subsets (bagging)
2. Data sample weights related to their 'importance' (AdaBoost)
3. Interaction and cooperation among estimators
31. Ensembling
[Hansen & Salamon, 1990]
Majority voting (classification)
Linear combination (regression):
F(x, D) = (1/N) Σ_{i=1}^{N} F_i(x, D)
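The averaging combiner F(x, D) = (1/N) Σ F_i(x, D) paired with bootstrap subsets is plain bagging, which can be sketched as follows (a toy linear learner stands in for the ensemble members; all data is made up):

```python
import numpy as np

# Bagging: N learners, each fit on a bootstrap resample, combined by
# the simple average — the linear combination from the slide.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2 * x + 1 + 0.3 * rng.standard_normal(100)   # noisy line, slope 2

N = 25
preds = []
for _ in range(N):
    idx = rng.integers(0, len(x), len(x))        # bootstrap resample
    coeffs = np.polyfit(x[idx], y[idx], deg=1)   # weak linear learner
    preds.append(np.polyval(coeffs, x))
ensemble = np.mean(preds, axis=0)                # (1/N) * sum of members
```

For classification, `np.mean` over predicted labels would be replaced by a majority vote, matching the slide's other combiner.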
33. Application
Short-term load forecasting (STLF) of a building (C59) located inside the ENEA Casaccia Research Centre
Presentation at the IEEE Symposium on CI Applications in Smart Grid
M. De Felice and X. Yao, "Neural Networks Ensembles for Short-Term Load Forecasting," in IEEE Symposium Series on Computational Intelligence 2011 (SSCI 2011), 2011
35. Methodology
[Plot: measured building load in kW (roughly 10-40 kW) over hours, with a 24-hour span and the training part marked]
Measured data from September to November 2009
Training (13 weeks) and testing (one week, split in T1 and T2) sets
49.-50. External data
Introduction of: building occupancy, info about hour, day of the week, working days.
NN: additional inputs
SARIMA: additional linear term
69.-70. Evolutionary Computation (EC)
Black-box optimization
Single- and multi-objective
Also discontinuous and non-differentiable functions
Population-based meta-heuristics
71. Application
Start-up optimization of a combined cycle power plant (CCPP)
Minimization of time, fuel consumption, emissions and thermal stress
Maximization of energy output
M. De Felice, I. Bertini, A. Pannicelli, and S. Pizzuti, "Soft Computing based optimisation of combined cycled power plant start-up operation with fitness approximation methods," Applied Soft Computing (to appear).
I. Bertini, M. De Felice, F. Moretti, and S. Pizzuti, "Start-Up Optimisation of a Combined Cycle Power Plant with Multiobjective Evolutionary Algorithms," in Applications of Evolutionary Computation, 2010, pp. 151-160.
72. Project steps
1. Definition of performance index
2. Software simulator setup
3. EC algorithm using simulator
78. Financial Applications
Financial trend reversal detection with nature-inspired and machine learning approaches
A. Azzini, M. De Felice, and A. Tettamanzi, "Financial Trend Reversal Detection Problem: a Comparison between Nature-Inspired and Machine Learning Approaches," Natural Computing in Computational Finance, vol. 4, Springer (to appear).
79. Spatially-Structured EA
Evolutionary Algorithms on complex networks
Diversity and convergence
M. De Felice, S. Meloni, and S. Panzieri, "Effect of Topology on Diversity of Spatially-Structured Evolutionary Algorithms," GECCO 2011: Parallel Evolutionary Systems, 11-16 July 2011, Dublin.