This document describes a multi-objective evolutionary algorithm that uses artificial neural networks to approximate fitness functions, reducing the number of exact function evaluations. The algorithm first runs for an initial number of generations to collect a training dataset, then trains a neural network on it. Evolution continues for additional generations, with the network approximating some or all of the fitness evaluations. The approximation error is monitored, and the algorithm switches back to exact function evaluations when the error grows too high. This process repeats until an acceptable Pareto front is found. On benchmark multi-objective test functions, the method reduced the number of exact function evaluations needed by 20-40%.
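The evaluate-train-approximate-monitor loop can be sketched as follows. For brevity this uses a single objective and a nearest-neighbour surrogate in place of the paper's neural network and Pareto machinery; every name, parameter, and threshold below is illustrative, not taken from the paper.

```python
import random

def exact_f(x):
    # "Expensive" objective (2-D sphere function as a cheap stand-in).
    return sum(v * v for v in x)

def surrogate_predict(archive, x):
    # 1-nearest-neighbour regressor standing in for the paper's neural
    # network; any regressor slots into the same control loop.
    nearest = min(archive,
                  key=lambda rec: sum((a - b) ** 2 for a, b in zip(rec[0], x)))
    return nearest[1]

def evolve(pop, fitness_fn, sigma=0.3):
    # One generation: keep the better half, refill with mutated copies.
    parents = sorted(pop, key=fitness_fn)[: len(pop) // 2]
    return [[v + random.gauss(0, sigma) for v in random.choice(parents)]
            for _ in pop]

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
archive = []

# Phase 1: exact evaluations build the surrogate's training set.
for _ in range(5):
    archive += [(ind, exact_f(ind)) for ind in pop]
    pop = evolve(pop, exact_f)

# Phase 2: surrogate evaluations with error monitoring.
for _ in range(20):
    pop = evolve(pop, lambda x: surrogate_predict(archive, x))
    probe = random.choice(pop)  # spot-check the surrogate on one individual
    if abs(surrogate_predict(archive, probe) - exact_f(probe)) > 1.0:
        # Error too high: fall back to exact evaluations and "retrain"
        # (here: simply extend the archive the surrogate reads from).
        archive += [(ind, exact_f(ind)) for ind in pop]
        pop = evolve(pop, exact_f)

best = min(exact_f(ind) for ind in pop)
```

The saving comes from phase 2, where most generations cost only surrogate lookups; the spot-check is what triggers the switch back to exact evaluations.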
Manifold learning for credit risk assessment (Armando Vieira)
The document outlines a paper on using manifold learning techniques for credit risk analysis. It discusses motivations for dimensionality reduction and bankruptcy prediction. The proposed approach uses Isomap and Supervised Isomap for nonlinear dimensionality reduction to better analyze financial data and predict credit risk. Experimental results are presented on a data set to evaluate the methodology. Conclusions and potential future work are discussed.
Mutual Information for Registration of Monomodal Brain Images using Modified ... (IDES Editor)
Image registration has great significance in medicine, and many techniques have been proposed for it. This research work presents an approach for medical image registration that registers monomodal CT or MRI images using the Modified Adaptive Polar Transform (MAPT). The performance of the Adaptive Polar Transform (APT) is compared with the proposed technique, and the results show that MAPT performs better than APT: the proposed scheme not only reduces sources of error but also reduces the elapsed time for registering brain images. An analysis of mutual-information-based registration for medical image processing is also presented.
Despeckling of Ultrasound Imaging using Median Regularized Coupled Pde (IDES Editor)
This paper presents an approach for reducing speckle in ultrasound images using a Coupled Partial Differential Equation (CPDE) obtained by uniting second-order and fourth-order partial differential equations. PDE-based speckle reduction is a noise-smoothing approach attracting wide attention because PDEs preserve edges while reducing noise. A median regulator is also introduced to guide the energy source, boost image features, and regularize the diffusion. The proposed method is tested on both simulated and real medical ultrasound images and compared with SRAD, Perona-Malik diffusion, and nonlinear coherent diffusion; it gives better results in terms of CNR, SSIM, and FOM.
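Of the methods compared, Perona-Malik diffusion is the simplest to illustrate. The sketch below shows one explicit diffusion step and why PDE-based smoothing preserves edges: the edge-stopping function g(d) = exp(-(d/k)²) shuts down flux across large gradients while smoothing weak noise. Grid size and parameters are illustrative only.

```python
import math

def perona_malik_step(img, dt=0.2, k=0.1):
    # One explicit step of Perona-Malik diffusion: each interior pixel
    # receives flux g(d) * d from its 4 neighbours, where g(d) shrinks
    # toward zero for large intensity differences d (edges).
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            flux = 0.0
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                d = img[ny][nx] - c
                flux += math.exp(-(d / k) ** 2) * d
            out[y][x] = c + dt * flux
    return out

# Step edge (left half 0, right half 1) with one weak noise pixel.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
img[4][2] = 0.1
for _ in range(10):
    img = perona_malik_step(img)
```

After ten steps the noise pixel is smoothed into its flat surroundings, while the 0-to-1 edge survives almost untouched because g is effectively zero across it.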
This document discusses using machine learning to objectively assess quality of experience (QoE). It begins with a brief introduction to machine learning and outlines the steps to set up an ML-based objective metric: defining the feature space, selecting an ML paradigm, and robust model selection and testing. It then provides an example of using features related to image structure and color to select an algorithm for image restoration. The document concludes with a SWOT analysis of using machine learning for objective QoE assessment.
Ph.D. Thesis Presentation: A Study of Priors and Algorithms for Signal Recove... (Shunsuke Ono)
This document summarizes a dissertation on developing new priors and algorithms for signal recovery problems solved via convex optimization. Chapter 4 proposes a blockwise low-rank prior called the Block Nuclear Norm (BNN) to better model texture patterns in images. BNN represents textures as locally low-rank blocks under different shears. Chapter 5 introduces the Local Color Nuclear Norm (LCNN) prior to promote the color-line property and reduce color artifacts in restored images. Chapter 6 develops a hierarchical convex optimization algorithm using primal-dual splitting to solve problems with non-unique solutions and non-strictly convex objectives.
One-Sample Face Recognition Using HMM Model of Fiducial Areas (CSCJournals)
In most real-world applications, multiple image samples of individuals are not easy to collect for direct implementation of recognition or verification systems, so there is a need to perform these tasks even when only one training sample per person is available. This paper describes an effective algorithm for recognition and verification with one sample image per class. It uses the two-dimensional discrete wavelet transform (2D DWT) to extract features from images and a hidden Markov model (HMM) for training, recognition, and classification. Tested on a subset of the AT&T database, it achieved up to 90% correct classification (hit rate) with a false acceptance rate (FAR) of 0.02%.
IRJET - Face Recognition based Attendance System (IRJET Journal)
This document describes a face recognition-based attendance system. It begins with an introduction to face recognition and the challenges of implementing such a system in real-time. It then reviews related work on algorithms used for face detection (Haar cascade), feature extraction (Histogram of Oriented Gradients), and recognition (Convolutional Neural Networks). The proposed system is described as collecting a student database, extracting encodings from images using CNN, and comparing real-time detected faces to the database using HOG detection and Euclidean distance matching to mark attendance. Experimental results aimed to test recognition under different training, lighting, and pose conditions.
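The matching step described above (compare a live face encoding to the stored database by Euclidean distance) can be sketched as follows. The random 128-d vectors stand in for CNN face encodings, and the 0.6 threshold is an illustrative convention, not a value from the paper.

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(encoding, database, threshold=0.6):
    # Return the enrolled name whose stored encoding is closest to the
    # probe, or None when nobody is within the threshold (unknown face).
    name, dist = min(((n, euclidean(encoding, e)) for n, e in database.items()),
                     key=lambda item: item[1])
    return name if dist <= threshold else None

# Random 128-d vectors standing in for CNN face encodings.
random.seed(1)
database = {"alice": [random.random() for _ in range(128)],
            "bob": [random.random() for _ in range(128)]}

# A new view of alice: her enrolled encoding plus a little noise.
probe = [v + random.gauss(0, 0.02) for v in database["alice"]]
```

A matched name marks attendance; a `None` result means the detected face is not enrolled.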
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, available online and in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days of acceptance, and the peer-review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
The document discusses denoising techniques for images captured by single-sensor digital cameras using a color filter array (CFA). It compares principal component analysis (PCA) and independent component analysis (ICA) based denoising of CFA images. PCA and ICA are linear adaptive transforms that can be used to represent image data in a way that better distinguishes signal from noise. The document outlines the PCA and ICA algorithms and discusses how K-means clustering can be used with them. It generates noise to add to a reference image and implements PCA and ICA based denoising in MATLAB. Performance is evaluated using metrics like PSNR, WPSNR, SSIM and correlation coefficient.
Lec12: Shape Models and Medical Image Segmentation (Ulaş Bağcı)
Shape Modeling – M-reps
– Active Shape Models (ASM)
– Oriented Active Shape Models (OASM)
– Application in anatomy recognition and segmentation – Comparison of ASM and OASM
Active Contour (Snake) • Level Set • Applications: Enhancement, Noise Reduction, and Signal Processing • Medical Image Registration • Medical Image Segmentation • Medical Image Visualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images • Deep Learning in Radiology • Fuzzy Connectivity (FC) – Affinity functions • Absolute FC • Relative FC (and Iterative Relative FC) • Successful example applications of FC in medical imaging • Segmentation of Airway and Airway Walls using an RFC-based method • Energy functional – Data and Smoothness terms • Graph Cut – Min Cut / Max Flow • Applications in Radiology Images
My poster presentation at the JCMS2011 conference (Pawitra Masa-ah)
1) The study created a new scheme for calculating standardized uptake values (SUVs) from DICOM files using MATLAB, and tested it by comparing the results to GE Healthcare software.
2) The SUVs calculated by the MATLAB scheme showed a high correlation of 0.974 with the GE software; accuracy averaged 85% based on a 95% confidence interval.
3) The results demonstrated that the SUVs from the MATLAB scheme can be used interchangeably with those from the GE software, giving physicians easier access to quantitative interpretation of PET/CT scans without additional applications.
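The summary does not reproduce the formula, but the body-weight SUV such schemes compute from DICOM header values is standard. A minimal sketch, assuming an F-18 tracer and the usual 1 g ≈ 1 mL tissue convention; the function name and argument names are illustrative:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def suv_bw(activity_bq_per_ml, weight_kg, injected_dose_bq,
           minutes_post_injection):
    # Body-weight SUV: tissue activity concentration divided by the
    # decay-corrected injected dose per gram of body weight.
    decayed_dose = injected_dose_bq * math.exp(
        -math.log(2.0) * minutes_post_injection / F18_HALF_LIFE_MIN)
    return activity_bq_per_ml * (weight_kg * 1000.0) / decayed_dose

# Sanity check: a 370 MBq dose spread uniformly through a 70 kg body
# at injection time gives SUV = 1 everywhere.
uniform_conc = 370e6 / (70.0 * 1000.0)  # Bq/mL
```

The decay correction is the usual source of discrepancies between tools: one half-life after injection the same measured concentration yields an SUV twice as large.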
Image Compression based on DCT and BPSO for MRI and Standard Images (IJERA Editor)
Nowadays, digital image compression has become a crucial factor in modern telecommunication systems. Image compression is the process of reducing the total bits required to represent an image by reducing redundancies while preserving image quality as much as possible. Various applications, including the internet, multimedia, satellite imaging, and medical imaging, use image compression to store and transmit images efficiently; selection of a compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly-Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm for finding an optimal solution from a set of possible values; its dominant advantages over other optimization techniques are a higher convergence rate, searching ability, and overall performance. The proposed technique divides the input image into 8×8 blocks and applies the Discrete Cosine Transform (DCT) to each block to obtain its coefficients. Threshold values are then obtained from BPSO, and the coefficients are modified based on these thresholds. Finally, quantization followed by Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
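The block pipeline (8×8 DCT, thresholding, then quantization and Huffman coding) can be sketched up to the thresholding step. The fixed threshold below is purely illustrative, since in the paper the thresholds come from BPSO.

```python
import math

N = 8
# Orthonormal DCT-II basis: C[u][x] = a(u) * cos((2x+1) * u * pi / (2N)).
C = [[(math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N))
      * math.cos((2 * x + 1) * u * math.pi / (2 * N))
      for x in range(N)] for u in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def dct2(block):   # coefficients = C * block * C^T
    return matmul(matmul(C, block), transpose(C))

def idct2(coeff):  # block = C^T * coeff * C  (C is orthonormal)
    return matmul(matmul(transpose(C), coeff), C)

# A smooth 8x8 block: its energy compacts into few low-frequency coefficients.
block = [[(x + y) / 2.0 for x in range(N)] for y in range(N)]
coeff = dct2(block)

thr = 0.4  # fixed here for illustration; the paper obtains thresholds via BPSO
kept = [[c if abs(c) >= thr else 0.0 for c in row] for row in coeff]
recon = idct2(kept)
zeros = sum(1 for row in kept for c in row if c == 0.0)
```

Quantization and Huffman coding would follow on `kept`; the point of thresholding is that `zeros` is large for smooth blocks, so little information remains to encode.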
This document discusses modifications made to the particle swarm optimization (PSO) algorithm to overcome disadvantages of the standard PSO. It provides background on PSO, describing it as a population-based heuristic optimization technique inspired by bird flocking. The document then reviews the standard PSO algorithm and its advantages, including simple concept, easy implementation, and robustness. It also notes some limitations like slow convergence near optima. The paper aims to study modifications to PSO parameters and performance to address such limitations.
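The standard PSO the document reviews can be sketched as follows; the sphere objective and all parameter values are common illustrative defaults, not taken from the paper.

```python
import random

def pso(f, dim=2, n=15, iters=40, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Standard PSO: each particle moves under inertia (w), attraction to
    # its personal best (c1), and attraction to the swarm's best (c2).
    rnd = random.Random(seed)
    xs = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

best_x, best_f = pso(lambda x: sum(v * v for v in x))
```

The slow convergence near optima that the paper targets is visible here in the fixed inertia `w`; the modifications it studies typically adapt `w` or the coefficients over time.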
Fuzzy clustering Approach in segmentation of T1-T2 brain MRI (IDES Editor)
Segmentation is a difficult and challenging problem in magnetic resonance images, and it is considered important in computer vision and artificial intelligence. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. In this paper, we present a novel FCM algorithm for weighted bias (also called intensity inhomogeneity) estimation and segmentation of MRI. Normally, intensity inhomogeneities are attributed to imperfections in the radio-frequency coils or to problems associated with image acquisition. Our algorithm is formulated by modifying the objective function of the standard FCM, with the advantage that it can be applied at an early stage in an automated data analysis. The paper further proposes a center-knowledge method to reduce the running time of the proposed algorithm. The proposed method deals effectively with intensity inhomogeneities and image noise. We have compared our results with other reported methods: results on real MRI data show that our method outperforms standard FCM-based algorithms and other modified FCM-based techniques.
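The standard FCM alternation that such modified objectives build on can be sketched on 1-D intensity data; the bias-field and center-knowledge extensions described above are omitted, and the data values are illustrative.

```python
def fcm(data, c=2, m=2.0, iters=30):
    # Standard fuzzy c-means: alternate the membership update
    #   u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
    # and the weighted-mean center update of the FCM objective
    #   J = sum_ik u_ik^m * |x_k - v_i|^2.
    centers = [min(data), max(data)] if c == 2 else data[:c]
    U = []
    for _ in range(iters):
        U = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]  # avoid divide-by-zero
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
                      for i in range(c)])
        centers = [sum(U[k][i] ** m * data[k] for k in range(len(data)))
                   / sum(U[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return centers, U

# Two well-separated 1-D intensity clusters.
data = [0.0, 0.1, 0.2, 1.8, 1.9, 2.0]
centers, U = fcm(data)
```

Unlike hard k-means, each point keeps a graded membership in every cluster, which is what makes bias-field terms easy to fold into the objective.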
Uniform and non uniform single image deblurring based on sparse representatio... (ijma)
Considering the sparseness property of images, a sparse-representation-based iterative deblurring method is presented for single-image deblurring under uniform and non-uniform motion blur. The approach is based on sparse and redundant representations over dictionaries adaptively trained from the single blurred, noisy image itself. The K-SVD algorithm is used to obtain a dictionary that describes the image contents effectively. Comprehensive experimental evaluation demonstrates that the proposed framework, integrating the sparseness property of images, adaptive dictionary training, and an iterative deblurring scheme, significantly improves deblurring performance, is comparable with state-of-the-art deblurring algorithms, and offers a powerful solution to an ill-conditioned inverse problem.
An Information Maximization approach of ICA for Gender Classification (IDES Editor)
In this paper, a novel and successful method for gender classification from human faces using a dimensionality reduction technique is proposed; Independent Component Analysis (ICA) is one such technique. The current scheme focuses on the different algorithms and architectures of ICA: an information-maximization ICA is discussed in its two architectures and compared with the two architectures of FastICA. A Support Vector Machine (SVM) is used as the classifier separating the male and female classes. All experiments are done on the FERET database, with results obtained for different combinations of training and test set sizes. For the larger training set, the SVM achieves an accuracy of 98%. The accuracy varies with the size of the test set, and the proposed system achieves an average accuracy of 96%. A further improvement is obtained using class discriminability, which reaches 100% accuracy.
This document provides an overview of the course "Statistical Learning Theory and Applications" being taught at MIT in the spring of 2003. The course will cover supervised learning theory and algorithms including regularization networks and support vector machines. It will explore applications of learning from examples in various domains including bioinformatics, computer vision, and text classification. The course will take a multidisciplinary approach, exploring learning from the perspectives of mathematics, algorithms, and neuroscience. Students will complete problem sets and a final project, and participation will be part of the grading.
Electroencephalography Signal Classification based on Sub-Band Common Spatial P... (IOSRJVSP)
A brain-computer interface (BCI) is a communication pathway between the brain and an external device: it translates human thought into commands that control the device. Electroencephalography (EEG) is a cost-effective and relatively easy way to implement a BCI. This paper presents a novel method for classifying EEG during motor imagery by combining common spatial patterns (CSP) and linear discriminant analysis (LDA). In the proposed method, the EEG signal is bandpass-filtered into multiple frequency bands, CSP features are extracted from each band, and an LDA classifier is then used to classify the CSP features. Experimental results are presented on a publicly available BCI competition dataset and compared with existing approaches; the proposed method yields superior cross-validation accuracies compared to prevailing methods.
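The classification stage of such a pipeline can be sketched as follows. For brevity, the CSP spatial filtering is replaced by synthetic "filtered" signals whose variance differs by class, the features are the usual log-variances, and the LDA uses a diagonal pooled covariance (full LDA inverts the full pooled covariance matrix); all of this is illustrative, not the paper's implementation.

```python
import math
import random

def log_var(signal):
    # CSP pipelines typically use the log-variance of each spatially
    # filtered signal as the classification feature.
    mu = sum(signal) / len(signal)
    return math.log(sum((s - mu) ** 2 for s in signal) / len(signal))

def lda_fit(X0, X1):
    # Two-class LDA with a diagonal pooled covariance (simplification).
    d = len(X0[0])
    m0 = [sum(x[i] for x in X0) / len(X0) for i in range(d)]
    m1 = [sum(x[i] for x in X1) / len(X1) for i in range(d)]
    var = [(sum((x[i] - m0[i]) ** 2 for x in X0)
            + sum((x[i] - m1[i]) ** 2 for x in X1))
           / (len(X0) + len(X1) - 2) for i in range(d)]
    w = [(m1[i] - m0[i]) / var[i] for i in range(d)]
    b = -sum(w[i] * (m0[i] + m1[i]) / 2.0 for i in range(d))
    return w, b

def lda_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic "CSP-filtered" trials: class 1 has higher variance in channel 0.
rnd = random.Random(0)

def trial(scales):
    return [[rnd.gauss(0.0, s) for _ in range(100)] for s in scales]

X0 = [[log_var(ch) for ch in trial([1.0, 1.0])] for _ in range(30)]
X1 = [[log_var(ch) for ch in trial([3.0, 1.0])] for _ in range(30)]
w, b = lda_fit(X0, X1)
```

CSP's job in the real pipeline is exactly to produce such variance-contrasting channels, one set per frequency band, which is why log-variance plus LDA suffices afterwards.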
This document summarizes a research paper that proposes an improved deconvolution algorithm to estimate blood flow velocity in nailfold vessels more accurately. The paper describes limitations in existing algorithms related to blurring and proposes using deconvolution and other image enhancement techniques. Results show the new algorithm takes less time (20-21 seconds vs 42-43 seconds) and tracks particle movement more accurately, allowing more precise flow measurements. This helps diagnosis of diseases. Future work could involve additional segmentation and machine learning to further automate and improve reliability.
Gesture Recognition using Principle Component Analysis & Viola-Jones Algorithm (IJMER)
Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human-computer interface, and its applications are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey of gesture recognition with particular emphasis on hand gestures and facial expressions, covering applications of the wavelet transform and principal component analysis to face and hand gesture recognition on digital images.
iNEDI - Accuracy Improvements and Artifacts Removal in Edge Based Image Inter... (Tecnick.com LTD)
In this paper we analyse the problem of general-purpose image upscaling that preserves edge features and natural appearance, and we present the results of subjective and objective evaluation of images interpolated using different algorithms. In particular, we consider the well-known NEDI method (New Edge Directed Interpolation, Li and Orchard, 2001), showing that by modifying it to reduce numerical instability and by making the region used to estimate the low-resolution covariance adaptive, it is possible to obtain relevant improvements in interpolation quality. The implementation of the new algorithm (iNEDI, improved New Edge Directed Interpolation), even if computationally heavy (as is Li and Orchard's method), obtained, in both subjective and objective tests, quality scores notably higher than those of NEDI and other methods presented in the literature.
Highly Adaptive Image Restoration In Compressive Sensing Applications Using S...IJARIDEA Journal
This document presents a method for highly adaptive image restoration in compressive sensing applications using sparse dictionary learning (SDL) technique. It begins with an introduction to image restoration and compressive sensing. Then it discusses related works including total variation minimization, cosine algorithm, discrete wavelet transform, and Metropolis-Hastings algorithm. The proposed scheme is described involving sparse dictionary learning, extracting patches from an image, matching patches to a dictionary, stacking similar patches, and reconstructing the image. Results show the SDL technique achieves higher PSNR values than other methods compared. In conclusion, images can be effectively restored with adaptive dictionary learning in compressive sensing, though it requires more computation time than other methods.
VIDEO SEGMENTATION & SUMMARIZATION USING MODIFIED GENETIC ALGORITHM (ijcsa)
Video summarization of segmented video is an essential process for video thumbnails, video surveillance, and video downloading. Summarization deals with extracting a few frames from each scene and creating a summary video that explains the full video's course of action within a short duration. The proposed research work discusses the segmentation and summarization of frames. A genetic algorithm (GA) for segmentation and summarization is used to produce the highlights of an event by selecting the few important frames required. The GA is modified to select only key frames for summarization, and the modified GA is compared with the standard GA.
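A GA for keyframe selection along these lines can be sketched as follows. The 1-D frame descriptors and the pairwise-dissimilarity fitness are illustrative stand-ins for real frame features; the paper's modified operators are not reproduced.

```python
import random

def fitness(frames, pick):
    # Reward summaries whose selected frames are mutually dissimilar.
    sel = [frames[i] for i in pick]
    return sum(abs(a - b) for i, a in enumerate(sel) for b in sel[i + 1:])

def ga_keyframes(frames, k=3, pop_size=20, gens=40, seed=0):
    rnd = random.Random(seed)
    n = len(frames)
    pop = [sorted(rnd.sample(range(n), k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: -fitness(frames, p))
        elite = pop[: pop_size // 2]          # keep the best half unchanged
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rnd.sample(elite, 2)
            child = set(rnd.sample(a + b, k))  # crossover of parents' indices
            while len(child) < k:
                child.add(rnd.randrange(n))    # mutation / repair duplicates
            children.append(sorted(child))
        pop = elite + children
    return max(pop, key=lambda p: fitness(frames, p))

# 1-D "frame descriptors": three visually distinct scenes.
frames = [0.0, 0.1, 0.05, 5.0, 5.1, 4.9, 9.8, 10.0, 9.9]
keys = ga_keyframes(frames, k=3)
```

A chromosome is simply a set of k frame indices; with real features the same loop works, only `fitness` changes.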
Detection of Carotid Artery from Pre-Processed Magnetic Resonance Angiogram (IDES Editor)
Boundary detection plays an important role in medical image analysis. In certain cases it becomes very difficult for doctors to assess the carotid arteries from magnetic resonance angiography (MRA) of the neck. In this paper, an attempt is made to detect the carotid arteries from neck magnetic resonance angiograms so as to overcome such difficulties. The algorithm pre-processes the angiograms and subsequently detects the carotid artery. Stenosis is expected to reduce the diameter of the vessel, and this diameter can be measured from the detected vasculature image. As the algorithm successfully detects the carotid artery from neck magnetic resonance angiograms, it will help doctors with diagnosis and serve as a step toward the prevention of cardiovascular diseases.
Disease Classification using ECG Signal Based on PCA Feature along with GA & ...IRJET Journal
This document describes a method for classifying ECG signals to detect cardiovascular diseases using principal component analysis (PCA), genetic algorithms, and artificial neural networks. PCA is used to extract features from ECG signals. A genetic algorithm is then used to select optimal features and train an artificial neural network classifier. The method is tested on datasets from Physionet.org to classify ECG signals as normal or indicating conditions like bradycardia or tachycardia with high accuracy. The goal is to develop an automated system for ECG analysis and heart disease diagnosis.
Black-box modeling of nonlinear system using evolutionary neural NARX model (IJECEIAES)
Nonlinear systems with uncertainty and disturbance are very difficult to model using a mathematical approach; therefore, a black-box modeling approach requiring no prior knowledge is necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network, combining a neural network with a modified differential evolution algorithm, is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric-actuator SISO system and an experimental quadruple-tank MIMO system.
Face recognition using gaussian mixture model & artificial neural network (eSAT Journals)
Abstract
Face recognition is a non-contact and user-friendly biometric identification technology with broad application prospects in the military, public security, and economic security. In this work, we also consider a database with illumination variation. The images were taken from a far distance, without the close-up views of individuals' faces that most face databases provide. First, the face is located as the region of interest; then LBP and LPQ descriptors, which are illumination invariant in nature, are extracted. Next, a GMM is used to reduce the feature set by taking the negative log-likelihood of each LBP- and LPQ-described image histogram. Finally, an ANN is used for classification. The experimental results show excellent accuracy rates in overall testing of the input data.
Keywords: Illumination invariant, face recognition, LBP, LPQ, GMM, ANN
IRJET- K-SVD: Dictionary Developing Algorithms for Sparse Representation ... (IRJET Journal)
This document discusses the K-SVD algorithm, which is a dictionary learning method for sparse representations of signals. It alternates between sparse coding to represent signals using the current dictionary, and dictionary updating to learn a new dictionary from the signal representations. The algorithm generalizes the K-means clustering process. The document provides details on the methodology of K-SVD, including its basic steps and application to synthetic and real image signals. It is concluded that K-SVD learns dictionaries that better represent given signal groups compared to alternative methods.
The document discusses denoising techniques for images captured by single-sensor digital cameras using a color filter array (CFA). It compares principal component analysis (PCA) and independent component analysis (ICA) based denoising of CFA images. PCA and ICA are linear adaptive transforms that can be used to represent image data in a way that better distinguishes signal from noise. The document outlines the PCA and ICA algorithms and discusses how K-means clustering can be used with them. It generates noise to add to a reference image and implements PCA and ICA based denoising in MATLAB. Performance is evaluated using metrics like PSNR, WPSNR, SSIM and correlation coefficient.
Lec12: Shape Models and Medical Image SegmentationUlaş Bağcı
ShapeModeling – M-reps
– Active Shape Models (ASM)
– Oriented Active Shape Models (OASM)
– Application in anatomy recognition and segmentation – Comparison of ASM and OASM
ActiveContour(Snake) • LevelSet • Applications Enhancement, Noise Reduction, and Signal Processing • MedicalImageRegistration • MedicalImageSegmentation • MedicalImageVisualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images Deep Learning in Radiology Fuzzy Connectivity (FC) – Affinity functions • Absolute FC • Relative FC (and Iterative Relative FC) • Successful example applications of FC in medical imaging • Segmentation of Airway and Airway Walls using RFC based method Energy functional – Data and Smoothness terms • GraphCut – Min cut – Max Flow • ApplicationsinRadiologyImages
my poster presentation in the jcms2011 conferencePawitra Masa-ah
1) The study created a new scheme for calculating standardized uptake values (SUVs) from DICOM files using MATLAB and tested it by comparing results to a GE healthcare software.
2) The SUVs calculated from the MATLAB scheme showed a high correlation of 0.974 with the GE software. The accuracy was 85% on average based on a 95% confidence interval.
3) The results demonstrated the SUVs from the MATLAB scheme can be used interchangeably with the GE software, providing increased accessibility for physicians to interpret PET/CT scans without other applications.
Image Compression based on DCT and BPSO for MRI and Standard ImagesIJERA Editor
Nowadays, digital image compression has become a crucial factor of modern telecommunication systems. Image compression is the process of reducing total bits required to represent an image by reducing redundancies while preserving the image quality as much as possible. Various applications including internet, multimedia, satellite imaging, medical imaging uses image compression in order to store and transmit images in an efficient manner. Selection of compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly-Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm utilized for finding optimal solution from a set of possible values. The dominant factors of BPSO over other optimization techniques are higher convergence rate, searching ability and overall performance. The proposed technique divides the input image into 88 blocks. Discrete Cosine Transform (DCT) is applied to each block to obtain the coefficients. Then, the threshold values are obtained from BPSO. Based on this threshold, values of the coefficients are modified. Finally, quantization followed by the Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
This document discusses modifications made to the particle swarm optimization (PSO) algorithm to overcome disadvantages of the standard PSO. It provides background on PSO, describing it as a population-based heuristic optimization technique inspired by bird flocking. The document then reviews the standard PSO algorithm and its advantages, including simple concept, easy implementation, and robustness. It also notes some limitations like slow convergence near optima. The paper aims to study modifications to PSO parameters and performance to address such limitations.
Fuzzy clustering Approach in segmentation of T1-T2 brain MRIIDES Editor
Segmentation is a difficult and challenging problem in magnetic resonance images, and it is considered important in computer vision and artificial intelligence. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. In this paper, we present a novel FCM algorithm for weighted bias (also called intensity inhomogeneity) estimation and segmentation of MRI. Normally, the intensity inhomogeneities are attributed to imperfections in the radio-frequency coils or to problems associated with image acquisition. Our algorithm is formulated by modifying the objective function of the standard FCM, and it has the advantage that it can be applied at an early stage in an automated data analysis. Further, this paper proposes a center knowledge method in order to reduce the running time of the proposed algorithm. The proposed method can deal with intensity inhomogeneities and image noise effectively. We have compared our results with other reported methods. The results using real MRI data show that our method provides better results than standard FCM-based algorithms and other modified FCM-based techniques.
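The standard FCM iteration that the paper's modified objective builds on can be sketched as below; this minimal version works on a 1-D intensity vector and omits the bias-field term and the proposed center-knowledge speed-up.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=50):
    """Standard fuzzy c-means on a 1-D intensity vector; the paper's
    bias-field (inhomogeneity) term is omitted in this sketch."""
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(n_iter):
        # distance of every sample to every center (small floor avoids /0)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)
        # center update: mean weighted by u^m
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)
    return centers, u

# two synthetic intensity clusters around 10 and 100
data = np.concatenate([10 + np.linspace(-1, 1, 50), 100 + np.linspace(-1, 1, 50)])
centers, u = fuzzy_c_means(data)
```

On this toy data the centers converge to the two cluster means, and each column of the membership matrix sums to one.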
Uniform and non uniform single image deblurring based on sparse representatio...ijma
Considering the sparseness property of images, a sparse representation based iterative deblurring method is presented for single image deblurring under uniform and non-uniform motion blur. The approach taken is based on sparse and redundant representations over dictionaries adaptively trained from the single blurred-noisy image itself. Further, the K-SVD algorithm is used to obtain a dictionary that describes the image contents effectively. Comprehensive experimental evaluation demonstrates that the proposed framework, integrating the sparseness property of images, adaptive dictionary training and an iterative deblurring scheme, significantly improves deblurring performance, is comparable with state-of-the-art deblurring algorithms, and offers a powerful solution to an ill-conditioned inverse problem.
An Information Maximization approach of ICA for Gender ClassificationIDES Editor
In this paper, a novel and successful method for gender classification from human faces using a dimensionality reduction technique is proposed. Independent Component Analysis (ICA) is one such technique. In the current scheme, the focus is on the different algorithms and architectures of ICA. An information maximization ICA is discussed with its two architectures and compared with the two architectures of FastICA. A Support Vector Machine (SVM) is used as the classifier for separating the male and female classes. All experiments are done on the FERET database. Results are obtained for different combinations of training and test database sizes. For the larger training set, the SVM performs with an accuracy of 98%. The accuracy values vary with the size of the testing set, and the proposed system performs with an average accuracy of 96%. An improvement in performance is achieved using class discriminability, which yields 100% accuracy.
This document provides an overview of the course "Statistical Learning Theory and Applications" being taught at MIT in the spring of 2003. The course will cover supervised learning theory and algorithms including regularization networks and support vector machines. It will explore applications of learning from examples in various domains including bioinformatics, computer vision, and text classification. The course will take a multidisciplinary approach, exploring learning from the perspectives of mathematics, algorithms, and neuroscience. Students will complete problem sets and a final project, and participation will be part of the grading.
ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...IOSRJVSP
Brain-computer interface (BCI) is a communication pathway between the brain and an external device. It translates human thought into commands to control external devices. Electroencephalography (EEG) is a cost-effective and easier way to implement a BCI. This paper presents a novel method for classifying EEG during motor imagery by combining common spatial patterns (CSP) and linear discriminant analysis (LDA). In the proposed method, the EEG signal is bandpass-filtered into multiple frequency bands. The CSP features are then extracted from each of these bands. The LDA classifier is subsequently used to classify the CSP features. Experimental results are presented on a publicly available BCI competition dataset, and the performance is compared with existing approaches. The experimental results show that the proposed method yields superior cross-validation accuracies compared to prevailing methods.
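The CSP step can be sketched as a generalized eigendecomposition of the two class covariance matrices, followed by log-variance features. This toy sketch uses synthetic two-channel data and omits the filter-bank and LDA stages described in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns from two-class trials, via the generalized
    eigenproblem C_a w = lambda (C_a + C_b) w.
    trials_* have shape (n_trials, n_channels, n_samples)."""
    mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    _, vecs = eigh(ca, ca + cb)                  # eigenvalues ascending
    idx = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return vecs[:, idx]                          # filters from both ends

def log_var_features(trials, W):
    """Log of normalized variance of the spatially filtered trials."""
    proj = np.einsum('ck,tcs->tks', W, trials)
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# toy data: class A has high variance on channel 0, class B on channel 1
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 2, 200)) * np.array([3.0, 1.0])[None, :, None]
B = rng.normal(size=(20, 2, 200)) * np.array([1.0, 3.0])[None, :, None]
W = csp_filters(A, B)
fa, fb = log_var_features(A, W), log_var_features(B, W)
```

The filters maximize variance for one class while minimizing it for the other, so the log-variance features of the two classes separate clearly even before any classifier is applied.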
This document summarizes a research paper that proposes an improved deconvolution algorithm to estimate blood flow velocity in nailfold vessels more accurately. The paper describes limitations in existing algorithms related to blurring and proposes using deconvolution and other image enhancement techniques. Results show the new algorithm takes less time (20-21 seconds vs 42-43 seconds) and tracks particle movement more accurately, allowing more precise flow measurements. This helps diagnosis of diseases. Future work could involve additional segmentation and machine learning to further automate and improve reliability.
Gesture Recognition using Principle Component Analysis & Viola-Jones AlgorithmIJMER
Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human–computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions, and review applications involving the wavelet transform and principal component analysis for face and hand gesture recognition on digital images.
iNEDI - Accuracy Improvements and Artifacts Removal in Edge Based Image Inter...Tecnick.com LTD
In this paper we analyse the problem of general-purpose image upscaling that preserves edge features and natural appearance, and we present the results of subjective and objective evaluation of images interpolated using different algorithms. In particular, we consider the well-known NEDI (New Edge Directed Interpolation, Li and Orchard, 2001) method, showing that by modifying it to reduce numerical instability and making the region used to estimate the low-resolution covariance adaptive, it is possible to obtain relevant improvements in interpolation quality. The implementation of the new algorithm (iNEDI, improved New Edge Directed Interpolation), even if computationally heavy (as is Li and Orchard's method), obtained, in both subjective and objective tests, quality scores that are notably higher than those obtained with NEDI and other methods presented in the literature.
Highly Adaptive Image Restoration In Compressive Sensing Applications Using S...IJARIDEA Journal
This document presents a method for highly adaptive image restoration in compressive sensing applications using sparse dictionary learning (SDL) technique. It begins with an introduction to image restoration and compressive sensing. Then it discusses related works including total variation minimization, cosine algorithm, discrete wavelet transform, and Metropolis-Hastings algorithm. The proposed scheme is described involving sparse dictionary learning, extracting patches from an image, matching patches to a dictionary, stacking similar patches, and reconstructing the image. Results show the SDL technique achieves higher PSNR values than other methods compared. In conclusion, images can be effectively restored with adaptive dictionary learning in compressive sensing, though it requires more computation time than other methods.
VIDEO SEGMENTATION & SUMMARIZATION USING MODIFIED GENETIC ALGORITHMijcsa
Video summarization of segmented video is an essential process for video thumbnails, video surveillance and video downloading. Summarization deals with extracting a few frames from each scene and creating a summary video that conveys the full course of action of the whole video within a short duration of time. The proposed research work discusses the segmentation and summarization of frames. A genetic algorithm (GA) for segmentation and summarization is used to view the highlights of an event by selecting the few important frames required. The GA is modified to select only key frames for summarization, and the modified GA is compared with the standard GA.
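A minimal GA of the kind described could look like the sketch below. The fitness used here (summed pairwise distance between the selected frames' features) is a hypothetical stand-in for the paper's scene-based criterion, and the frame features are synthetic.

```python
import numpy as np

def ga_keyframes(frames, k, pop=30, gens=60, seed=0):
    """Toy GA for key-frame selection: a chromosome is a set of k frame
    indices; fitness is the summed pairwise distance between the selected
    frames (a stand-in for the paper's scene-based criterion)."""
    rng = np.random.default_rng(seed)
    n = len(frames)

    def fitness(idx):
        sel = frames[idx]
        return sum(np.linalg.norm(sel[a] - sel[b])
                   for a in range(k) for b in range(a + 1, k))

    popn = [rng.choice(n, size=k, replace=False) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)     # elitist selection
        popn = popn[:pop // 2]
        while len(popn) < pop:
            a = popn[rng.integers(len(popn))]
            b = popn[rng.integers(len(popn))]
            pool = np.union1d(a, b)              # crossover: merge parents
            child = rng.choice(pool, size=min(k, len(pool)), replace=False)
            while len(child) < k:                # repair undersized children
                extra = rng.integers(n)
                if extra not in child:
                    child = np.append(child, extra)
            if rng.random() < 0.3:               # mutation: replace one index
                child[rng.integers(k)] = rng.integers(n)
            popn.append(child)
    return popn[0]

# toy 'frames': one feature per frame, three well-separated scenes
frames = np.array([0.0, 0.1, 5.0, 5.1, 10.0, 10.2]).reshape(-1, 1)
best = ga_keyframes(frames, 3)
```

With three well-separated scenes, the GA settles on one representative frame per scene, since that maximizes the spread of the selected set.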
Detection of Carotid Artery from Pre-Processed Magnetic Resonance AngiogramIDES Editor
Boundary detection plays an important role in medical image analysis. In certain cases it becomes very difficult for doctors to assess the carotid arteries from the magnetic resonance angiography (MRA) of the neck. In this paper an attempt has been made to detect carotid arteries from neck magnetic resonance angiograms, so as to overcome such difficulties. The algorithm pre-processes the magnetic resonance angiograms and subsequently detects the carotid artery. Stenosis is expected to reduce the diameter of the vessel, and the diameter can be measured from the detected vasculature image. As the algorithm successfully detects the carotid artery from neck magnetic resonance angiograms, it will help doctors with diagnosis and serve as a step in the prevention of cardiovascular diseases.
Disease Classification using ECG Signal Based on PCA Feature along with GA & ...IRJET Journal
This document describes a method for classifying ECG signals to detect cardiovascular diseases using principal component analysis (PCA), genetic algorithms, and artificial neural networks. PCA is used to extract features from ECG signals. A genetic algorithm is then used to select optimal features and train an artificial neural network classifier. The method is tested on datasets from Physionet.org to classify ECG signals as normal or indicating conditions like bradycardia or tachycardia with high accuracy. The goal is to develop an automated system for ECG analysis and heart disease diagnosis.
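The PCA feature-extraction step from the pipeline above can be sketched via an SVD of the centered data matrix; the genetic-algorithm feature selection and the ANN classifier are not shown, and the "beats" here are synthetic.

```python
import numpy as np

def pca_features(signals, n_components):
    """PCA feature extraction via SVD.
    signals : (n_samples, n_features) matrix, e.g. windowed ECG beats."""
    centered = signals - signals.mean(axis=0)
    # right singular vectors of the centered data are the principal axes
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    explained = float((s[:n_components] ** 2).sum() / (s ** 2).sum())
    return centered @ components.T, explained

# toy 'beats': one dominant direction plus small noise
rng = np.random.default_rng(1)
base = rng.normal(size=(100, 1))
beats = base @ np.array([[1.0, 2.0, 3.0]]) + 0.01 * rng.normal(size=(100, 3))
feats, explained = pca_features(beats, 1)
```

On this near rank-1 data a single component captures almost all of the variance, which is exactly the dimensionality reduction PCA provides before classification.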
Black-box modeling of nonlinear system using evolutionary neural NARX modelIJECEIAES
Nonlinear systems with uncertainty and disturbance are very difficult to model using mathematical approaches. Therefore, a black-box modeling approach without any prior knowledge is necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network, combining a neural network and a modified differential evolution algorithm, is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric actuator SISO system and an experimental quadruple-tank MIMO system.
Face recognition using gaussian mixture model & artificial neural networkeSAT Journals
Abstract
Face recognition is a non-contact and friendly biometric identification technology. It has broad application prospects in the military, public security and economic security. In this work, we also consider an illumination-variable database. The images have been taken from a far distance and do not show a close view of the individual's face, whereas most face databases consider a clear face view. First, the face is located as the region of interest, and then LBP and LPQ descriptors, which are illumination-invariant in nature, are used. After this, a GMM is used to reduce the feature set by taking the negative log-likelihood from each LBP- and LPQ-described image histogram. Finally, an ANN is used for classification purposes. The experimental results show excellent accuracy rates in overall testing of the input data.
Keywords: Illumination invariant, face recognition, LBP, LPQ, GMM, ANN
IRJET- K-SVD: Dictionary Developing Algorithms for Sparse Representation ...IRJET Journal
This document discusses the K-SVD algorithm, which is a dictionary learning method for sparse representations of signals. It alternates between sparse coding to represent signals using the current dictionary, and dictionary updating to learn a new dictionary from the signal representations. The algorithm generalizes the K-means clustering process. The document provides details on the methodology of K-SVD, including its basic steps and application to synthetic and real image signals. It is concluded that K-SVD learns dictionaries that better represent given signal groups compared to alternative methods.
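The alternation between sparse coding and dictionary update can be sketched in a deliberately simplified form. This version uses 1-sparse coding, which makes K-SVD resemble gain-shape K-means; the full algorithm codes each signal with a pursuit method such as OMP.

```python
import numpy as np

def ksvd_1sparse(Y, n_atoms, n_iter=20, seed=0):
    """Minimal K-SVD sketch with 1-sparse coding. Alternates
    (1) assigning each signal column of Y to its best-matching atom and
    (2) updating each atom as the top singular vector of the signals
    assigned to it."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    assign = np.zeros(Y.shape[1], dtype=int)
    coeff = np.zeros(Y.shape[1])
    for _ in range(n_iter):
        # sparse coding stage: one atom per signal
        corr = D.T @ Y
        assign = np.argmax(np.abs(corr), axis=0)
        coeff = corr[assign, np.arange(Y.shape[1])]
        # dictionary update stage: rank-1 SVD per atom
        for k in range(n_atoms):
            members = np.where(assign == k)[0]
            if members.size == 0:
                continue
            U, S, Vt = np.linalg.svd(Y[:, members], full_matrices=False)
            D[:, k] = U[:, 0]
            coeff[members] = S[0] * Vt[0]
    return D, assign, coeff

# signals drawn from two orthogonal ground-truth atoms plus noise
rng = np.random.default_rng(1)
true = np.eye(4)[:, :2]
which = rng.integers(0, 2, 60)
Y = true[:, which] * rng.uniform(1.0, 2.0, 60) + 0.01 * rng.normal(size=(4, 60))
D, assign, coeff = ksvd_1sparse(Y, 2)
```

The learned atoms stay unit-norm (as in K-SVD proper), and the rank-1 reconstruction per atom recovers the signals up to the noise level.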
A COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN FINANCIAL PREDICTIONIJCSEA Journal
Stock market price index prediction is a challenging task for investors and scholars. Artificial neural networks have been widely employed to predict financial stock market levels thanks to their ability to model nonlinear functions. The accuracy of backpropagation neural networks trained with different heuristic and numerical algorithms is measured for comparison purposes. It is found that numerical algorithms outperform heuristic techniques.
EFFICIENT USE OF HYBRID ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM COMBINED WITH N...csandit
This research study proposes a novel method for automatic fault prediction from foundry data by introducing the so-called Meta Prediction Function (MPF). Kernel Principal Component Analysis (KPCA) is used for dimension reduction. Different algorithms are used for building the MPF, such as Multiple Linear Regression (MLR), Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machine (SVM) and Neural Network (NN). We used classical machine learning methods such as ANFIS, SVM and NN for comparison with our proposed MPF. Our empirical results show that the MPF consistently outperforms the classical methods.
Applications of Artificial Neural Networks in Cancer PredictionIRJET Journal
This document discusses applications of artificial neural networks in cancer prediction and prognosis. It summarizes several studies that have used ANNs to predict breast cancer prognosis and recurrence, as well as classify types of lung cancer.
For breast cancer prognosis, a Maximum Entropy Estimation model was shown to outperform multi-layer perceptrons and probabilistic neural networks. For predicting breast cancer recurrence, an ANN achieved the best performance compared to other machine learning algorithms based on accuracy and AUC.
An ANN combined with a genetic algorithm was also able to successfully identify genes that classify lung cancer status. The ANN-GA model achieved over 97% accuracy in classifying different types of lung cancer based on gene expression data.
IRJET- A Novel Gabor Feed Forward Network for Pose Invariant Face Recogni...IRJET Journal
The document proposes an Analytic Gabor Feed Forward Network (AGFN) for pose invariant face recognition. AGFN uses a single hidden layer to efficiently extract Gabor features from raw face images without computationally expensive convolutions. Features from multiple orientations and scales are fused at the output layer. The network is trained using total error rate minimization to find a globally optimal solution without iterations. Experiments on several public face datasets showed the AGFN approach achieved accurate face recognition while being computationally efficient.
Adaptive modified backpropagation algorithm based on differential errorsIJCSEA Journal
A new efficient modified back propagation algorithm with adaptive learning rate is proposed to increase the convergence speed and to minimize the error. The method eliminates initial fixing of learning rate through trial and error and replaces by adaptive learning rate. In each iteration, adaptive learning rate for output and hidden layer are determined by calculating differential linear and nonlinear errors of output layer and hidden layer separately. In this method, each layer has different learning rate in each iteration. The performance of the proposed algorithm is verified by the simulation results.
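The abstract does not give the exact differential-error formula, so the sketch below uses a simple stand-in rule, assumed for illustration, in which each layer's learning rate is rescaled every iteration from the norm of that layer's own error gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny 2-4-1 sigmoid network on XOR data
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(3000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = out - y
    losses.append(float((err ** 2).mean()))
    # backward pass
    d_out = err * out * (1 - out)
    g2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)
    g1 = X.T @ d_h
    # per-layer adaptive learning rate, derived from each layer's own
    # error signal (a stand-in for the paper's differential-error rule)
    lr2 = 0.5 / (1.0 + np.linalg.norm(g2))
    lr1 = 0.5 / (1.0 + np.linalg.norm(g1))
    W2 -= lr2 * g2
    W1 -= lr1 * g1
```

The point of the sketch is the structure, not the exact formula: output and hidden layers each receive their own rate in every iteration, removing the need to fix a single global learning rate by trial and error.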
Saliency Based Hookworm and Infection Detection for Wireless Capsule Endoscop...IRJET Journal
This document presents a method for detecting hookworm infection and ulcers in wireless capsule endoscopy images using saliency-based segmentation. The proposed method uses multi-level superpixel segmentation followed by feature extraction of color and texture properties. A particle swarm optimization algorithm is then used to classify images as healthy or infected/ulcerous based on the extracted features. Experimental results on capsule endoscopy images demonstrate the effectiveness of the proposed method at automatically detecting abnormalities in an efficient and non-invasive manner.
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...Amir Ziai
This document describes a study on modifying the Pareto Set Pursuing (PSP) method to solve multi-objective optimization problems with mixed continuous and discrete variables. The PSP method was originally developed for problems with only continuous variables. The modifications allow it to handle mixed variable problems. The performance of the modified PSP method is compared to other multi-objective algorithms based on metrics like efficiency, robustness, and closeness to the true Pareto front with a limited number of function evaluations. Preliminary results on benchmark problems and two engineering design examples show that the modified PSP is competitive when the number of function evaluations is limited, but its performance decreases as the number of design variables increases.
Improved nonlocal means based on pre classification and invariant block matchingIAEME Publication
One of the most popular image denoising methods based on self-similarity is called nonlocal means (NLM). Though it can achieve remarkable performance, this method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure, and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and row image weighted averaging into the NLM framework. Experimental results show that the proposed technique can perform denoising better than the original NLM both quantitatively and visually, especially when the noise level is high.
Improved nonlocal means based on pre classification and invariant block matchingIAEME Publication
This document summarizes an article that proposes improvements to the nonlocal means (NLM) image denoising algorithm. The proposed method first applies Gaussian blurring to pre-process the noisy image. It then extracts features from image patches using Hu's moment invariants. Next, it performs k-means clustering on the feature vectors to group similar patches. Finally, it applies row image weighted averaging to reconstruct the image. The experimental results showed this method can perform better denoising than the original NLM, especially at higher noise levels, by providing more reliable candidate patches for the weighted averaging.
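The baseline NLM that this paper improves on can be sketched as follows; the pre-classification, clustering and invariant block-matching steps are omitted, and the parameters are illustrative.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=0.5):
    """Baseline nonlocal means: each pixel becomes a weighted average of
    search-window pixels, weighted by the similarity of their surrounding
    patches to the reference patch."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum = acc = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    # weight decays with mean squared patch difference
                    w = np.exp(-((ref - cand) ** 2).mean() / (h * h))
                    wsum += w
                    acc += w * pad[ni, nj]
            out[i, j] = acc / wsum
    return out

# flat test image corrupted by Gaussian noise
rng = np.random.default_rng(0)
noisy = 1.0 + 0.2 * rng.normal(size=(12, 12))
denoised = nlm_denoise(noisy)
```

The quadruple loop also makes the paper's first complaint concrete: the similarity measure is computed for every pixel pair in the search window, which is exactly the cost the pre-classification step is designed to cut.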
SURVEY ON POLYGONAL APPROXIMATION TECHNIQUES FOR DIGITAL PLANAR CURVESZac Darcy
This document summarizes and compares three techniques for polygonal approximation of digital planar curves:
1) Masood's technique which iteratively deletes redundant points and uses a stabilization process to optimize point locations.
2) Carmona's technique which suppresses redundant points using a breakpoint suppression algorithm and threshold.
3) Tanvir's adaptive optimization algorithm which focuses on high curvature points and applies an optimization procedure.
The techniques are evaluated on standard shapes using measures like number of points, compression ratio, error, and weighted error. Masood's technique generally had lower error while Tanvir's often achieved the highest compression.
Artificial Intelligence Applications in Petroleum Engineering - Part IRamez Abdalla, M.Sc
This document discusses applications of artificial intelligence, specifically artificial neural networks and genetic algorithms, in petroleum engineering. It provides an overview of neural networks in OnePetro papers, describes the basic concepts and training processes of neural networks and genetic algorithms. It then discusses various applications of these techniques in reservoir engineering, production technologies, and oil well drilling, including reservoir characterization, modeling, well test analysis, permeability prediction, production monitoring, drilling optimization, and more. The presentation aims to explore these applications in more depth.
Quality Prediction in Fingerprint CompressionIJTET Journal
A new algorithm for fingerprint compression based on sparse representation is introduced. First, a dictionary is constructed from a sparse combination of a set of fingerprint patches. Dictionaries can be designed either by selecting one from a prespecified set or by adapting a dictionary to a set of training signals. In this paper, we use the K-SVD algorithm to construct the dictionary. After computation of the dictionary, the image is quantized, filtered and encoded. The resulting image may be of three qualities: Good, Bad and Ugly (the GBU problem). In this paper, we overcome the GBU problem by predicting the quality of the image.
Neuro-Genetic Optimization of LDO-fired Rotary Furnace Parameters for the Pro...IJERD Editor
This document describes a study that uses a neuro-genetic optimization technique to determine optimal parameters for a rotary furnace used in casting production. A neural network is trained on experimental data to model the relationship between furnace parameters (e.g. rotational speed, air temperature) and melting rate. The neural network is then integrated with a genetic algorithm to rapidly find parameter values that maximize melting rate in a single run and optimize casting quality. The technique was able to determine furnace parameter values that correlated well with experimental data.
Neuro-Genetic Optimization of LDO-fired Rotary Furnace Parameters for the Pro...IJERD Editor
The rising demand for high-quality homogeneous castings necessitates that a vast amount of manufacturing knowledge be incorporated in manufacturing systems. A rotary furnace involves several critical parameters, such as excess air, flame temperature, rotational speed of the furnace drum, melting time, preheat air temperature, fuel consumption and melting rate of the molten metal, which should be controlled throughout the melting process. A complex relationship exists between these manufacturing parameters, and hence there is a need to develop models which can capture this complex interrelationship and enable fast computation. In the present work, we propose a generic approach where the applicability and effectiveness of neural networks in function approximation is used for rapid estimation of the melting rate, and they are integrated into the framework of a genetic evolutionary algorithm to form a neuro-genetic optimization technique. A neural network model is trained with the experimental results. The results indicate that the heuristic converges to better solutions rapidly, as it provides the values of the various process parameters for optimizing the objective in a single run and thus assists in improving quality in the development of sound parts.
Improving Insurance Risk Prediction with Generative Adversarial Networks (GANs)Armando Vieira
Generative adversarial networks (GANs) show promise for addressing data imbalance issues in insurance modeling. GANs were originally developed for computer vision tasks but have also been applied to tabular data. Conditional GANs and CycleGANs can generate synthetic minority class examples to balance datasets. In a case study on insurance fraud detection, GANs outperformed traditional resampling techniques like SMOTE in improving precision, recall, and F1-score. However, GANs require dense feature representations and consistency over time to be effective for tabular data imbalance problems.
Predicting online user behaviour using deep learning algorithmsArmando Vieira
We propose a robust classifier to predict buying intentions based on user behaviour within a large e-commerce website. In this work we compare traditional machine learning techniques with the most advanced deep learning approaches. We show that both Deep Belief Networks and Stacked Denoising Auto-Encoders achieved a substantial improvement by extracting features from high-dimensional data during the pre-training phase. They also prove more convenient for dealing with severe class imbalance.
Boosting conversion rates on ecommerce using deep learning algorithmsArmando Vieira
This document summarizes an approach to use deep learning algorithms to predict the probability that online shoppers will purchase a product based on their website interactions. The approach involves using stacked auto-encoders to reduce the high dimensionality of the product interaction data before applying classification algorithms. Testing on various datasets showed that random forest outperformed logistic regression and that incorporating time data and more training examples improved prediction performance. Further work proposed applying stacked auto-encoders and deep belief networks to fully leverage the large amount of product interaction data.
Seasonality effects on second hand cars salesArmando Vieira
This document analyzes seasonality effects on car sales using weekly aggregated car deal data from October 2012 to November 2014. It finds that:
1) A sudden drop in the last week's sales can be explained by statistical fluctuations based on the normal distribution of weekly deals over the period.
2) Months with the lowest deals (November and December) still show that last week's sales of 154 were a normal occurrence based on the mean and standard deviation for those months.
3) Google trends data for the keyword "used cars" shows a clear seasonality pattern of decreasing searches before the end of the year and increasing searches at the start and middle of the year.
Visualizations of high dimensional data using R and ShinyArmando Vieira
This document discusses building interactive visualizations with Shiny and R to explore social and health care data from the UK. It describes using inputs like demographics, economic deprivation, and health metrics to create outputs like a health score and stress score. Visualizations were created with Shiny and Google Motion Charts to compare districts. The document concludes discussing using machine learning techniques like embeddings and exploring causality.
The document discusses GPU computing for machine learning. It notes that machine learning algorithms are computationally expensive and their requirements increase with data size. GPUs provide significant performance gains over CPUs for parallel problems like machine learning. Many machine learning algorithms have been implemented on GPUs, achieving speedups of 1-2 orders of magnitude. However, most GPU implementations are closed-source. Open-source implementations provide advantages like reproducibility and fair algorithm comparisons.
This document provides an overview of deep learning algorithms, including deep neural networks, convolutional neural networks, deep belief networks, and restricted Boltzmann machines. It discusses key concepts such as learning in deep neural networks, the evolution timeline of deep learning approaches, deep architectures, and restricted Boltzmann machines. It also covers training restricted Boltzmann machines using contrastive divergence, constructing deep belief networks by stacking restricted Boltzmann machines, and practical considerations for pre-training and fine-tuning deep belief networks.
Extracting Knowledge from Pydata London 2015Armando Vieira
The document discusses using deep learning techniques like word embeddings to jointly embed text and knowledge graphs for information extraction purposes. Word embeddings represent words as vectors in a way that captures semantic meaning, allowing related words to have similar embeddings. Knowledge graphs explicitly represent entities and relations. The document proposes combining text corpora with knowledge graphs by training a model on both to generate embeddings that incorporate information from both sources. This allows extracting knowledge expressed in text and transforming it into a machine-readable format.
We propose an algorithm for training Multi-Layer Perceptrons for classification problems, which we named Hidden Layer Learning Vector Quantization (H-LVQ). It consists of applying Learning Vector Quantization to the last hidden layer of an MLP, and it gave very successful results on problems containing a large number of correlated inputs. It was applied with excellent results to the classification of Rutherford backscattering spectra and to a benchmark problem of image recognition. It may also be used for efficient feature extraction.
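The LVQ component of H-LVQ can be sketched with the classic LVQ1 update; in the full method the inputs would be the MLP's last-hidden-layer activations rather than the raw features used in this toy example.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    """LVQ1 update rule: move the winning prototype toward the sample
    when the labels agree, away from it otherwise. In H-LVQ the inputs
    X would be the MLP's last-hidden-layer activations, not raw data."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # winner = nearest prototype
            k = int(np.argmin(np.linalg.norm(P - xi, axis=1)))
            sign = 1.0 if proto_labels[k] == yi else -1.0
            P[k] += sign * lr * (xi - P[k])
    return P

# two Gaussian classes; one prototype per class, deliberately offset
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, size=(30, 2)),
               rng.normal([5, 5], 0.5, size=(30, 2))])
y = np.array([0] * 30 + [1] * 30)
P = lvq1_train(X, y, np.array([[1.0, 1.0], [4.0, 4.0]]), [0, 1])
```

After training, each prototype sits near its class centre, so classification reduces to a nearest-prototype lookup in the learned feature space.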
machine learning in the age of big data: new approaches and business applicat...Armando Vieira
Presentation at University of Lisbon on Machine Learning and big data.
Deep learning algorithms and applications to credit risk analysis, churn detection and recommendation algorithms
Neural Networks and Genetic Algorithms Multiobjective acceleration by Neural NetworkArmando Vieira
The document proposes a hybrid multi-objective evolutionary algorithm that uses an artificial neural network to reduce the number of objective function evaluations needed. It combines a multi-objective evolutionary algorithm (MOEA) with an artificial neural network (ANN) to approximate solutions. The ANN is trained on solutions evaluated by the MOEA and then used to estimate fitness for unevaluated solutions to further guide the search. This approach aims to improve optimization efficiency over existing MOEAs for problems with computationally expensive objective functions.
Optimization of digital marketing campaignsArmando Vieira
This document discusses using machine learning techniques to optimize digital marketing campaigns. Specifically, it analyzes data from campaigns using clustering, visualization and predictive models. Unsupervised learning methods like k-means clustering, PCA, MDS and SOM are used to identify patterns in large digital data. Supervised models like SVMs and random forests predict conversions. The goal is to extract actionable insights to improve ROI, engagement and sales through optimization of parameters like ad design, keywords, bids, channels and budget allocation.
Credit risk with neural networks bankruptcy prediction machine learningArmando Vieira
The document discusses credit risk management with AI tools. It summarizes that credit scoring is used to statistically quantify risk by converting applicant information into numbers and a score. The objective is to forecast future performance based on past client behavior. It then discusses using various machine learning models like HLVQ-C and neural networks to predict financial distress, classify companies, and improve bankruptcy prediction. The models were tested on real world credit and financial datasets.
This document outlines a proposal called "Democracy 2" which aims to define a new democratic model that is more citizen-centric and suited to today's society. It proposes moving beyond representative democracy by giving citizens a more direct role in important political decisions through information technology. The initiative will define the new model through contributions from citizens and experts across three streams focusing on political, social, and technology issues. It will also conduct a proof of concept trial of the new model at the local/regional level in multiple countries. The overall goal is to create a more open and representative democratic system.
Sairmais.com is a new tourism web portal that uses a recommendation system to provide personalized recommendations to users. It analyzes a user's social connections and preferences to filter vast amounts of tourism information and provide the most relevant options. The portal aims to be a one-stop platform for comprehensive geo-referenced tourism data. It incorporates review sharing and social networking features commonly seen on sites like Amazon, Facebook and TripAdvisor. Sairmais.com's recommendation system analyzes the relationships between users, items, and ratings to provide customized recommendations tailored to each individual user's interests. The system seeks to simplify the travel planning process and provide a more personal touch than other major tourism websites.
Sairmais.com is a new tourism web portal that uses a recommendation system to provide personalized recommendations to users. It analyzes a user's social connections and ratings of tourism items like hotels and restaurants to filter vast amounts of online tourism information and provide the most relevant options. The portal aims to be a one-stop platform for comprehensive geo-referenced tourism data. It incorporates social networking features allowing users to share experiences and opinions to improve recommendations for others. The system utilizes collaborative tagging and ratings within a user's social network to build profiles and predict their preferences, helping users more easily plan trips by finding the best options tailored specifically for them.
Manifold learning for bankruptcy predictionArmando Vieira
This document presents a method for bankruptcy prediction and analysis using manifold learning. Specifically, it applies the Isomap algorithm with class label information incorporated into the dissimilarity matrix (S-Isomap) on a real dataset of French companies. S-Isomap is shown to have comparable testing accuracy to other classifiers like SVM and better than KNN and RVM, while providing excellent lower-dimensional visualization with only 3 dimensions. The S-Isomap approach achieves separability of patterns from healthy to bankrupt firms in the embedded space. This preprocessing technique using manifold learning is a promising approach for bankruptcy prediction and analysis on high-dimensional financial data.
This document presents a study comparing several machine learning models for personal credit scoring: logistic regression, multilayer perceptron, support vector machine, AdaBoostM1, and Hidden Layer Learning Vector Quantization (HLVQ-C). The models were tested on datasets from a Portuguese bank. HLVQ-C achieved the highest accuracy and was the most useful model according to a proposed measure that considers earnings from denying bad credits and losses from denying good credits. While other models had higher error rates for good credits, HLVQ-C balanced accuracy and usefulness the best, making it suitable for commercial credit scoring applications.
O autor descreve como a curiosidade natural das crianças é inibida pelo sistema educativo, transformando o ensino da ciência em algo abstrato e livresco em vez de prático e exploratório. Isto leva ao desinteresse dos alunos pela ciência e ao pequeno papel de Portugal na investigação científica. Defende que a educação deve estimular a curiosidade das crianças em vez de a reprimir.
Artificial neural networks for ion beam analysisArmando Vieira
The document discusses using artificial neural networks (ANNs) for ion beam analysis. Specifically, it discusses:
1) Using ANNs to analyze Rutherford backscattering spectroscopy (RBS) data in an automated way, by recognizing patterns in the data related to sample properties without explicit knowledge of causes.
2) Training ANNs on datasets of RBS spectra with known sample parameters to allow the ANNs to relate spectral features to things like layer thickness, composition, and depth.
3) The potential for ANNs to enable real-time automated analysis and optimization of ion beam experiments.
1. International Congress on Evolutionary Methods for Design,
Optimization and Control with Applications to Industrial Problems
EUROGEN 2003
G. Bugeda, J.-A. Désidéri, J. Periaux, M. Schoenauer and G. Winter (Eds)
CIMNE, Barcelona, 2003
A MULTI-OBJECTIVE EVOLUTIONARY ALGORITHM USING
APPROXIMATE FITNESS EVALUATION
António Gaspar-Cunha* and Armando Vieira†
*
Department of Polymer Engineering
University of Minho
Campus de Azurém, 4800-058 Guimarães, Portugal
e-mail: gaspar@dep.uminho.pt, web page: http://www.dep.uminho.pt
†
Department of Physics
Instituto Superior de Engenharia do Porto
R. S. Tomé, 4200 Porto, Portugal
e-mail: asv@isep.ipp.pt - Web page: http://www.isep.ipp.pt
Key words: Multi-Objective, Evolutionary Algorithms, Approximate Fitness Evaluation,
Polymer Extrusion.
Abstract. In this work a method to accelerate the search of a MOEA using Artificial Neural
Networks (ANN) to approximate the fitness functions to be optimized is proposed. The
algorithm consists of the following steps. Initially the MOEA runs over a small number of
generations. A neural network is then trained using the evaluations obtained by the
evolutionary algorithm. Once the ANN is trained, the MOEA runs for another set of
generations, but using the ANN as an approximation to the exact fitness function. As the
algorithm evolves, the population moves to different regions of the search space and the
quality of the approximation performed by the neural network deteriorates. When the error
becomes prohibitively high, the evolutionary algorithm proceeds using the exact functions.
A new training dataset is then collected and used to retrain the ANN. The process continues
until an acceptable Pareto-front is found. This method was applied to several benchmark
multi-objective test functions and to a real problem as well, namely the optimization of a
polymer extrusion process. A reduction in the number of exact function calls of between 20
and 40% was achieved.
1 INTRODUCTION
A Multi-Objective Evolutionary Algorithm (MOEA) using an approximate fitness
evaluation obtained with an artificial neural network is proposed. The objective is to reduce
the number of fitness evaluations in MOEAs on computationally expensive problems while
maintaining their good search capabilities. We show that this approach may save considerable
computational time.
One of the major difficulties in applying MOEAs to real problems is the large number of
evaluations of the objective functions, of the order of thousands, necessary to obtain an
acceptable solution. Often these are time-consuming evaluations obtained by solving
numerical models with expensive methods such as finite differences or finite elements.
Reducing the number of evaluations necessary to reach an acceptable solution is thus of major
importance [1,2]. This difficulty may be alleviated using distributed computation, where each
fitness evaluation is performed on a separate processor. However, this requires a large number
of networked computers and an adequate parallelisation of the numerical code.
Here we investigate an alternative solution to this problem: approximating the functions
to be evaluated during optimization. Several methods can be used to approximate the fitness
evaluation; the response surface method and Kriging statistical models are often applied in
engineering and in the design of experiments, respectively.
2 NEURAL NETWORKS AS FUNCTION APPROXIMATIONS
In this work Artificial Neural Networks (ANN) will be used to approximate the fitness
function. It is well known that, given sufficient training data, a neural network can
approximate any function with arbitrary accuracy. ANNs are particularly well suited for
non-linear regression analysis on high-dimensional data [3]. In this case the neural network is
trained using the previous function evaluations performed by the evolutionary algorithm.
With enough data points the training error becomes sufficiently small and the ANN is
considered to be a good estimator of the fitness function.
As with any other approximation method, the performance of the neural network is closely
related to the quality of the training data. If the training data does not cover the whole domain,
large errors may occur due to extrapolation. Errors may also occur when the set of training
points is not sufficiently dense and uniform. These problems are particularly acute for
approximations to functions used in MOEA optimization: first, these fitness functions may
have strong oscillations, and second, the domain where we perform the approximation varies
at each iteration.
A different hybrid approach is proposed, where Neural Networks are used to estimate the
functions used by a Multi-Objective Genetic Algorithm, namely the Reduced Pareto Set
Genetic Algorithm (RPSGAe) [4, 5].
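The idea of fitting a network to previously evaluated solutions can be sketched as follows. This is a minimal numpy illustration with a cheap stand-in objective and a hand-written one-hidden-layer network; the architecture, training procedure and objective here are assumptions for illustration, not the setup used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Cheap stand-in for an expensive objective (hypothetical example,
    # not the extrusion model optimized in the paper).
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

# "Previous evaluations" as collected by the evolutionary algorithm.
X = rng.uniform(-1, 1, size=(200, 2))
y = fitness(X)

# One-hidden-layer network trained by plain gradient descent on the
# mean squared error between predicted and exact fitness values.
n_hidden = 20
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, (H @ W2 + b2).ravel()

initial_mse = np.mean((forward(X)[1] - y) ** 2)
for _ in range(2000):
    H, pred = forward(X)
    err = (pred - y)[:, None]                 # (n, 1)
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = err @ W2.T * (1 - H ** 2)            # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_mse = np.mean((forward(X)[1] - y) ** 2)
```

Once trained, `forward` can stand in for the exact fitness at negligible cost, which is the core of the hybrid scheme described next.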
3 ALGORITHM PROPOSED
Figure 1 illustrates the proposed method. First the Genetic Algorithm runs for p
generations to obtain the set of evaluations necessary for the first training of the neural
network. At this point two methods may be considered. The first method, which we call A, is
to simply use the approximate model to evaluate all the solutions during the next q
generations. The other method, which we call B, consists in evaluating exactly only a fraction
M of the population, consisting of N individuals, and estimating the remaining N-M
individuals using the output of the trained neural network.
In method B the evolution of the error produced by the approximate model, eNN, can be
directly verified. As the algorithm evolves, points in the search space converge to the desired
solution.
Method B has the advantage that both parameters p and q are automatically determined
using a simple criterion. In this method the error introduced by the approximation (eNN) can
be directly monitored by:

$e_{NN} = \frac{1}{M} \sum_{j=1}^{M} \frac{1}{K} \sum_{i=1}^{K} \left( C_{i,j}^{NN} - C_{i,j} \right)^{2}$    (1)

where K is the number of criteria, M is the number of solutions evaluated using both the exact
function and the ANN, $C_{i,j}^{NN}$ is the value of criterion i for solution j evaluated by the ANN
and $C_{i,j}$ is the value of criterion i for solution j evaluated by the exact function.
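The error measure eNN defined in Eq. (1) is straightforward to compute; for instance (a sketch, with the criteria values stored as M x K arrays):

```python
import numpy as np

def approximation_error(C_nn, C_exact):
    """eNN from Eq. (1): mean over the M doubly-evaluated solutions of the
    squared criterion-wise error, averaged over the K criteria."""
    C_nn = np.asarray(C_nn, float)       # shape (M, K), ANN estimates
    C_exact = np.asarray(C_exact, float)  # shape (M, K), exact values
    M, K = C_exact.shape
    return ((C_nn - C_exact) ** 2).sum(axis=1).sum() / K / M
```

This reduces to the mean squared error over all M x K criterion values, which is why only the M exactly re-evaluated individuals are needed to monitor it.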
Figure 1: Schematic structure of the method A algorithm. Blocks of p generations of the
RPSGA with exact function evaluation alternate with blocks of q generations of the RPSGA
with neural network evaluation; between blocks, the neural network is (re)trained using
solutions from the preceding p generations.
As the algorithm evolves it may drift to regions outside the domain covered by the initial
training points, where the approximation given by the neural network may no longer be
adequate. The error term allows us to monitor this situation and thus automatically determine
the number q of generations during which the approximate model is used. Thus q is the
number of generations for which the following inequality holds:

$e_{NN} < e_0$    (2)

where $e_0$ is a value that can be fixed by the user or adapted over the evolution towards the
desired Pareto-front.
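The overall method-B schedule described above can be sketched as follows. The interfaces of `evaluate_exact`, `train_surrogate` and `evolve` are hypothetical placeholders, not the RPSGAe code; the sketch only shows how p and q emerge from the error criterion:

```python
import numpy as np

def run_with_surrogate(evaluate_exact, train_surrogate, evolve,
                       population, p=15, frac_exact=0.3, e0=0.03,
                       max_generations=300):
    """Method-B style schedule: alternate p exactly evaluated generations
    (collecting training data) with surrogate generations that last as
    long as the monitored error e_NN stays below the threshold e0."""
    archive_x, archive_f = [], []
    gen = 0
    while gen < max_generations:
        # Phase 1: p generations with exact evaluation, archiving data.
        for _ in range(p):
            fitness = [evaluate_exact(x) for x in population]
            archive_x += list(population); archive_f += fitness
            population = evolve(population, fitness)
            gen += 1
        surrogate = train_surrogate(archive_x, archive_f)
        # Phase 2: surrogate generations; a fraction M of individuals is
        # also evaluated exactly so that e_NN can be monitored (Eq. 1).
        while gen < max_generations:
            M = max(1, int(frac_exact * len(population)))
            approx = [surrogate(x) for x in population]
            exact = [evaluate_exact(x) for x in population[:M]]
            e_nn = float(np.mean((np.array(approx[:M]) - np.array(exact)) ** 2))
            if e_nn >= e0:
                break  # surrogate no longer trusted: go retrain it
            fitness = exact + approx[M:]
            population = evolve(population, fitness)
            gen += 1
    return population
```

Note that q is never set explicitly: it is simply the number of inner-loop iterations completed before the inequality (2) fails.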
4 RESULTS AND DISCUSSION
The proposed method was tested using the ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6
bi-objective functions [6], with 30 variables each (except for ZDT4, where 10 variables were
used). The aim is to cover various types of Pareto-optimal fronts, such as convex, non-convex,
discrete, multimodal and non-uniform [6].
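For reference, ZDT1, the simplest of these test problems, can be written as follows (the standard definition from [6], not code from the paper):

```python
import numpy as np

def zdt1(x):
    """ZDT1 bi-objective test function (typically 30 variables in [0, 1]).
    Its Pareto-optimal front is convex and corresponds to g(x) = 1,
    i.e. x_2 = ... = x_n = 0."""
    x = np.asarray(x, float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2
```

The other ZDT functions follow the same (f1, g, f2) construction but change g and the shape term to obtain non-convex, discrete, multimodal and non-uniform fronts.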
In order to achieve a clear comparison of the performance of our method the following
criterion is used:

$S^{*} = \frac{S_{NN} - S}{S_{NN}}$    (3)

where $S_{NN}$ and $S$ are the averages of the S-metric obtained with and without the ANN,
respectively.
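For a bi-objective minimisation problem the S-metric is the area dominated by the front with respect to a reference point. Both quantities can be sketched as follows (a standard construction for the 2-D case, not the implementation used in the paper):

```python
def s_metric_2d(front, ref):
    """Hypervolume (Zitzler's S-metric) of a bi-objective minimisation
    front: area between the non-dominated points and a reference point
    that is worse than every front point in both objectives."""
    pts = sorted(front)                  # ascending in f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                 # non-dominated step
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def s_star(s_nn, s):
    """Relative S-metric difference S* from Eq. (3)."""
    return (s_nn - s) / s_nn
```

A larger S-metric means a front closer to (and better spread along) the Pareto-optimal front, so S* directly measures the loss or gain introduced by the ANN approximation.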
Initially, the relevance of some parameters to the algorithm performance was studied,
namely: the number of generations evaluated with the exact function, p (5, 10 and 15
generations); the number of generations evaluated with the approximate model, q (5, 10 and
15 generations); the number of hidden neurons of the neural network, Nh (10, 20 and 30
neurons); the learning rate of the neural network, η (0.1, 0.2, 0.3 and 0.4); and the fraction of
individuals evaluated by the real function in each of the q generations, ξ (10, 30 and 50%).
During the first p generations the RPSGAe uses exact function evaluation with a
population size of 100, 300 generations, a roulette-wheel selection strategy, a crossover
probability of 0.8, a mutation probability of 0.05, a random seed of 0.5, 30 ranks and limits of
indifference of the clustering technique of 0.2 [4]. These data were used for the first training
of the neural network, which achieved a mean square error of less than 1%. This error
increases as the search proceeds but never exceeds 7%.
The results obtained with this approach are compared with those obtained using the
RPSGAe alone. The comparison was quantified using the S-metric proposed by Zitzler [7],
which is adequate for problems with few objective dimensions [8]. Each run was repeated 5
times in order to take into account the variation of the random initial population. Since the
computation time required to train and test the neural network is negligible, we decided to use
the number of real objective function evaluations as the significant running parameter. For
each generation we calculate the average of the metric over the 5 runs as a function of the
number of evaluations performed so far.
Figure 2 compares the results obtained with the traditional RPSGAe and the results
obtained with the present two methods, A and B, for the ZDT1 function. From this figure it is
possible to see that the number of exact evaluations needed to reach the same S-metric is
reduced by about 36% for method A and 28% for method B. However, method B has the
advantage that no parameter optimization is needed, and therefore results are obtained in a
single run. Similar results were achieved with different levels of allowed error: the number of
exact evaluations necessary decreases as the allowed error increases.
Figure 2: Evolution of the S-metric difference (S*, in %) and of the difference in the number
of evaluations (Eval*, in %) over 300 generations for the ZDT1 test problem, using methods
A and B. The following parameters were used: p = 15, q = 10, Nh = 10 and η = 0.2.
5 APPLICATION TO POLYMER EXTRUSION
The methodology proposed (method B) was applied to the screw geometry optimization of
a single-screw polymer extruder. The extruder is characterized by an Archimedes-type screw
that rotates inside a heated barrel. The extruder receives the solid pellets at the inlet and melts,
mixes and homogenises the material. The melted polymer is then pumped through the die in
order to produce an extrudate with a prescribed cross-section. For modelling purposes, the
process is considered as a succession of functional zones characterized by stress, mass, heat or
force balances, coupled by adequate boundary conditions at the interfaces between the zones.
These differential equations are solved with the method of finite differences. A detailed
description of these models and of the required optimization can be found elsewhere [12, 18].
Recently the process was proposed as a real test problem for EMO algorithms and was made
available through the internet to the EMO community [19, 20].
Method B was applied to reduce the number of exact evaluations of a MOEA applied to
the problem of determining the geometry of a conventional screw extruder that
simultaneously maximizes the mass output and the degree of mixing. Ten runs were carried
out, five using the RPSGA without neural networks and five using method B with an allowed
error of 3%. The ANN parameters were set to Nh = 10, η = 0.2 and α = 0.25. Figure 3 shows
the evolution of S* and of the difference in exact evaluations as a function of the number of
generations. The improvement obtained in the number of exact function evaluations necessary
is approximately 40%. This corresponds to a reduction in computation time from 8.5 to 6.0
hours on a PC with an AMD processor at 1666 MHz.
6 CONCLUSIONS
The efficiency of this approach depends strongly not only on the difficulty of the
functions to be optimized but also on the degree of approximation chosen. Using a
conservative approximation produces no relevant gain in computation time, while a more
aggressive approach may lead to large errors in the objective functions and consequently a
poor Pareto-front. Two methods to select an adequate approximation by the ANN have been
proposed: method A, characterized by manual adjustment of the parameters that control the
training of the ANN and the generalization error, and method B, where these parameters are
selected automatically from a specified accuracy criterion. Method B is clearly superior
since it does not require a priori selection of the parameters that control the degree of
approximation used.
Both methods were applied to several benchmark problems and to a real-world problem.
This approach may save considerable computational time, ranging from 13% to about 40%.
This is particularly relevant when the evaluation of the solutions involves numerical methods
with large computational costs, such as the real optimization problem on polymer extrusion
tested here.
Figure 3: Evolution of the S-metric difference (S*, in %) and of the difference in the number
of evaluations (Eval*, in %) for the polymer extrusion problem.
REFERENCES
[1] Nain, P.K.S., Deb, K.: A Computationally Effective Multi-Objective Search and
Optimization Technique Using Coarse-to-Fine Grain Modeling. KanGAL Report No.
2002005 (2002).
[2] Jin, Y., Olhofer, M., Sendhoff, B.: A Framework for Evolutionary Optimization with
Approximate Fitness Functions. IEEE Transactions on Evolutionary Computation, 6,
pp. 481-494 (2002).
[3] Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press,
Oxford (1995).
[4] Gaspar-Cunha, A., Covas, J.A.: RPSGAe - A Multiobjective Genetic Algorithm with
Elitism: Application to Polymer Extrusion. Submitted for publication in a Lecture Notes
in Economics and Mathematical Systems volume, Springer (2002).
[5] Gaspar-Cunha, A.: Reduced Pareto Set Genetic Algorithm (RPSGAe): A Comparative
Study. The Second Workshop on Multiobjective Problem Solving from Nature
(MPSN-II), Granada, Spain (2002).
[6] Zitzler, E., Deb, K., Thiele, L.: Comparison of Multiobjective Evolutionary Algorithms:
Empirical Results. Evolutionary Computation, 8, pp. 173-195 (2000).
[7] Zitzler, E.: Evolutionary Algorithms for Multiobjective Optimization: Methods and
Applications. PhD Thesis, Swiss Federal Institute of Technology (ETH), Zurich,
Switzerland (1999).
[8] Knowles, J.D., Corne, D.W.: On Metrics for Comparing Non-Dominated Sets. In:
Proceedings of the 2002 Congress on Evolutionary Computation (CEC2002),
pp. 711-716, IEEE Press (2002).