Standard Statistical Feature analysis of Image Features for Facial Images using Principal Component Analysis and its Comparative study with Independent Component Analysis
This document compares Principal Component Analysis (PCA) and Independent Component Analysis (ICA) and their application to facial image analysis. It provides an introduction to both PCA and ICA, including their processes and differences. The document then summarizes previous literature comparing PCA and ICA, describes implementations of PCA for facial recognition on Japanese, African, and Asian datasets in MATLAB, and calculates statistical metrics for the original and recognized images. It concludes that PCA is effective for pattern recognition and dimensionality reduction in facial analysis applications.
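The PCA pipeline summarized above (mean-centering, covariance eigendecomposition, projection onto the leading components) can be sketched as follows. This is an illustrative sketch on random stand-in data, not a face dataset; in the facial case each row would be a flattened face image and the top eigenvectors would be the "eigenfaces".

```python
import numpy as np

# Minimal PCA sketch: center, form the covariance, eigendecompose,
# project onto the top-k eigenvectors. Data is a random stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))           # 20 samples, 8 features

mean = X.mean(axis=0)
Xc = X - mean                          # mean-subtraction (normalization)
cov = (Xc.T @ Xc) / (len(X) - 1)       # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov) # eigh: covariance is symmetric
order = np.argsort(eigvals)[::-1]      # sort components by variance
k = 3
components = eigvecs[:, order[:k]]     # top-k principal components
projected = Xc @ components            # reduced 20 x 3 representation
```

The same projection matrix, computed once on training faces, is reused to embed any new face before comparing it to the gallery.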
Facial expression recognition based on WAPA and OEPA FastICA (ijaia)
The face is one of the most important biometric traits owing to its uniqueness and robustness. For this reason, researchers from many diverse fields, such as security, psychology, image processing, and computer vision, have taken up research on face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features; among subspace learning techniques, PCA, ICA, and NMF are the most prominent. In this work, our main focus is on Independent Component Analysis (ICA). Among the several architectures of ICA, we use the FastICA and LS-ICA algorithms. We applied FastICA both to whole faces and to different facial parts to analyze the influence of each part on the basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed-algorithm section. Locally Salient ICA is implemented on the whole face using 8x8 windows to find the most prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing, prevalent Whole-FastICA and LS-ICA methods.
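The FastICA building block named in the abstract can be illustrated with a one-unit sketch. This is a generic fixed-point FastICA on synthetic mixed signals, not the paper's WAPA/OEPA variants; the sources, mixing matrix, and tanh nonlinearity are all illustrative choices.

```python
import numpy as np

# One-unit FastICA sketch: mix two independent non-Gaussian sources,
# whiten the mixtures, then run the fixed-point iteration with the
# tanh nonlinearity to recover one independent component.
rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))            # square-wave source
s2 = rng.uniform(-1, 1, size=t.size)   # uniform-noise source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.5], [0.5, 1.0]]) # mixing matrix
X = A @ S                              # observed mixtures

# Whitening: decorrelate and scale to unit variance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# Fixed point: w <- E[z g(w.z)] - E[g'(w.z)] w, then renormalize.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    wz = w @ Z
    g = np.tanh(wz)
    w_new = (Z * g).mean(axis=1) - (1 - g ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-9
    w = w_new
    if converged:
        break
component = w @ Z                      # one recovered independent component
```

Full FastICA repeats this with deflation (orthogonalizing each new `w` against those already found) to recover all components.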
THE IMPLICATION OF STATISTICAL ANALYSIS AND FEATURE ENGINEERING FOR MODEL BUI... (ijcseit)
Predictive analysis is the domain of advanced statistics where accuracy matters most. Matching algorithms with the right statistical implementation yields better outcomes in terms of accurate prediction on a dataset. The prolific use of algorithms leads to simpler mathematical models that require less manual calculation. Prediction is the essence of data science and machine learning applications, giving practitioners control over situations. Implementing any method requires proper feature extraction, which supports proper model building and, in turn, precision. This paper is predominantly based on different statistical analyses, including correlation significance and proper categorical data distribution via feature engineering techniques, to improve the accuracy of different machine learning models.
Feature selection using modified particle swarm optimisation for face recogni... (eSAT Journals)
Abstract
One of the major factors influencing classification accuracy is the selection of the right features. Not all features play a vital role in classification: many features in a dataset may be redundant or irrelevant, increasing computational cost and possibly reducing the classification rate. In this paper, we use DCT (Discrete Cosine Transform) coefficients as features for a face recognition application. The coefficients are selected optimally with a modified PSO algorithm, in which the choice of coefficients incorporates the average of the mean-normalized standard deviations of the various classes and gives more weight to the lower-indexed DCT coefficients. The algorithm is tested on the ORL database. A recognition rate of 97% is obtained, with about 40 percent of the features selected on average for a 10 × 10 input. The modified PSO converged in about 50 iterations. These performance figures are better than some of the results reported in the literature.
Keywords: Particle swarm optimization, Discrete cosine transform, feature extraction, feature selection, face recognition, classification rate.
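The DCT feature-extraction step described above can be sketched with only NumPy by building the orthonormal DCT-II basis by hand. The 10 × 10 input size comes from the abstract; keeping a 4 × 4 low-frequency block (and the PSO selection step being omitted) are illustrative simplifications.

```python
import numpy as np

# Low-indexed 2-D DCT coefficients as compact image features.
def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)                 # first row scaled for orthonormality
    return C

rng = np.random.default_rng(2)
img = rng.uniform(size=(10, 10))       # stand-in for a 10 x 10 face patch
C = dct_matrix(10)
coeffs = C @ img @ C.T                 # separable 2-D DCT
features = coeffs[:4, :4].ravel()      # keep low-frequency coefficients
```

Because `C` is orthonormal, `C.T @ coeffs @ C` reconstructs the image exactly, so truncating to the low-indexed block is the only lossy step.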
FACE RECOGNITION USING PRINCIPAL COMPONENT ANALYSIS WITH MEDIAN FOR NORMALIZA... (csandit)
Recognizing faces helps to name the various subjects present in an image. This work focuses on labeling faces in an image containing humans of various age groups (a heterogeneous set). Principal component analysis finds the mean of the data set and subtracts that mean from the data with the intention of normalizing it; with respect to images, normalization removes the features common to the data set. This work introduces the novel idea of using the median, another measure of central tendency, for normalization instead of the mean. The work was implemented in MATLAB. Results show that the median is the better measure for normalizing a heterogeneous data set, which tends to give rise to outliers.
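The median-versus-mean normalization idea is easy to demonstrate numerically. The data below is a toy matrix with one outlier row, not the paper's face data; it shows why the median keeps the typical samples closer to zero when outliers are present.

```python
import numpy as np

# Center a data matrix by mean vs. by median in the presence of an outlier.
X = np.array([[1.0, 2.0],
              [2.0, 3.0],
              [3.0, 4.0],
              [50.0, 60.0]])           # last row is an outlier sample

mean_centered = X - X.mean(axis=0)
median_centered = X - np.median(X, axis=0)

typical = slice(0, 3)                  # the non-outlier rows
mean_spread = np.abs(mean_centered[typical]).mean()
median_spread = np.abs(median_centered[typical]).mean()
```

The outlier drags the mean far from the typical rows, so mean-centering leaves them with large residuals, while median-centering does not.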
Implementation of Improved ID3 Algorithm to Obtain a More Optimal Decision Tree (IJERD Editor)
Decision tree learning is the discipline of creating a predictive model that maps the items in a set to their respective target values and associates them in a way that holds for every element. The concept is used in statistics, data mining, and machine learning for its simplicity and effectiveness. Among the various strategies for constructing decision trees, ID3 is one of the simplest and most widely used algorithms, but ID3 gives more importance to attributes with many values when selecting a node. This shortcoming affects the accuracy of the resulting tree. In this paper we propose an improvement to ID3 using an association function (AF). Experimental results show that the improved ID3 algorithm overcomes this shortcoming and also improves accuracy.
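The multi-valued-attribute bias that the abstract targets can be shown with standard information gain, ID3's selection criterion. The toy labels and attributes below are illustrative; note that a useless row-ID attribute achieves the same maximal gain as a genuinely predictive one, which is exactly the bias an association function is meant to correct.

```python
import numpy as np

# ID3's attribute-selection step: information gain = entropy reduction.
def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_gain(attr, labels):
    gain = entropy(labels)
    for v in np.unique(attr):
        mask = attr == v
        gain -= mask.mean() * entropy(labels[mask])   # weighted child entropy
    return gain

labels = np.array([0, 0, 1, 1, 1, 0])
weather = np.array([0, 0, 1, 1, 1, 0])   # two values, perfectly predictive
row_id = np.arange(6)                    # unique per row, useless for new data
```

Both attributes score a gain of 1.0 bit here: splitting on the unique ID trivially purifies every leaf, so plain ID3 cannot tell it apart from a real signal.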
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA (acijjournal)
A well-constructed classification model depends heavily on the input feature subsets of a dataset, which may contain redundant, irrelevant, or noisy features. The challenge can be even greater with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate such features and select the most effective ones. In the literature, metaheuristic algorithms have shown strong performance in finding optimal feature subsets. In this paper, two binary metaheuristic algorithms, the S-shaped binary Sine Cosine Algorithm (SBSCA) and the V-shaped binary Sine Cosine Algorithm (VBSCA), are proposed for feature selection from medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated for each solution by one of two transfer functions, S-shaped or V-shaped. The proposed algorithms are compared with four recent binary optimization algorithms on five medical datasets from the UCI repository. The experimental results confirm that both SCA variants enhance classification accuracy on these datasets compared to the four other algorithms.
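The S-shaped and V-shaped transfer functions mentioned in the abstract can be sketched as follows. The specific function forms (sigmoid and |tanh|) are common choices in the binary-metaheuristic literature and are assumptions here, as is the stochastic thresholding rule.

```python
import numpy as np

# Transfer functions map a continuous position value to a probability
# used to binarize each dimension of a candidate solution vector.
def s_shaped(x):
    return 1.0 / (1.0 + np.exp(-x))    # classic sigmoid

def v_shaped(x):
    return np.abs(np.tanh(x))          # symmetric "V" around zero

rng = np.random.default_rng(3)
positions = rng.normal(size=10)        # continuous search-space positions
r = rng.uniform(size=10)               # random thresholds
binary_s = (r < s_shaped(positions)).astype(int)
binary_v = (r < v_shaped(positions)).astype(int)
```

The optimizer keeps updating the continuous `positions`; only the derived binary vector decides which features are switched on for the classifier.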
Control chart pattern recognition using K-MICA clustering and neural networks (ISA Interchange)
Automatic recognition of abnormal patterns in control charts is in increasing demand in manufacturing processes. This paper presents a novel hybrid intelligent method (HIM) for recognizing the common types of control chart pattern (CCP). The proposed method comprises two main modules: a clustering module and a classifier module. In the clustering module, the input data is first clustered by a new technique, a suitable combination of the modified imperialist competitive algorithm (MICA) and the K-means algorithm. The Euclidean distance of each pattern from the determined clusters is then computed, and the classifier module determines the membership of the patterns using the computed distance. In this module, several neural networks, such as the multilayer perceptron, probabilistic neural networks, and radial basis function neural networks, are investigated, and an experimental study is used to choose the best classifier for recognizing the CCPs. Simulation results show that a high recognition accuracy, about 99.65%, is achieved.
K-Medoids Clustering Using Partitioning Around Medoids for Performing Face Re... (ijscmcj)
Face recognition is one of the most unobtrusive biometric techniques and can be used for access control as well as surveillance. Various methods for implementing face recognition have been proposed, with varying degrees of performance in different scenarios. The most common issue with facial biometric systems is their high susceptibility to variations in the face owing to factors such as changes in pose, varying illumination, differing expression, and the presence of outliers and noise. This paper explores a novel technique for face recognition that classifies face images with an unsupervised learning approach, K-Medoids clustering, using the Partitioning Around Medoids (PAM) algorithm. The results suggest increased robustness to noise and outliers in comparison to other clustering methods. The technique can therefore also be used to increase the overall robustness of a face recognition system, increasing its invariance and making it a reliably usable biometric modality.
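The outlier robustness claimed above can be illustrated with a simplified k-medoids loop in the spirit of PAM (full PAM also performs swap-based refinement, which is omitted here). The 1-D toy data and deterministic initialization are assumptions for the sketch.

```python
import numpy as np

# Simplified k-medoids: alternate between assigning points to the nearest
# medoid and re-picking, per cluster, the member minimizing the summed
# distance to its cluster. Medoids are always real data points.
def k_medoids(X, k, iters=10):
    D = np.abs(X[:, None] - X[None, :])             # pairwise distances
    medoids = np.linspace(0, len(X) - 1, k).astype(int)
    for _ in range(iters):
        assign = np.argmin(D[:, medoids], axis=1)   # nearest-medoid labels
        new = medoids.copy()
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members):
                costs = D[np.ix_(members, members)].sum(axis=1)
                new[c] = members[np.argmin(costs)]  # cheapest member wins
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, assign

X = np.array([1.0, 1.2, 0.9, 8.0, 8.1, 100.0])      # 100.0 is an outlier
medoids, assign = k_medoids(X, 2)
```

Unlike a k-means centroid, a medoid cannot be dragged off the data by the outlier, which is the robustness property the abstract leans on.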
Gesture Recognition using Principal Component Analysis & Viola-Jones Algorithm (IJMER)
Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey of gesture recognition with particular emphasis on hand gestures and facial expressions, including applications of the wavelet transform and principal component analysis for face and hand gesture recognition on digital images.
Solving linear equations from an image using ANN (eSAT Journals)
Abstract
Optical character recognition (OCR) has great impact in image processing applications. This paper combines OCR with a feed-forward artificial neural network to solve mathematical linear equations. We use blob analysis and feature extraction to extract the individual characters from a captured image containing mathematical equations. We construct a 39-character set comprising digits, letters, and operators, and train on this set using a supervised learning rule. If the image satisfies the linear-equation conditions, the proposed algorithm solves the equation and generates the output. This paper aims to raise the recognition rate above 87%. The results achieved from training and testing the letter-recognition network are satisfactory.
Keywords: Artificial Neural Network, Linear Equation, Recognition Rate, Optical Character Recognition.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Fuzzy logic applications for data acquisition systems of practical measurement (IJECEIAES)
In laboratory work, measurement errors, misreadings of the measuring devices, overly similar experimental data, and a lack of understanding of the practicum materials are often found, leading to inaccurate and invalid data. As an alternative solution, fuzzy logic is applied to a data acquisition system using a web server. This research focuses on the design of data acquisition systems with the target of reducing the error rate when measuring experimental data in the laboratory. Measurement on the laboratory practice module is done by capturing the analog data resulting from the measurement; the data are then converted to digital via an Arduino and stored on the server. To obtain valid data, the server processes the data using the fuzzy logic method, and the valid data are integrated into a web server so that they can be accessed as needed. The results show that the fuzzy-logic-based data acquisition system can recommend measurement results in lab work based on the membership degree and truth value. Fuzzy logic selects measured data with a maximum error percentage of 5% and selects the measurement result with the minimum error rate.
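The 5% error threshold comes from the abstract above; the triangular membership function and reference values below are assumptions used to sketch how a fuzzy membership degree can grade and filter measurements.

```python
# Grade how "valid" a reading is with a triangular membership function,
# and reject readings whose relative error exceeds 5%.
def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

reference = 10.0                        # expected value for this measurement
readings = [9.8, 10.1, 11.2, 10.4]      # illustrative sensor readings

accepted = []
for r in readings:
    error = abs(r - reference) / reference                 # relative error
    degree = triangular(r, reference * 0.95, reference, reference * 1.05)
    if error <= 0.05:                                      # max 5% error rule
        accepted.append((r, degree))
```

Readings inside the 5% band are kept along with their membership degree, which the server could then use to rank or recommend results.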
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
Independent Component Analysis of Edge Information for Face Recognition (CSCJournals)
In this paper we address the problem of face recognition using edge information as independent components. The edge information is obtained using the Laplacian of Gaussian (LoG) and Canny edge detection methods; preprocessing is then done with Principal Component Analysis (PCA) before applying the Independent Component Analysis (ICA) algorithm to the training images. The independent components obtained by the ICA algorithm are used as feature vectors for classification, and Euclidean distance and Mahalanobis distance classifiers are used for testing. The algorithm is tested on two face-image databases with variation in illumination and facial poses up to a 180-degree rotation angle.
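The two distance classifiers named in the abstract differ in how they treat feature scale. The toy anisotropic "class" below is an illustrative assumption: Euclidean distance treats all feature directions equally, while Mahalanobis distance rescales by the training covariance, so spread-out directions count less.

```python
import numpy as np

# Euclidean vs. Mahalanobis distance to a class with unequal spread.
rng = np.random.default_rng(4)
train = rng.normal(size=(200, 2)) * np.array([5.0, 0.5])  # anisotropic class
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train.T))

def euclidean(x):
    return float(np.linalg.norm(x - mean))

def mahalanobis(x):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

a = np.array([4.0, 0.0])   # far along the high-variance axis
b = np.array([0.0, 4.0])   # equally far along the low-variance axis
```

Both points are about equally far in the Euclidean sense, but Mahalanobis distance flags `b` as far more atypical of the class, which is why it often classifies better on correlated features.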
Comparison on PCA, ICA and LDA in Face Recognition (ijdmtaiir)
Face recognition is used in a wide range of applications. In recent years, it has become one of the most successful applications of image analysis and understanding. Different statistical methods and research groups have reported contradictory results when comparing the principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA) algorithms proposed in recent years. The goal of this paper is to compare and analyze the three algorithms and conclude which is best. The FERET dataset is used for consistency.
WCTFR: Wrapping Curvelet Transform Based Face Recognition (csandit)
The recognition of a person based on biological features is efficient compared with traditional knowledge-based recognition systems. In this paper we propose Wrapping Curvelet Transform based Face Recognition (WCTFR). The Wrapping Curvelet Transform (WCT) is applied to the face images of the database and to the test images to derive coefficients. The obtained coefficient matrix is rearranged to form the WCT features of each image. The test-image WCT features are compared with those of the database images using Euclidean Distance (ED) to compute the Equal Error Rate (EER) and True Success Rate (TSR). The proposed algorithm with WCT performs better than the Curvelet Transform algorithms used in [1], [10], and [11].
An Approach to Face Recognition Using Feed Forward Neural Network (Editor IJCATR)
Abstract: Many approaches have been proposed for face recognition, but major constraints such as illumination, lighting, and pose result in poor recognition rates when taken into consideration. We propose a method to improve the recognition rate of a face recognition system using measures such as homogeneity, energy, covariance, contrast, asymmetry, correlation, mean, standard deviation, entropy, and kurtosis to extract the facial features. The extracted features are then trained with a feed-forward back-propagation neural network used for classification to render better results.
Statistical theory is a branch of mathematics and statistics that provides the foundation for understanding and working with data, making inferences, and drawing conclusions from observed phenomena. It encompasses a wide range of concepts, principles, and techniques for analyzing and interpreting data in a systematic and rigorous manner. Statistical theory is fundamental to various fields, including science, social science, economics, engineering, and more.
Segmentation and recognition of handwritten digit numeral string using a mult... (ijfcstjournal)
In this paper, a Multi-Layer Perceptron (MLP) neural network model is proposed for recognizing unconstrained offline handwritten numeral strings. The numeral strings are segmented, and isolated numerals are obtained using a connected component labeling (CCL) algorithm. The structural part of the models is modeled with a multilayer perceptron neural network. The paper also presents a new technique to remove slope and slant from a handwritten numeral string, normalize the size of the text images, and classify with supervised learning methods. Experimental results on a database of 102 numeral-string patterns written by 3 different people show that a recognition rate of 99.7% is obtained on the independent digits contained in the numeral strings, including both skewed and slanted data.
Abstract: Face recognition is a form of computer vision that uses faces to identify a person or verify a person's claimed identity. In this paper, a neural-based algorithm is presented to detect frontal views of faces. The dimensionality of the input face image is reduced by principal component analysis, and classification is performed by a back-propagation neural network. The method is robust on a dataset of 300 face images and performs well, with an 80-90% recognition rate.
Implementation of Face Recognition in Cloud Vision Using Eigen Faces (IJERA Editor)
Cloud computing comes in several different forms, and this article documents how it can be delivered as a service. The face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper discusses a methodology for face recognition based on an information-theoretic approach to coding and decoding the face image. The proposed system connects two stages: feature extraction using principal component analysis, and recognition using a back-propagation network. The paper also discusses the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA, with initial performance results. The challenge lies in how to partition tasks from mobile devices to the cloud and distribute the compute load among cloud servers to minimize response time, given diverse communication latencies and server compute powers.
Similar to Standard Statistical Feature analysis of Image Features for Facial Images using Principal Component Analysis and its Comparative study with Independent Component Analysis (20)
Software Metrics, Project Management and EstimationBulbul Agrawal
Software metrics are used to measure the quality of software products and processes. In the process domain, metrics are used to monitor and control the software development process. Metrics such as defect density, code complexity, and productivity can help identify areas that need improvement. In the project domain, metrics are used to measure the progress and success of a software project. Metrics such as schedule variance, cost variance, and earned value can help project managers make informed decisions.
Design and Analysis of Algorithm help to design the algorithms for solving different types of problems in Computer Science. It also helps to design and analyze the logic of how the program will work before developing the actual code for a program.
Age Estimation And Gender Prediction Using Convolutional Neural Network.pptxBulbul Agrawal
Identifying the attributes of humans such as age, gender, ethnicity, emotions etc. using computer vision have been given increased attention in recent years. Such attributes can play an important role in many applications such as human-computer interaction, surveillance, searching, biometrics, sale of product, entertainment, and cosmetology. Generally, it is possible to classify human life into one of four age groups: Children, Young, Adult, and Old. The image of a person’s face exhibits many variations which may affect the ability of a computer vision system to recognize the gender. In this dissertation, we evaluate the CNN architecture along with the PCA for gaining good performance.
Image segmentation is an important image processing step, and it is used everywhere if we want to analyze what is inside the image. Image segmentation, basically provide the meaningful objects of the image.
Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
Here are some useful examples and methods of image enhancement:
Filtering with morphological operators, Histogram equalization, Noise removal using a Wiener filter, Linear contrast adjustment, Median filtering, Unsharp mask filtering, Contrast-limited adaptive histogram equalization (CLAHE). Decorrelation stretch
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
The Art of the Pitch: WordPress Relationships and Sales
Standard Statistical Feature analysis of Image Features for Facial Images using Principal Component Analysis and its Comparative study with Independent Component Analysis
1. Standard Statistical Feature analysis of Image
Features for Facial Images using Principal
Component Analysis and its Comparative study
with Independent Component Analysis
Madhav Institute of Technology and Science
Gwalior (M.P.)
By: Bulbul Agrawal
3. Introduction
• Two important statistical techniques are explored here, as both are useful in many fields. One of the most interesting applications, namely face recognition, has become a hot topic in computer vision. Similarly, dimensionality reduction, compression, signal separation, image filtering, and many other statistical approaches are growing in popularity and usefulness day by day.
• In computer vision, PCA is a popular method applied mainly to face recognition, whereas ICA originated in the separation of mixed audio signals into independent sources [10].
• The literature on the subject is conflicting: some assert that PCA outperforms ICA, others claim that ICA outperforms PCA, and some find no statistically significant difference in their performance [8]. Thus, this paper compares the PCA technique with the newer ICA technique.
4. Literature Review
• Bing Luo et al. discuss PCA and ICA, compare them in the face recognition application, and summarize the differences between them. They use PCA derived from eigenfaces and ICA derived from linear representations of non-Gaussian data.
• Bruce A. Draper et al. compare principal component analysis (PCA) and independent component analysis (ICA) in the context of a baseline face recognition system, a comparison motivated by contradictory claims in the literature. The relative performance is measured as a function of the task statement, the ICA algorithm, and the PCA subspace distance metric.
• Zaid Abdi Alkareem Alyasseri discusses two algorithms for recognizing a face, namely Eigenface and ICA, shows how the error rate between the original face and the recognized face has been improved, and presents the results in various graphs.
5. Principal Component Analysis
• The main idea of principal component analysis (PCA) is to reduce the
dimensionality of a data set consisting of many variables correlated with each other.
• The same is done by transforming the variables to a new set of variables, which are
known as the principal components (or simply, the PCs)[2] and are orthogonal.
• The principal components are the eigenvectors of a covariance matrix, and hence
they are orthogonal.
• The axes of the PCA space point in the directions of maximum variance of the given data points [9].
6. Process of computing the PCA space
A. Acquire some data
B. Mean-center the data and calculate its covariance matrix
C. Find the eigenvalues and eigenvectors of the covariance matrix
D. Plot the eigenvectors / principal components over the scaled data
Fig 1: Process of computing PCs
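The four steps above can be sketched in Python with NumPy. This is a minimal illustration on synthetic 2-D data, not the MATLAB implementation used in the paper; the dataset and the number of points are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A. Acquire some data (here: 100 correlated 2-D points).
data = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.8], [0.8, 1.0]])

# B. Mean-center the data and compute its covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)

# C. Eigen-decompose the covariance matrix; the eigenvectors are the PCs.
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort components by decreasing eigenvalue (variance explained).
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# D. Project the data onto the principal components.
scores = centered @ eigvecs
print(eigvals)  # variance captured along each PC
```

Because the eigenvectors of the covariance matrix are orthonormal, the projected scores are decorrelated: their covariance matrix is diagonal, with the eigenvalues on the diagonal.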
7. Independent Component Analysis
• In ICA, not only second-order and higher-order statistical characteristics are considered; the basis vectors decomposed from face images by ICA are also more localized in the distribution space than those obtained by PCA [12].
• Localized characteristics are favorable for face recognition, because human faces are non-rigid bodies and because localized characteristics are not easily influenced by changes in facial expression, location, or partial occlusion [11].
• Independent Component Analysis is commonly used to solve the Blind Source Separation / Cocktail Party Problem [8].
ICA Ambiguities:
• Two ambiguities commonly arise in ICA: the variances of the independent components cannot be determined, and the order of the independent components cannot be identified.
8. Example of ICA
• Imagine that you are a weaver with a loom of colorful strings. Each string represents a unique pattern in the data. With actual data, each of these strings would be a vector of numbers that can be fit with a linear equation. As we see the strings in fig 2, they are well organized.
• Unfortunately, data collected in the real world does not come to us neat and organized. Our unique strings get mixed up with other strings and with random signals such as noise, as shown in fig 3: a monkey has come along and mixed up our strings. How do we untangle them?
• If we knew something special about each string, maybe a feature like color, we could unmix them manually. However, if we are dealing with a huge dataset and have no clue about any special features, we are powerless. This is where ICA comes in. We start with our mixed data and assume that
1) we have mixed-up data (our loom) that is
2) comprised of independent signals.
Fig 2: Original data
Fig 3: Mixed-up data
9. • We start with this mixed-up data, X, and we know that it was generated by the monkey applying some sequence of movements (the "monkey madness") to the original strings. We call the original, unmixed data S, and the series of transformations that the monkey applies to it the mixing matrix A. This matrix consists of vectors of numbers that, when multiplied with S, produce the observed data X:
X = A × S
• To recover our original strings from the mixed ones, we just need to solve this equation for S. We know X, so we only need to figure out the inverse of A, normally referred to as W or the un-mixing matrix. We choose the numbers in this matrix so as to maximize the probability of our data:
S = A⁻¹ × X
Fig 4: Original strings (original data) transformed by the mixing matrix A ("monkey madness") into mixed strings
Fig 5: Mixed strings (observed data) transformed by the unmixing matrix W (A⁻¹) back into the original strings (original data)
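The algebra on this slide can be demonstrated with a toy NumPy example. Note the hedge: in real ICA the mixing matrix A is unknown and W must be estimated (e.g. by FastICA); here we simply invert a known, made-up A to show that X = A·S and S = A⁻¹·X are consistent. The two source signals and the matrix entries are assumptions chosen for the illustration.

```python
import numpy as np

# S: two independent source signals ("original strings"), one per row.
t = np.linspace(0, 1, 200)
S = np.vstack([np.sin(2 * np.pi * 5 * t),            # a sinusoid
               np.sign(np.sin(2 * np.pi * 3 * t))])  # a square wave

# A: the mixing matrix (the "monkey madness").
A = np.array([[1.0, 0.5],
              [0.4, 1.2]])

X = A @ S                  # observed mixed data: X = A * S
W = np.linalg.inv(A)       # un-mixing matrix: W = A^-1
S_recovered = W @ X        # recovered sources: S = A^-1 * X

print(np.allclose(S, S_recovered))  # True
```

In practice W cannot be obtained this way, because A is never observed; estimating W from X alone, up to the scale and ordering ambiguities noted on slide 7, is exactly the blind source separation problem that ICA solves.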
10. Comparison between PCA & ICA
S.No. | Principal Component Analysis (PCA) | Independent Component Analysis (ICA)
1. | In the image database, PCA relies only on pairwise relationships between pixels. | ICA detects the components from multivariate data.
2. | PCA captures statistical variation only up to second-order statistics. | ICA can capture details up to higher-order statistics.
3. | PCA is useful for removing correlations, but it cannot remove higher-order dependence. | ICA removes both correlations and higher-order dependence.
4. | It works with the Gaussian model. | ICA works with the non-Gaussian model.
5. | Based on their eigenvalues, some components in PCA are given more importance than others. | In ICA, all of the components are of equal importance.
6. | It prefers orthogonal vectors. | Non-orthogonal vectors are used.
7. | PCA performance depends on the task statement, the subspace distance metric, and the number of subspace dimensions retained. | ICA performance depends on the task, the algorithm used to approximate ICA, and the number of subspace dimensions retained.
8. | For compression purposes, PCA uses a low-rank matrix. | It uses full-rank matrix factorization to eliminate…
Table 1: Comparison between PCA & ICA [7][12]
11. Applications of PCA & ICA
Applications of PCA and ICA are the following:
PCA:
• Dimensionality reduction
• Image compression
• Medical imaging
• Gene expression analysis
• Data classification
• Noise reduction
ICA:
• Artifacts separation in MEG data
• ICA in text mining
• ICA for CDMA communication
• Searching for hidden factors in financial data
• Noise reduction in natural images
12. Proposed Work
• In this paper, we implement a facial recognition system for different datasets (Japanese, African, and Asian) using principal component analysis with the Euclidean distance.
• In our work, the equivalent tested image is obtained as the output for the corresponding original input image, and various statistical parameters are computed for both the facial input image and the recognized equivalent image.
• All the parameters for the different datasets have been calculated in MATLAB 2018a.
• In addition to the above points, the two techniques (PCA/ICA) are compared and their applications in various fields are discussed.
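The PCA-plus-Euclidean-distance matching described above can be sketched as follows. This is a hedged illustration, not the paper's MATLAB code: random arrays stand in for the flattened face images of the Japanese/African/Asian datasets, and the helper `recognize` and the choice of 10 retained eigenfaces are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# 20 stand-in "face images" of size 64x64, flattened to row vectors.
train = rng.normal(size=(20, 64 * 64))

# Build the PCA (eigenface) space from the mean-centered training set.
mean_face = train.mean(axis=0)
centered = train - mean_face
# SVD of the centered data matrix yields the principal axes directly.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10                       # number of eigenfaces retained (assumed)
eigenfaces = Vt[:k]

# Project every training image into the k-dimensional PCA space.
train_proj = centered @ eigenfaces.T

def recognize(test_image):
    """Return the index of the nearest training image in PCA space."""
    proj = (test_image - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(train_proj - proj, axis=1)  # Euclidean distance
    return int(np.argmin(dists))

# A test image identical to training image 7 matches itself.
print(recognize(train[7]))  # 7
```

The recognized "equivalent image" for a test input is then simply the training image at the returned index, for which the statistical parameters on the following slides can be computed.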
13. Implementation of face recognition system by PCA using MATLAB
Analysis on Japanese Dataset
S.No. | Parameters | Test Image | Equivalent Image
1. | Entropy | 7.3298 | 7.2780
2. | Standard Deviation | 78.4347 | 76.4705
3. | Mean | 127.9513 | 121.8242
4. | Median | 152 | 142
5. | Variance | 8.8563 | 8.7447
6. | Mode | 2 | 2
7. | Correlation | 0.9514 |
8. | SSIM | 0.5976 |
Fig 6: Analysis of face recognition system for Japanese dataset
Table 2: Calculation of different parameters
14. Implementation of face recognition system by PCA using MATLAB
Analysis on Africans Dataset
S.No. | Parameters | Test Image | Equivalent Image
1. | Entropy | 4.8991 | 4.9450
2. | Standard Deviation | 110.4553 | 109.0322
3. | Mean | 139.2155 | 142.5504
4. | Median | 110 | 129
5. | Variance | 10.5097 | 10.4418
6. | Mode | 255 | 255
7. | Correlation | 0.8993 |
8. | SSIM | 0.6804 |
Fig 7: Analysis of face recognition system for Africans dataset
Table 3: Calculation of different parameters
15. Implementation of face recognition system by PCA using MATLAB
Analysis on Asian Dataset
S.No. | Parameters | Test Image | Equivalent Image
1. | Entropy | 4.6686 | 4.8430
2. | Standard Deviation | 97.0983 | 98.4387
3. | Mean | 174.5845 | 169.2308
4. | Median | 255 | 252
5. | Variance | 9.8538 | 9.9216
6. | Mode | 255 | 255
7. | Correlation | 0.9075 |
8. | SSIM | 0.7840 |
Fig 8: Analysis of face recognition system for Asian dataset
Table 4: Calculation of different parameters
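The parameters in Tables 2–4 can be computed with NumPy as sketched below. This is an assumed re-implementation, not the paper's MATLAB code (which would use functions such as entropy, std2, corr2, and ssim); a synthetic 8-bit image and a perturbed copy stand in for the test and equivalent images, and SSIM is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-ins for the test image and its recognized equivalent.
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
recognized = np.clip(img.astype(int) + rng.integers(-5, 6, img.shape), 0, 255)

def entropy(image):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist = np.bincount(image.ravel().astype(np.uint8), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

mean = img.mean()
median = float(np.median(img))
std = img.std()
variance = img.var()
mode = int(np.bincount(img.ravel()).argmax())   # most frequent grey level
# Pearson correlation between test and equivalent image (analogue of corr2).
corr = float(np.corrcoef(img.ravel(), recognized.ravel())[0, 1])

print(entropy(img), mean, median, std, mode, corr)
```

Entropy is bounded by 8 bits for an 8-bit image, and the correlation row in the tables has a single value per dataset because it measures the agreement between the test image and its equivalent image rather than a property of either image alone.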
16. Conclusion
• We have implemented the PCA technique on different datasets, namely African, Japanese, and Asian, and have calculated various parameters such as entropy, standard deviation, mean, median, variance, mode, and correlation.
• We have also compared PCA and ICA on several criteria. From our study we conclude that face recognition based on PCA is a classical and fruitful technique, and that PCA justifies its strength in pattern recognition and dimensionality reduction.
17. References
1. Swati Goel, Akhilesh Verma, Savita Goel, and Komal Juneja. "ICA in Image Processing: A Survey." IEEE International Conference on Computational Intelligence & Communication Technology (2015).
2. Shlens, Jonathon. "A tutorial on principal component analysis." arXiv preprint arXiv:1404.1100 (2014).
3. Martis, Roshan Joy, U. Rajendra Acharya, and Lim Choo Min. "ECG beat classification using PCA, LDA, ICA and discrete wavelet transform." Biomedical Signal Processing and Control 8.5 (2013): 437-448.
4. Chawla, M. P. S. "PCA and ICA processing methods for removal of artifacts and noise in electrocardiograms: A survey and comparison." Applied Soft Computing 11.2 (2011): 2216-2226.
5. Gupta, Varun, et al. "An introduction to principal component analysis and its importance in biomedical signal processing." International Conference on Life Science and Technology, IPCBEE. Vol. 3. 2011.
6. Comon, Pierre, and Christian Jutten, eds. Handbook of Blind Source Separation: Independent Component Analysis and Applications. Academic Press, 2010.
7. Bing Luo, Yu-Jie Hao, Wei-Hua Zhang, and Zhi-Shen Liu. "Comparison of PCA and ICA in Face Recognition."
8. Simon Haykin and Zhe Chen. "The Cocktail Party Problem." Neural Computation 17, 1875-1902 (2005).
9. Yang, Jian, et al. "Two-dimensional PCA: a new approach to appearance-based face representation and recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence 26.1 (2004): 131-137.
10. Draper, Bruce A., et al. "Recognizing faces with PCA and ICA." Computer Vision and Image Understanding 91.1-2 (2003): 115-137.
11. Ziga Zaplotnik. "Independent Component Analysis." April 2014.
12. Baek, K., Draper, B.A., Beveridge, J.R., and She, K. "PCA vs. ICA: A Comparison on the FERET Data Set." In JCIS (pp. 824-827), March 2002.
18. CITATION OF THIS PAPER
Agrawal, Bulbul, Shradha Dubey, and Manish Dixit. "Standard Statistical
Feature Analysis of Image Features for Facial Images Using Principal
Component Analysis and Its Comparative Study with Independent Component
Analysis." International Conference on Intelligent Computing and Smart
Communication 2019. Springer, Singapore, 2020.