Meta-learned Confidence for Few-shot Learning - KIMMINHA3
Meta-learned Confidence for Few-shot Learning was presented at CVPR in 2020.
Few-shot learning is an important challenge under data scarcity.
When plenty of unlabeled data is available alongside the scarce labeled data, common approaches include:
a) leveraging a nearest-neighbor graph
b) using predicted soft or hard labels on unlabeled samples to update the class prototypes.
However, the model's confidence in those predicted labels may be unreliable, which can lead to incorrect predictions.
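The prototype-update idea can be sketched in a few lines of PyTorch: weight each unlabeled sample by the model's soft confidence and fold it back into the class prototypes. This is a minimal illustration of the general technique, not the authors' exact method; the temperature, blending weight, and iteration count are assumptions.

```python
import torch
import torch.nn.functional as F

def refine_prototypes(prototypes, query_embeddings, temperature=1.0, n_iters=1):
    """Confidence-weighted prototype refinement (minimal sketch, not the
    authors' exact method).

    prototypes:       (N, D) class prototypes from the labeled support set
    query_embeddings: (Q, D) embeddings of unlabeled query samples
    """
    for _ in range(n_iters):
        # Soft labels: per-class confidence from (negative) Euclidean distance.
        dists = torch.cdist(query_embeddings, prototypes)      # (Q, N)
        conf = F.softmax(-dists / temperature, dim=1)           # (Q, N)
        # Confidence-weighted mean of the unlabeled embeddings per class,
        # blended with the original prototypes.
        weighted = conf.t() @ query_embeddings                  # (N, D)
        counts = conf.sum(dim=0, keepdim=True).t()              # (N, 1)
        prototypes = 0.5 * prototypes + 0.5 * weighted / (counts + 1e-8)
    return prototypes

# Example: 5-way task, 15 unlabeled queries, 64-dimensional features.
new_protos = refine_prototypes(torch.randn(5, 64), torch.randn(15, 64))
```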
Classification of Grasp Patterns using sEMG - Priyanka Reddy
This document summarizes research on classifying grasp patterns using surface electromyography (sEMG) data. The goal was to build a classification model that identifies spherical and tip grasps. A male subject performed each grasp type 100 times daily for 3 days, providing 600 total instances. A hidden Markov model was used to classify the grasps, with 90% of data for training and 10% for testing. The model achieved 73.3% overall accuracy, with higher accuracy for spherical grasps. Suggestions for improving the model included adding more features and stratifying the training/test sets.
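One common way to build such a classifier is to fit one HMM per grasp class and label a new sEMG sequence with the class whose model scores it highest. The sketch below assumes the hmmlearn package with Gaussian emissions; the number of hidden states and other settings are illustrative rather than taken from the study.

```python
import numpy as np
from hmmlearn import hmm  # assumes the hmmlearn package is installed

def train_grasp_hmms(sequences_by_class, n_states=4):
    """Train one Gaussian HMM per grasp class (sketch; the study's exact
    model settings are not given in the summary)."""
    models = {}
    for label, sequences in sequences_by_class.items():
        X = np.vstack(sequences)                 # stack all sEMG windows
        lengths = [len(s) for s in sequences]    # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_grasp(models, sequence):
    """Assign the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```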
WEAKLY SUPERVISED FINE-GRAINED CATEGORIZATION WITH PART-BASED IMAGE REPRESENT... - Nexgen Technology
The document summarizes research on using machine learning to predict patient comorbidities from discharge summaries. It describes training rule learning classifiers on annotated examples and evaluating their performance. The best models were rule learners like JRip and J48, achieving high precision but lower recall. Rules learned for conditions like asthma, depression, and obesity were relatively simple but descriptive of the data.
Presentation based on "Hierarchical Bayesian Models of Subtask Learning. Angl..." - Jeromy Anglim
Citation Information:
Anglim, J., & Wynton, S. K. (2015). Hierarchical Bayesian Models of Subtask Learning. Journal of Experimental Psychology. Learning, Memory, and Cognition. Online First. http://dx.doi.org/10.1037/xlm0000103
Abstract: In this talk I present some recent work on the question of how to understand learning of complex computer-based tasks in terms of component learning processes. The research tests and examines what Lee and Anderson (2001) labelled the "decomposition hypothesis": i.e., that learning complex tasks can be understood as the result of learning many simpler subtasks. To test these ideas, we get participants to practice computer-based tasks where all mouse clicks and key presses are logged. We then extract a range of measures of strategy use, subtask performance, and overall task performance. We then use Bayesian hierarchical methods to test models of how strategy use and performance change with practice at the individual level. Overall, these models provide a more nuanced representation of how complex tasks can be decomposed in terms of simpler learning mechanisms. The research also presents a case study of how Bayesian methods can be used to yield novel insights into well-established psychological questions.
Bio: Dr Jeromy Anglim is a lecturer at Deakin University in Melbourne. He completed his PhD at University of Melbourne on mathematical models of learning, and his Post Doc in the Melbourne Business School on applications of Bayesian hierarchical models to psychology. His research interests are at the interface of statistics and industrial / organisational psychology with particular interest in skill acquisition, performance, individual differences, Bayesian data analysis, psychometrics, and selection and recruitment. He has a particular interest in refining and promoting methods for open and reproducible research in psychology. For further information go to http://jeromyanglim.blogspot.com
This document describes a machine learning approach to classify functional magnetic resonance imaging (fMRI) scans based on the image a subject was observing. The researcher preprocessed fMRI data from 1452 brain scans across 9 categories using masks, detrending, and z-scoring. Various machine learning techniques were tested, with principal component analysis (PCA) and support vector machines (SVM) achieving the best average accuracy of 92.1% at classifying scans. Areas of future work include classifying scans across multiple subjects and exploring misclassifications between labels.
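A pipeline of the kind described (PCA for dimensionality reduction followed by an SVM) can be written compactly in scikit-learn. The placeholder data, voxel count, component count, and kernel below are assumptions for illustration, not the study's actual settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder for preprocessed scans (masked, detrended, z-scored):
# 1452 scans x 2000 voxels, 9 stimulus categories.
rng = np.random.default_rng(0)
X = rng.standard_normal((1452, 2000))
y = rng.integers(0, 9, size=1452)

clf = make_pipeline(PCA(n_components=100), SVC(kernel="linear"))
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```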
This paper proposes a structured methodology following a full vulnerability analysis of the general biometric model outlined by Mansfield and Wayman (2002). Based on this analysis, a new multidimensional paradigm named the Biometric Architecture & System Security (BASS) model is proposed, which adds comprehensive security and management layers to the existing biometric model.
Abstract—Biometric systems are increasingly deployed in networked environment, and issues related to interoperability are bound to arise as single vendor, monolithic architectures become less desirable. Interoperability issues affect every subsystem of the biometric system, and a statistical framework to evaluate interoperability is proposed. The framework was applied to the acquisition subsystem for a fingerprint recognition system and the results were evaluated using the framework. Fingerprints were collected from 100 subjects on 6 fingerprint sensors. The results show that performance of interoperable fingerprint datasets is not easily predictable and the proposed framework can aid in removing unpredictability to some degree.
This study evaluated the performance of a commercially available face recognition algorithm for the verification of an individual's identity across three illumination levels. The lack of research related to lighting conditions and face recognition was the driver for this evaluation. This evaluation examined the influence of variations in illumination levels on the performance of a face recognition algorithm, specifically with respect to factors of: age, gender, ethnicity, facial characteristics, and facial obstructions.
Biometric research centers on five fundamental areas: data collection, signal processing, decision-making, transmission, and storage. Traditionally, research occurred in subsets of the discipline in separate departments within universities, such as algorithm development in computer science and speech and computer vision in electrical engineering. In the fall semester of 2002, a class in Biometric Technology and Applications was developed to encourage cross-disciplinary education, where all areas of the biometric model would come together and address issues such as research methodologies and the implementation of biometrics in society at large. The course has since been modified to accommodate a wider audience and to incorporate graduate student research, which is the foundation for modular mini-courses tailored to specific majors and issues. Having an interdisciplinary group of students better mirrors the makeup of jobs involved in biometrics, such as management, marketing, or research. The challenge lies in providing a course that accounts for these diverse needs.
The document compares the quality of face images from three datasets - a legacy IDOC criminal database, a newer electronic IDOC database, and the FERET standard database. It analyzes the images using 28 quality metrics related to factors like scene, photography, digital attributes, and algorithms. The results show that the legacy IDOC images scored higher on most metrics than the electronic IDOC images, but the FERET images scored highest overall. The conclusions suggest room for improvement in the operational IDOC data quality and the need for algorithm developers to adjust to real-world image variability.
This document discusses key aspects of study design, data collection, statistical analysis, and reasoning in biomedical research. It covers observational studies, experiments, data registration and validation, effect estimation and bias evaluation. Statistical analysis includes data description, interpretation of outcomes in light of study limitations, and multiplicity issues. Recent developments in different research areas include longitudinal and multilevel analysis, causality models, and registration guidelines.
This document analyzes different model validation techniques (MVTs) used to estimate the performance of defect prediction models. It finds that out-of-sample bootstrap validation produces the least biased performance estimates while ordinary bootstrap validation produces the most stable estimates. Considering both bias and variance, techniques like ordinary bootstrap and out-of-sample bootstrap perform best by providing a balance of low bias and variance in their performance estimates.
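Out-of-sample bootstrap validation, as evaluated in the paper, trains on a bootstrap resample and scores on the rows left out of it. A generic sketch follows; the classifier, metric, and demo data are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def out_of_sample_bootstrap(X, y, n_boot=100, seed=0):
    """Out-of-sample bootstrap: train on a bootstrap resample, score on the
    rows not drawn into it (generic sketch, not the paper's exact setup)."""
    rng = np.random.default_rng(seed)
    n, aucs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)              # sample rows with replacement
        oob = np.setdiff1d(np.arange(n), idx)         # held-out ("out-of-bag") rows
        if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
            continue
        model = RandomForestClassifier(random_state=0).fit(X[idx], y[idx])
        aucs.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))
    return float(np.mean(aucs))

# Demo on synthetic binary "defect" data.
rng = np.random.default_rng(1)
X_demo = rng.standard_normal((200, 8))
y_demo = (X_demo[:, 0] + rng.standard_normal(200) > 0).astype(int)
print("out-of-sample bootstrap AUC:", round(out_of_sample_bootstrap(X_demo, y_demo, n_boot=20), 3))
```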
Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size - nQuery
Title: Bayesian Assurance: Formalizing Sensitivity Analysis For Sample Size
Duration: 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Watch Here: http://bit.ly/2ndRG4B
In this webinar you’ll learn about:
Benefits of Sensitivity Analysis: What does the researcher gain by conducting a sensitivity analysis?
Why isn't Sensitivity Analysis formalized: Why does sensitivity analysis still lack the type of formalized rules and grounding to make it a routine part of sample size determination in every field?
How Bayesian Assurance works: Bayesian Assurance provides key contextual information on what is likely to happen over the full range of possible values, rather than at the small number of fixed points used in a sensitivity analysis (a minimal sketch follows this list)
Elicitation & SHELF: How expert opinion is elicited and then how to integrate these opinions with each other plus prior data using the Sheffield Elicitation Framework (SHELF)
Why use it with Frequentist or Bayesian analysis: How and why these methods can be used for studies whose final analysis will be Frequentist or Bayesian
Plus more
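Assurance is often computed by Monte Carlo: draw the true effect from its prior, compute the classical power at that effect, and average. The sketch below uses a normal prior and a normal-approximation two-sample test; all parameter values are illustrative and this is not nQuery's implementation.

```python
import numpy as np
from scipy import stats

def assurance_two_sample(n_per_arm, prior_mean, prior_sd, sigma=1.0,
                         alpha=0.05, n_sims=20000, seed=0):
    """Monte Carlo Bayesian assurance: average classical power over a normal
    prior on the true effect (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    deltas = rng.normal(prior_mean, prior_sd, size=n_sims)  # draws from the prior
    se = sigma * np.sqrt(2.0 / n_per_arm)                   # SE of the mean difference
    z_crit = stats.norm.ppf(1 - alpha / 2)
    power = stats.norm.cdf(np.abs(deltas) / se - z_crit)    # power at each drawn effect
    return power.mean()                                     # average power = assurance

print(assurance_two_sample(n_per_arm=50, prior_mean=0.4, prior_sd=0.2))
```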
This document summarizes a research paper that evaluated the effect of feature reduction using principal component analysis (PCA) on sentiment analysis of online product reviews. The researchers developed two models - Model I used unigram features directly, while Model II reduced the features to the top 57 principal components. Both support vector machines and naive Bayes classifiers showed improved accuracy when trained on the reduced feature set of Model II compared to the full feature set of Model I. Receiver operating characteristic curves also indicated better classification performance from both classifiers when using the reduced features. The results provide promising evidence that PCA can be an effective feature reduction method for sentiment analysis tasks.
This document summarizes a research paper that examines the effect of feature reduction in sentiment analysis of online reviews. It uses principal component analysis to reduce the number of features (product attributes) from a dataset of 500 camera reviews labeled as positive or negative. Two models are developed - one using the original set of 95 product attributes, and one using the reduced set. Support vector machines and naive Bayes classifiers are applied to both models and their performance is evaluated to determine whether classification accuracy can be maintained while using fewer features. The results show it is possible to achieve similar accuracy with fewer features, improving computational efficiency.
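A minimal scikit-learn sketch of the comparison described in the two summaries above (raw unigram features versus the top principal components). The vectorizer, kernel, and densifying step are assumptions, since the paper's exact preprocessing is not given here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def compare_full_vs_reduced(reviews, labels, n_components=57):
    """Compare classifiers on raw unigram features vs. the top principal
    components (sketch of the described comparison; settings are assumptions)."""
    densify = FunctionTransformer(lambda m: m.toarray(), accept_sparse=True)
    model_full = make_pipeline(TfidfVectorizer(), densify, GaussianNB())
    model_reduced = make_pipeline(TfidfVectorizer(), densify,
                                  PCA(n_components=n_components),
                                  SVC(kernel="linear"))
    return {
        "unigrams (Model I)": cross_val_score(model_full, reviews, labels, cv=5).mean(),
        f"top {n_components} PCs (Model II)": cross_val_score(model_reduced, reviews, labels, cv=5).mean(),
    }
```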
The document describes the development of the Simulated Colonoscopy Objective Performance Evaluation (S.C.O.P.E.) tool, which is a non-virtual reality simulation for assessing endoscopic skills. Four tasks were created to evaluate core skills: scope manipulation, tool targeting, loop management, and mucosal inspection. A study of 35 subjects stratified into novice, intermediate and expert groups found that experts outperformed intermediates, who outperformed novices, on all four tasks and the total S.C.O.P.E. score, demonstrating the tool's ability to differentiate skill levels. This provides initial validity evidence for S.C.O.P.E. as an objective assessment of endoscopic skills.
This document summarizes research on semi-supervised classification methods for protein crystallization image classification. It describes self-training and YATSI (Yet Another Two Staged Idea) semi-supervised classification approaches applied to a dataset of 2250 protein crystallization images. Experimental results show that naive Bayesian and SMO classifiers benefited from self-training and YATSI, while decision trees, multilayer perceptron, and random forests did not improve. Random forest provided the best overall classification performance. Future work will investigate active learning combined with semi-supervised learning.
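Self-training of the kind described can be approximated with scikit-learn's SelfTrainingClassifier, which iteratively pseudo-labels unlabeled samples whose predicted probability exceeds a threshold. The placeholder features, class count, and threshold below are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.naive_bayes import GaussianNB

# X: image feature matrix; y uses -1 for unlabeled images (sklearn's convention).
# The feature extraction step and all numbers here are placeholders.
rng = np.random.default_rng(0)
X = rng.standard_normal((2250, 40))
y = rng.integers(0, 3, size=2250)
y[rng.random(2250) < 0.7] = -1          # mark 70% of images as unlabeled

self_training = SelfTrainingClassifier(GaussianNB(), threshold=0.9)
self_training.fit(X, y)
print("labeled after self-training:", int((self_training.transduction_ != -1).sum()))
```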
Recognition of anaerobic exercise based on machine learning using smart watch sensor data - Suhyun Cho
This document discusses a study that used machine learning to recognize three types of anaerobic exercises (pull-ups, side pulls, and concentration curls) performed with dumbbells, based on sensor data from smartwatches. The researchers collected acceleration and gyroscope sensor data from smartwatches worn by subjects performing the exercises. They extracted features from the sensor data and used a support vector machine (SVM) algorithm to classify the exercises. Their best performing model used principal component analysis to reduce the features to two dimensions and a linear kernel, achieving a mean recognition rate of 97.7% for the three exercises.
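The best-performing configuration described (features reduced to two principal components, linear-kernel SVM) maps directly onto a scikit-learn pipeline. The placeholder feature matrix below stands in for the real accelerometer/gyroscope window features, which are not listed in the summary.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in features from accelerometer/gyroscope windows (e.g. mean, std,
# energy per axis); the study's exact feature list is not given here.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 12))
y = rng.integers(0, 3, size=300)        # pull-up / side pull / concentration curl

clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="linear"))
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```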
This document discusses predicting the secondary structure of proteins using machine learning algorithms. The researchers used 57 features of 700 amino acids to train logistic regression, naive Bayes, decision tree, and random forest models. Random forest achieved the best accuracy of 78.76% for a dataset of 1000 samples. The results show that modern machine learning algorithms can efficiently and accurately predict protein secondary structures. Room for improvement remains in adding new informative features to further boost prediction accuracy.
Robust Fault-Tolerant Training Strategy Using Neural Network to Perform Funct... - Eswar Publications
This paper introduces an efficient and robust training mechanism for a neural network that can be used for testing the functionality of software. The traditional neural network architecture is used, consisting of two phases: a training phase and an evaluation phase. Test cases are trained in the first phase, and the network then predicts outputs for untrained test cases as if they were normal inputs. The test oracle measures the deviation between the outputs for untrained and trained test cases and authorizes a final decision. Our framework can be applied to systems where the number of test cases outnumbers the functionalities, or where the system under test is too complex. It can also be applied to test case development when the modules of a system become tedious to retest after modification.
Younger: Predicting Age with Deep Learning - Prem Ananda
Younger: Predicting Age with Deep Learning is a data science project created by Prem Ananda to predict subjects' ages from photographs using: Python, TensorFlow, Deep Learning principles, and a convolutional neural network. The project was implemented using Google Colab Pro's GPU compute power.
Data analysis_PredictingActivity_SamsungSensorData - Karen Yang
- The document analyzes data from a study that tracked activity using smartphone sensors to predict activity type based on quantitative measurements.
- It builds random forest and support vector machine (SVM) models on a training data set and finds the random forest model has a lower error rate of 11%, making it the better predictive model.
- Variable importance analysis of the random forest model identifies 11 highly correlated variables as the most important predictors of activity type. Tuning the random forest model to use just these 11 variables results in a 16% error rate on a validation data set.
- Applying the tuned random forest model to a test data set achieves an error rate of 17%, confirming the 11 variables as key predictors of activity type.
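The workflow in these bullets (fit a random forest, rank variables by importance, retrain on the top 11) can be sketched as follows; the placeholder data, activity count, and tree count are assumptions, not the study's values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the Samsung sensor features; the real study uses hundreds of
# accelerometer/gyroscope summary variables.
rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 100))
y = rng.integers(0, 6, size=1000)                 # six activity types
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
top11 = np.argsort(rf.feature_importances_)[::-1][:11]   # 11 most important variables

rf_small = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[:, top11], y_tr)
print("validation error (top-11 model):", 1 - rf_small.score(X_val[:, top11], y_val))
```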
Implement Rapid Pain Intensity Estimation from Facial Images Using Neural Networks (以類神經網路實現臉部影像疼痛水準即時估測) - 高遠 林
This document describes a study that implemented rapid pain intensity estimation from facial images using an artificial neural network. The researchers extracted features from facial images using local binary patterns and max pooling, and used these as input to a neural network regression model. They augmented the training data to improve light and rotation invariance. The model achieved high correlation (0.92-0.95) and low error (0.16-0.24) for pain intensity estimation on test images, outperforming previous approaches. Data augmentation was shown to improve the model's convergence during training. The approach allows automatic pain recognition without requiring specialized pain-related categories in emotion recognition APIs.
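A simplified sketch of the described pipeline: uniform local binary pattern histograms as features feeding a small neural-network regressor. The max-pooling step and data augmentation are omitted for brevity, and the placeholder images and network size are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPRegressor

def lbp_histogram(gray_image, radius=1, n_points=8):
    """Uniform LBP histogram for one face image (sketch; the study also
    applies max pooling over image regions, omitted here)."""
    lbp = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return hist

# Placeholder data: random grayscale "images" and pain-intensity targets in [0, 1].
rng = np.random.default_rng(6)
images = (rng.random((200, 64, 64)) * 255).astype(np.uint8)
pain = rng.random(200)

X = np.array([lbp_histogram(img) for img in images])
reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X, pain)
```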
This study examined the effects of Nike SPARQ Vapor Strobe Eyewear on coordination skills in female soccer players at Monmouth University. The women's soccer team was split into a control group that trained without the eyewear and an experimental group that trained using the eyewear three times a week for four weeks. Players were tested before, during, and after training on drills measuring one-touch passing, v-boxing, and rollover boxing skills. Statistical analysis found a significant improvement over time in one-touch passing and v-boxing totals for both groups, but no significant difference between the groups wearing eyewear versus not. The study was limited by a lack of incentive for high performance.
Explore the latest techniques and technologies used in classifying fetal health, from traditional methods to cutting-edge AI approaches. Understand the importance of accurate classification for prenatal care and fetal well-being. Join us to delve into this critical aspect of healthcare. Visit https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/ for more data science insights.
Robust Tracking Via Feature Mapping Method and Support Vector Machine - IRJET Journal
This document presents a visual object tracking method using expectation maximization algorithm and support vector machine for improved accuracy and robustness. The method involves selecting an initial target in the first frame, extracting features using expectation maximization, and tracking the target across subsequent frames using a support vector machine classifier. The method is able to track objects undergoing occlusion, deformation, rotation and other challenges. It maintains a tracking speed of around 45 frames per second and outperforms other tracking methods in terms of accuracy according to qualitative and quantitative evaluations.
Cross-validation aggregation for forecasting - Devon Barrow
Cross-validation aggregation combines the benefits of cross-validation and forecast aggregation. It saves the predictions from models estimated on different cross-validation folds and averages these predictions to obtain the final forecast. Empirical results on 111 time series show that cross-validation aggregation outperforms simple model averaging and bagging, with the lowest errors on validation sets. Different cross-validation aggregation methods perform best depending on data characteristics like time series length and forecast horizon.
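Cross-validation aggregation can be sketched as: fit one model per training fold and average the resulting forecasts. The ridge base learner and numpy-array inputs below are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge

def crogging_forecast(X_train, y_train, X_future, n_splits=5):
    """Cross-validation aggregation sketch: fit one model per CV fold and
    average their forecasts. Inputs are numpy arrays of lagged features
    (an illustrative choice, not the paper's exact base learner)."""
    forecasts = []
    for tr_idx, _ in KFold(n_splits=n_splits, shuffle=False).split(X_train):
        model = Ridge().fit(X_train[tr_idx], y_train[tr_idx])
        forecasts.append(model.predict(X_future))
    return np.mean(forecasts, axis=0)      # aggregate the fold forecasts
```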
Scott MacKenzie at BayCHI: Evaluating Eye Tracking Systems for Computer Data ... - BayCHI
The human eye, with the assistance of an eye tracking apparatus, may serve as an input controller to a computer system. Much like point-select operations with a mouse, the eye can "look-select", and thereby activate items such as buttons, icons, links, or text. Evaluating the eye working in concert with an eye tracking system requires a methodology that uniquely addresses the characteristics of both the eye and the eye tracking apparatus. Among the interactions considered are eye typing and mouse emulation. Eye typing involves using the eye to interact with an on-screen keyboard to generate text messages. Mouse emulation involves using the eye for conventional point-select operations in a graphical user interface. In this case, the methodologies for evaluating pointing devices (e.g., Fitts' law and ISO 9241-9) are applicable but must be tailored to the unique characteristics of the eye, such as saccadic movement. This presentation surveys and reviews these and other issues in evaluating eye-tracking systems for computer input.
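The Fitts' law quantities referenced above (in the ISO 9241-9 style) can be computed directly. The sketch below uses the Shannon formulation of the index of difficulty and the effective-width throughput; the example values are chosen arbitrarily, not taken from the talk.

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return math.log2(distance / width + 1)

def throughput(distance, sd_endpoints, movement_time_s):
    """ISO 9241-9 style throughput (bits/s) using the effective target width
    We = 4.133 * SD of the selection endpoints, adapted here to eye pointing."""
    we = 4.133 * sd_endpoints
    return fitts_id(distance, we) / movement_time_s

# Example: 400 px movement, 12 px endpoint SD, 0.6 s dwell-to-select time.
print(round(throughput(400, 12, 0.6), 2), "bits/s")
```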
Scott MacKenzie is associate professor of Computer Science and Engineering at York University, Toronto, Canada. His research is in human-computer interaction with an emphasis on human performance measurement and modeling, experimental methods and evaluation, interaction devices and techniques, alphanumeric entry, language modeling, and mobile computing. He has more than 100 peer-reviewed publications in the field of Human-Computer Interaction, including more than 30 from the ACM's annual SIGCHI conference. He has given numerous invited talks over the past 20 years.
Thesis presentation: Applications of machine learning in predicting supply risks - TuanNguyen1697
The document describes a thesis defense that applies machine learning techniques to predict supply chain risks for an e-commerce company. The thesis uses machine learning algorithms like support vector machines, decision trees, and random forests on delivery data to build predictive models of delayed deliveries. An initial analysis shows that random forests outperform other models. The thesis proposes improvements to the models through recursive feature elimination and a two-phase cost-complexity pruning approach for decision trees to further optimize performance and interpretability of the results.
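A hypothetical sketch of the two proposed improvements, combining scikit-learn's recursive feature elimination with cost-complexity pruning of a decision tree. The feature count, alpha grid, and the commented input names are placeholders, not values from the thesis.

```python
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Recursive feature elimination feeding a pruned decision tree.
pipe = Pipeline([
    ("rfe", RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=10)),
    ("tree", DecisionTreeClassifier(random_state=0)),
])
search = GridSearchCV(pipe, {"tree__ccp_alpha": [0.0, 0.001, 0.01, 0.05]}, cv=5)
# search.fit(X_deliveries, y_delayed)   # hypothetical delivery features / delay labels
```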
This research focused on classifying Human-Biometric Sensor Interaction errors in real-time. The Kinect 2 was used as a measuring device to track the position and movements of the subject through a simulated border control environment. Knowing, in detail, the state of the subject ensures that the human element of the HBSI model is analyzed accurately. A network connection was established with the iris device to know the state of the sensor and biometric system elements of the model. Information such as detection rate, extraction rate, quality, capture type, and other metrics was available for use in classifying HBSI errors. A Federal Inspection Station (FIS) booth was constructed to simulate a U.S. border control setting in an international airport. The subjects were taken through the process of capturing iris and fingerprint samples in an immigration setting. If errors occurred, the Kinect 2 program would classify the error and save it for further analysis.
IT 34500 is an undergraduate course offered to Purdue West Lafayette students. The course gives an introduction to biometrics and automatic identification and data capture technologies.
The human signature provides a natural, publicly accepted, legally admissible method for providing authentication to a process. Automatic biometric signature systems assess both the drawn image and the temporal aspects of signature construction, providing enhanced verification rates over and above conventional outcome assessment. Capturing these constructional data requires the use of specialist 'tablet' devices. In this paper we explore the enrolment performance using a range of common signature capture devices and investigate the reasons behind user preference. The results show that writing feedback and familiarity with conventional 'paper and pen' donation configurations are the primary motivations for user preference. These results inform the choice of signature device from both technical performance and user acceptance viewpoints.
The inherent differences between secret-based authentication (such as passwords and PINs) and biometric authentication have left gaps in the credibility of biometrics. These gaps are due, in large part, to the inability to adequately cross-compare the two types of authentication. This paper provides a comparison between the two by equating biometric entropy in the same way the entropy of secrets is represented. Similar to the method used by Ratha, Connell, and Bolle [1], the x and y dimensions of the fingerprints were examined to determine all possible locations of minutiae. These locations were then examined based on the observed probability of minutiae occurring in each of the designated locations. The results show statistically significant differences in the frequencies and probabilities of occurrence for minutiae location, type, and angle across all possible minutiae locations. These components were applied to Shannon's Information Theory [2] to determine the entropy of fingerprint biometrics, which was estimated to be equivalent to an 8.3-character, randomly chosen password.
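A rough sketch of the entropy calculation described: estimate the probability of each minutia location, type, and angle bin from observed frequencies, compute the Shannon entropy of each component, and sum under an independence assumption. The frequency tables below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of an observed frequency table."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical frequency tables: how often minutiae fall in each location cell,
# each type, and each quantized angle bin (real tables come from the dataset).
location_counts = np.random.default_rng(3).integers(1, 50, size=(16, 16))
type_counts = [700, 300]                    # e.g. ridge endings vs. bifurcations
angle_counts = np.random.default_rng(4).integers(1, 40, size=32)

total_bits = (shannon_entropy(location_counts.ravel())
              + shannon_entropy(type_counts)
              + shannon_entropy(angle_counts))
print("entropy per minutia (bits), assuming independent components:", round(total_bits, 1))
```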
This course covers biometric usability testing with a focus on border control and mobile devices. The course objectives are to understand biometric systems, how people use them, testing methodologies, limitations, and research methods. Topics include genuine users, usability, attacks, border security, tokens, qualitative/quantitative research, and focus groups. Students will complete a research-based group project, assignments, and quizzes. The course uses lectures, discussions, guest speakers and students are expected to regularly attend and complete all work.
This document examines the stability of iris recognition over short periods of time. It analyzes iris scan data from 60 participants in a single visit lasting 10 minutes or less. The stability of each iris is measured using a stability score index. Statistical analysis finds no significant difference in stability scores between age groups, gender, or ethnicity. This suggests the iris remains stable within a single visit. Future work could examine stability over longer time periods and whether it decreases with more extended testing.
In this research, intra-visit match score stability was examined for the human iris. Scores were found to be statistically stable in this short time frame.
A lot of recent work in the Center has focused on different topics concerning "time". Iris stability across different "times" has been at the forefront due to work in the undergraduate class IT345, the graduate class IT545, and Ben Petry's thesis. Of course, "time" is a fairly imprecise word: assessing stability "over time" leaves the research question ambiguous, since time may mean milliseconds, months, years, or even the life of the user. Upon further examination of the academic literature, the reporting of research duration, collection interval, and the specific time frame of interest is sporadic at best and missing completely at worst. To solve this issue, the Center has created the biometric duration scale (BDS) model with associated suggested best practices for reporting time duration in biometrics.
The BDS model marries the general biometric model with the HBSI model to create a logical flow of five phases: the presentation definition phase, sample phase, processing phase, and enrollment or matching phase. By tracking information through this progression, such as the specific subject presentations made, HBSI errors, and FTE/enrollment scores (to name a few), performance within the general biometric model can be examined. The BDS model goes one step further by defining specific durations for reporting research-specific metrics. With this model, outcomes that affect yearly performance metrics can be examined through monthly performance, daily performance, or even specific user presentations, and how those subcomponents affect the whole system.
Additionally, best practices for the reporting of duration are included. The reporting methodology stems from ISO 8601 and is in compliance with ISO 21920. In the common reporting structure, the start date, duration, number of visits and their intervals, and the time scope of interest for the specific research are given in a logical, readily available format alongside the very specific, detailed ISO 8601 methodology. The goal of creating a formal script for reporting research duration is to eliminate ambiguity and create an environment that encourages replication and drawing parallels between studies.
The document examines the stability of iris recognition over a short period of time. It discusses how iris recognition works and why the iris is considered unique and stable over time. The research presented in the document analyzed iris image data collected over four weekly visits. The results showed no statistically significant difference in iris matching scores between the different visits, suggesting the iris is stable over a short time period. This supports the idea that the iris can be used for biometric identification applications that require stability over time.
ICBR has been involved in standards development for over 14 years through committees like INCITS M1 and ISO/IEC JTC1 SC37. To provide students real-world experience, students participated on these committees by submitting documents, comments, and reviews. This engagement between academia and standards development benefits both fields by allowing applied research and education in new and emerging technical areas.
The stability score index, conceptualized in 2013, was designed to address the weaknesses of the zoo menagerie and other performance metrics by quantifying the relative stability of a user from one condition to another. In this paper, the measure of interoperability is the stability score obtained from enrolling on one sensor and verifying on multiple sensors. The results showed that, as with performance, individual stability was not consistent across these sensors. When examining stability by sensor family (capacitive, optical, and thermal), capacitive enrollment sensors were the least stable, while enrolling and verifying on a thermal sensor made individuals the most stable of the three family types. With respect to interaction type, enrolling on touch and verifying on swipe was more stable than enrolling on swipe and verifying on swipe, which was an interesting finding. Individuals using the thermal sensor generated the most stable stability scores.
This document discusses advances in testing and evaluating human-biometric sensor interaction using a new model. It describes gaps in traditional biometric testing, such as how users interact with systems. A new Human Biometric Sensor Interaction model is presented and has been tested on iris and fingerprint biometrics. The model has been expanded to more complex systems like border gates. Testing looks at how users interact with biometric systems in different environments and factors like throughput. The goal is to better test and evaluate systems without overburdening test facilities.
This document discusses biometric testing and evaluation. It covers traditional biometric algorithm testing and more complex operational testing. There are gaps in areas like training, accessibility, human factors, and determining what causes errors. Filling these gaps is an ongoing work in progress as biometric devices become more complex and deployed in more environments and applications. Different types of testing include technology, scenario, and operational evaluations to adequately assess performance and usability.
This course provides an overview of biometric technology as it relates to security, access control, and authentication. It examines basic biometric terminology and various biometric modalities such as fingerprint, face, and iris recognition. Students will learn about biometric data evaluation and interpretation, standards, integration, and challenges. The course is divided into fundamental, modality, integration, and research building blocks to cover topics like identification, matching, fusion, standards, and interoperability.
This document outlines the structure and goals of a research study on the stability of iris recognition match scores over time. It introduces the problem statement around the lack of quantification of match score stability, and previews the research question, significance, purpose and scope, assumptions, limitations, and delimitations that will be discussed in the following chapters which focus on the literature review, methodology, results, and conclusions of the study.
According to a 2007 report by Frost and Sullivan, revenues for non-AFIS fingerprint devices in notebook PCs and wireless devices are anticipated to grow from $148.5 million to $1588.0 million by 2014, a compound annual growth rate of 40.3% [1]. The AFIS market has a compound annual growth rate of 15.2%, with revenues of $445.0 million in 2007. With the development of mobile applications in a number of different market segments, such as healthcare, retail, and law enforcement, this paper analyzed the performance of fingerprints of different sizes, from different sensors...
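As a quick check, the quoted 40.3% figure is consistent with the compound annual growth rate formula, assuming a 2007 baseline and the 2014 endpoint:

```python
# CAGR = (ending / starting) ** (1 / years) - 1
start, end, years = 148.5, 1588.0, 7          # $M, assuming 2007 -> 2014
cagr = (end / start) ** (1 / years) - 1
print(f"non-AFIS CAGR: {cagr:.1%}")            # ~40.3%
```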
This is a preview of the databases we use in the Center. The presentation overviews our data collection GUI, data storage (data warehouse), and our project management database. These databases work together to allow us to run our operations efficiently.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Northern Engraving | Nameplate Manufacturing Process - 2024 - Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc... - DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAG pipelines, and vector databases.
Essentials of Automations: Exploring Attributes & Automation Parameters (Safe Software)
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
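As an illustration only (not the authors' operators or tooling), here is a minimal Python sketch of the mutation-testing idea, assuming a toy intent-based chatbot design and a hypothetical classify() callback standing in for the NLU engine: one operator deletes a training phrase from an intent to emulate a design fault, and the existing test scenarios are re-run to see whether they "kill" the mutant.

```python
import copy

# Hypothetical chatbot design: intents mapped to training phrases (illustration only).
chatbot_design = {
    "book_flight": ["book a flight", "I need a plane ticket", "fly to Paris"],
    "get_support": ["I need help", "contact customer support"],
}

def delete_training_phrase(design, intent, index):
    """Mutation operator: drop one training phrase from an intent (emulates an incomplete intent)."""
    mutant = copy.deepcopy(design)
    del mutant[intent][index]
    return mutant

def run_scenarios(design, scenarios, classify):
    """Return True if every (utterance, expected_intent) scenario still passes on this design."""
    return all(classify(design, utterance) == expected for utterance, expected in scenarios)

def mutation_score(design, scenarios, classify):
    """Fraction of mutants detected ("killed") by the test scenarios."""
    mutants = [
        delete_training_phrase(design, intent, i)
        for intent, phrases in design.items()
        for i in range(len(phrases))
    ]
    killed = sum(1 for mutant in mutants if not run_scenarios(mutant, scenarios, classify))
    return killed / len(mutants) if mutants else 1.0
```

In this reading, a high mutation score suggests the scenarios are strong enough to notice seeded design faults, while a low score points to gaps in the test suite.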
From Natural Language to Structured Solr Queries using LLMs (Sease)
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or "cognitive") gap remains between the data users' needs and the data producers' constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. Such a natural-language, conversational engine could facilitate access to and usage of the data by leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
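As a rough sketch of this pipeline (not Sease's implementation), the following Python outline assumes a hypothetical ask_llm() helper and a local Solr core named "docs": the LLM is prompted with the index's field metadata, asked to emit Solr query parameters as JSON, and the result is sent to Solr's standard /select endpoint.

```python
import json
import requests  # assumes the requests package is available

SOLR_URL = "http://localhost:8983/solr/docs/select"  # hypothetical core name

# Field metadata the LLM may use (would normally come from the Solr schema API).
INDEX_METADATA = {
    "title": "text_general",
    "author": "string",
    "published_year": "pint",
    "body": "text_general",
}

PROMPT_TEMPLATE = """You translate user questions into Solr query parameters.
Available fields and types: {schema}
Return JSON with keys "q", "fq" (list of filter queries) and "sort".
Question: {question}"""

def nl_to_solr_params(question, ask_llm):
    """ask_llm is a placeholder for whatever LLM client is used; it returns the raw JSON string."""
    raw = ask_llm(PROMPT_TEMPLATE.format(schema=json.dumps(INDEX_METADATA), question=question))
    params = json.loads(raw)  # e.g. {"q": "body:\"vector search\"", "fq": ["published_year:[2020 TO *]"], "sort": "published_year desc"}
    params.setdefault("wt", "json")
    return params

def search(question, ask_llm):
    """Run the translated query against Solr and return the matching documents."""
    response = requests.get(SOLR_URL, params=nl_to_solr_params(question, ask_llm))
    response.raise_for_status()
    return response.json()["response"]["docs"]
```

The LLM output would in practice need validation against the schema before it is executed, but the sketch shows where the metadata-driven translation sits in the flow.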
The Microsoft 365 Migration Tutorial For Beginner.pptx (operationspcvita)
This presentation will help you understand the power of Microsoft 365. It covers every productivity app included in Office 365, outlines common Office 365 migration scenarios, and explains how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf (Chart Kalyan)
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels (Northern Engraving)
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into an industry leader in the manufacture of product branding, automotive cockpit trim, and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, we will first analyze scaling approaches and then select the proper ones for our system.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but about applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with the applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
What is an RPA CoE? Session 2 – CoE Roles (DianaGray10)
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
1. Examination of Fingerprint Image Quality and Performance on Force Acquisition vis-à-vis Auto-capture. Carnahan Conference | San Jose, CA | October 7th, 2010. Biometric Standards, Performance, and Assurance Laboratory | Purdue University. www.bspalabs.org | www.twitter.com/bspalabs | www.slideshare.net/bspalabs | www.linkedin.com/companies/bspa-labs
2. Agenda: Motivation – why are we doing this?; Data Collection; Results; Questions and Further Research; Comments / Questions
3. Why are we doing this? Force improves fingerprint image quality and performance. We have done a number of studies on fingerprint force, across 10-print, single-print optical, and capacitance slap and swipe sensors. We wanted to examine different force levels and how sensitive force-sensor acquisition could be.
4. Four-fold motivation: validating results from Kukula et al. (2007); the difference between auto-capture and force-capture; the effect of force-capture on time; user comfort level.
6. Methodology – force capture: examination of force and performance; auto-capture in Verifinger 5.0; manipulation of force through the SDK; force levels of 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, & 7.5 N with a tolerance band of ±0.5 N using the force-capture method; offline analysis using Verifinger 6.0.
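To make the tolerance-band idea concrete, here is a minimal Python sketch of a force-gated acquisition loop. It is an illustration only, not the Verifinger SDK code used in the study; read_force() and capture_image() are hypothetical callbacks standing in for the sensor and capture APIs.

```python
import time

TOLERANCE_N = 0.5  # tolerance band around each target force level (from the methodology)
FORCE_LEVELS_N = [1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5]

def capture_at_force(target_n, read_force, capture_image, timeout_s=10.0):
    """Wait until the measured force is within target_n ± TOLERANCE_N, then trigger a capture.

    read_force() and capture_image() are hypothetical callbacks for the sensor hardware.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        force_n = read_force()
        if abs(force_n - target_n) <= TOLERANCE_N:
            return capture_image()  # force is inside the band: grab the fingerprint image
        time.sleep(0.01)            # poll roughly every 10 ms
    raise TimeoutError(f"Force never settled within {target_n} ± {TOLERANCE_N} N")

def collect_session(read_force, capture_image, images_per_level=3):
    """Collect images at every target force level, mirroring the force-capture protocol."""
    return {
        level: [capture_at_force(level, read_force, capture_image) for _ in range(images_per_level)]
        for level in FORCE_LEVELS_N
    }
```

The extra polling and settling time in such a loop is also where the slight increase in processing time reported later would come from.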
7. Methodology – Timing: throughput is important in an operational setting; what is the impact of force on timing?
10. Data Collection Procedures: data were collected in accordance with our quality manual (which approximates ISO 17025); consent forms were approved by the IRB; advertisements were posted around campus; another fingerprinting data collection activity was ongoing at the same time; subjects were seated when they interacted with the fingerprint sensor.
11. Data Collection Procedures: 24 fingerprint images were collected per subject, i.e. three images at natural force using the auto-capture method plus three images at each of the seven force levels (1.5, 2.5, 3.5, 4.5, 5.5, 6.5, & 7.5 N, with a tolerance band of ±0.5 N) using the force-capture method (3 + 3 × 7 = 24). A survey was also administered.
25. Results – Conclusion: Force impacts both image quality and performance. Using the force-capture acquisition method slightly increases the biometric subsystem's processing time. A force level of 5.5 N is recommended as the optimal level, as it does not sacrifice the user's comfort.
26. Any Questions? Follow the discussion on the research blog after the conference: www.bspalabs.org/
27. Authors and Primary Contact Information. Authors: Benny Senjaya, Graduate Researcher at BSPA Lab (bennysenjaya@gmail.com); Stephen Elliott, Ph.D., BSPA Lab Director & Associate Professor (elliott@purdue.edu); Shimon Modi, Ph.D., Visiting Scientist at C-DAC Mumbai (shimonmodi@gmail.com); Tae Bong Lee, Ph.D., Professor at Kyungwon College (tblee@kyungwon.ac.kr). Contact: Stephen Elliott, Ph.D., Associate Professor, Director of BSPA Labs, elliott@purdue.edu.