This study investigated the effect of force level (3, 5, 7, 9, and 11 N) on fingerprint matching performance, image quality scores, and minutiae count for optical and capacitance sensors. Three images were collected from the right index finger of each of 75 participants for each sensing technology. Descriptive statistics, analysis of variance, and Kruskal-Wallis non-parametric tests were conducted to assess significant differences in minutiae counts and image quality scores by force level. The results reveal a significant difference in image quality score by force level and sensor technology, but not in minutiae count for the capacitance sensor. The image quality score is one of many factors that influence matching performance, yet removing low-quality images did not improve system performance at any force level. Further research is needed to identify other manipulable factors that improve the interaction between a user and a device and the subsequent matching performance.
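As a rough illustration of the statistical analysis described above, the sketch below runs a one-way ANOVA and a Kruskal-Wallis test on simulated image quality scores grouped by force level. All data, group means, and spreads are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
force_levels = [3, 5, 7, 9, 11]  # newtons
# Simulated quality scores per force level, 75 participants each
# (higher force -> slightly lower mean quality here, an assumption)
scores = {f: rng.normal(loc=80 - f, scale=5, size=75) for f in force_levels}

# One-way ANOVA (parametric) on the five force-level groups ...
f_stat, anova_p = stats.f_oneway(*scores.values())
# ... and Kruskal-Wallis (non-parametric) on the same groups
h_stat, kw_p = stats.kruskal(*scores.values())

print(f"ANOVA p={anova_p:.3g}, Kruskal-Wallis p={kw_p:.3g}")
```

With group means this far apart, both tests flag a significant difference by force level; on real data the two tests can disagree, which is why studies often report both.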
Abstract—Biometric systems are increasingly deployed in networked environments, and issues related to interoperability are bound to arise as single-vendor, monolithic architectures become less desirable. Interoperability issues affect every subsystem of the biometric system, and a statistical framework to evaluate interoperability is proposed. The framework was applied to the acquisition subsystem of a fingerprint recognition system and the results evaluated against it. Fingerprints were collected from 100 subjects on 6 fingerprint sensors. The results show that the performance of interoperable fingerprint datasets is not easily predictable and that the proposed framework can aid in removing unpredictability to some degree.
The vulnerabilities of biometric sensors have been discussed extensively in the literature and popularized in films and television shows. This research compares the image quality of an artificial print with that of a genuine finger, tracking the characteristics of the two, including minutiae counts and image quality, as repeated samples are taken.
Fingerprint recognition is an important biometric technology whose use continues to grow. It is affected by physiological factors, such as age and wear of the skin, and by technological factors, such as sensor technology. This paper builds on previous research into gender differences in fingerprint features and reports differences in the performance of fingerprints collected from males and females. The researchers propose a fingerprint analysis framework for testing gender differences and apply it to the collected fingerprints.
The proliferation of networked authentication systems has put focus on the issue of interoperability. Fingerprint sensors are based on a variety of technologies that introduce inconsistent distortions and variations in the feature set of the captured image, which makes interoperability challenging. The motivation of this research was to examine the effect of fingerprint sensor interoperability on the performance of a minutiae-based matcher. A statistical analysis framework for testing interoperability was formulated to test similarity of minutiae count, image quality, and performance between native and interoperable datasets. False non-match rate (FNMR) was used as the performance metric. Interoperability performance analysis was conducted on each sensor dataset and also by grouping datasets based on the acquisition technology and interaction type of the acquisition sensor. The lowest interoperable FNMR observed was 0.12%.
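The FNMR metric used above can be sketched as the fraction of genuine (same-finger) comparisons whose match score falls below the decision threshold. The scores and threshold below are invented for illustration; the study's matcher and operating point are not reproduced here.

```python
import numpy as np

def fnmr(genuine_scores, threshold):
    """False non-match rate: genuine attempts rejected / genuine attempts."""
    genuine_scores = np.asarray(genuine_scores, dtype=float)
    return float(np.mean(genuine_scores < threshold))

# Hypothetical genuine match scores from eight same-finger comparisons
scores = np.array([0.91, 0.85, 0.42, 0.95, 0.77, 0.88, 0.60, 0.93])
print(fnmr(scores, threshold=0.5))  # → 0.125 (one of eight falls below 0.5)
```

In an interoperability study the same computation is repeated for each native and cross-sensor dataset, and the resulting FNMRs are compared.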
Increasingly sophisticated biometric methods are being used for a
variety of applications in which accurate authentication of people is necessary.
Because all biometric methods require humans to interact with a device of some
type, effective implementation requires consideration of human factors issues.
One such issue is the training needed to use a particular device appropriately.
In this paper, we review human factors issues in general that are associated with
biometric devices and focus more specifically on the role of training.
IMAGE QUALITY ASSESSMENT FOR FAKE BIOMETRIC DETECTION: APPLICATION TO IRIS, F... (ijiert bestjournal)
In this paper, ensuring the actual presence of a real legitimate trait, in contrast to a fake self-manufactured synthetic or reconstructed sample, is treated as a significant problem in biometric authentication, one that requires the development of new and efficient protection measures. We present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same image acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available datasets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive with other state-of-the-art approaches, and that the analysis of the general image quality of real biometric samples reveals highly valuable information that can be used very efficiently to discriminate them from fake traits.
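A minimal sketch of the kind of feature this family of methods relies on: full-reference quality measures computed between the input image and a Gaussian-smoothed copy of itself. A real implementation would use many more measures (the abstract mentions 25); the two shown here, and the random test image, are purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quality_features(img):
    """Two toy quality features of img vs. its Gaussian-smoothed copy."""
    smoothed = gaussian_filter(img.astype(float), sigma=1.0)
    err = img - smoothed
    mse = float(np.mean(err ** 2))                       # mean squared error
    psnr = float(10 * np.log10(255.0 ** 2 / mse))        # peak SNR in dB
    return mse, psnr

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in image
mse, psnr = quality_features(img)
print(f"MSE={mse:.2f}, PSNR={psnr:.2f} dB")
```

The intuition is that fake traits (printed or reconstructed samples) tend to have a different sensitivity to smoothing than live captures, so a classifier over such features can separate the two.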
This study evaluated the quality of fingerprint images obtained from two populations interacting with two commercially available fingerprint recognition sensors (one capacitance, one optical). Currently, there is a lack of information within the biometric community pertaining to fingerprint image quality in an elderly population. Therefore, this study specifically compared the fingerprint image quality of an elderly population (55+) to that of an 18-25-year-old baseline population. Key individual variables that may influence image quality were collected and examined: age, gender, ethnicity, handedness, moisture content of each index finger, occupation(s)/hobby(ies), use of hand moisturizers, and prior use of fingerprint devices.
Fake Multi Biometric Detection using Image Quality Assessment (ijsrd.com)
In the present era, where technology plays a prominent role, persons can be identified for security purposes by their behavioral and physiological characteristics (for example, fingerprint, face, iris, keystroke, signature, or voice) through a computer system called a biometric system. In such systems, security remains an open question because of various intruders and attacks. This problem can be addressed by improving security with efficient algorithms, so that a fake person presenting a synthetic sample of an authenticated person can be identified and rejected.
Ensuring that the object presented to a biometric device is a real trait rather than a reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. This paper presents a software-based fake biometric detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition devices through the use of image quality assessment in a fast and user-friendly manner. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image to distinguish between real and impostor samples. The proposed method is highly competitive with other approaches, as the analysis of the general image quality of real biometric samples reveals highly valuable information that can be used very efficiently to discriminate them from fake traits.
An SVM based Statistical Image Quality Assessment for Fake Biometric Detection (IJTET Journal)
Abstract
A biometric system is a computer-based system used to identify a person from behavioral and physiological characteristics (for example, fingerprint, face, iris, keystroke, signature, or voice). A typical biometric system consists of feature extraction and pattern matching, but such systems are increasingly attacked with fake biometric samples. This paper describes fingerprint biometric techniques, introduces attacks on such systems, shows how image quality assessment for liveness detection can protect the system from fake biometrics, and discusses how a multi-biometric system is more secure than a uni-biometric system. A Support Vector Machine (SVM) classification technique is used for training and testing the fingerprint images: a test input fingerprint image is classified as real or fake by matching its quality score against the trained real and fake fingerprint samples.
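The SVM step described above can be sketched as follows: train on quality-feature vectors labeled real/fake, then classify new samples. The two features per sample, the cluster positions, and the labels are all invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Two hypothetical quality features per sample; real prints are assumed
# to cluster at higher quality values than fakes.
real = rng.normal(loc=[0.8, 0.7], scale=0.05, size=(50, 2))
fake = rng.normal(loc=[0.4, 0.3], scale=0.05, size=(50, 2))
X = np.vstack([real, fake])
y = np.array([1] * 50 + [0] * 50)  # 1 = real, 0 = fake

clf = SVC(kernel="rbf").fit(X, y)  # train the RBF-kernel SVM
preds = clf.predict([[0.78, 0.72], [0.41, 0.33]])
print(preds)  # → [1 0]: first sample classed real, second classed fake
```

On real data the two classes overlap far more than in this toy example, so cross-validated accuracy, not a two-point check, is the meaningful measure.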
IRJET-Gaussian Filter based Biometric System Security Enhancement (IRJET Journal)
M. Selvi, T. Manickam, C. N. Marimuthu, "Gaussian Filter based Biometric System Security Enhancement", International Research Journal of Engineering and Technology (IRJET), Volume 2, Issue 01, April 2015. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net
Abstract
This paper presents a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. Ensuring the actual presence of a real legitimate trait, in contrast to a fake self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. The method enhances the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment.
The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same image acquired for authentication purposes) to distinguish between legitimate and impostor samples. It is a multi-biometric and multi-attack protection method that aims to overcome part of these limitations through the use of Image Quality Assessment (IQA).
Moreover, being software-based, it presents the usual advantages of this type of approach: it is fast, as it needs only one image (i.e., the same sample acquired for biometric recognition) to detect whether it is real or fake; non-intrusive; user-friendly (transparent to the user); cheap; easy to embed in already functional systems; and requires no extra hardware.
Role of fuzzy in multimodal biometrics system (Kishor Singh)
Person identification is possible through biometrics using physiological and behavioral characteristics such as face, ear, thumbprint, voice, signature, and keystroke. Unimodal biometric systems face a range of problems, including noisy data, intra-class variations, limited degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. Some of these drawbacks can be overcome by multimodal biometric technologies, which combine data from multiple information sources. In this paper we work on multimodal biometrics using three modalities, face, ear, and foot, to find the optimal result using a fuzzy fusion mechanism that produces the final identification decision via fuzzy rules, enhancing the quality of the multimodal biometric system.
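To make the idea of rule-based fuzzy fusion concrete, here is a toy sketch: each modality's match score is mapped to fuzzy "low"/"high" memberships, two simple rules are evaluated with max-min inference, and the stronger rule decides. The membership shapes, rules, and scores are all assumptions for illustration, not the paper's actual system.

```python
def memberships(score):
    """Fuzzy 'low'/'high' confidence memberships for a score in [0, 1]."""
    high = max(0.0, min(1.0, (score - 0.3) / 0.4))  # ramp from 0.3 to 0.7
    return {"low": 1.0 - high, "high": high}

def fuzzy_fuse(face, ear, foot):
    m = [memberships(s) for s in (face, ear, foot)]
    # Rule 1: if any two modalities are 'high', lean accept (max-min inference)
    accept = max(min(a["high"], b["high"]) for a, b in
                 [(m[0], m[1]), (m[0], m[2]), (m[1], m[2])])
    # Rule 2: if any modality is strongly 'low', lean reject
    reject = max(x["low"] for x in m)
    return "accept" if accept > reject else "reject"

print(fuzzy_fuse(0.9, 0.8, 0.4))  # → accept (two strong modalities outvote one weak)
print(fuzzy_fuse(0.3, 0.2, 0.5))  # → reject
```

The appeal of the fuzzy formulation is that rules like these stay interpretable while still blending graded evidence from all three modalities.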
New Research Articles, October 2019 Issue, Signal & Image Processing An Interna... (sipij)
Signal & Image Processing: An International Journal (SIPIJ)
ISSN: 0976-710X [Online]; 2229-3922 [Print]
http://www.airccse.org/journal/sipij/index.html
Current Issue: October 2019, Volume 10, Number 5
Free-Reference Image Quality Assessment Framework Using Metrics Fusion and Dimensionality Reduction
Besma Sadou1, Atidel Lahoulou2, Toufik Bouden1, Anderson R. Avila3, Tiago H. Falk3 and Zahid Akhtar4, 1Non Destructive Testing Laboratory, University of Jijel, Algeria, 2LAOTI laboratory, University of Jijel, Algeria, 3University of Québec, Canada and 4University of Memphis, USA
Test-cost-sensitive Convolutional Neural Networks with Expert Branches
Mahdi Naghibi1, Reza Anvari1, Ali Forghani1 and Behrouz Minaei2, 1Malek-Ashtar University of Technology, Iran and 2Iran University of Science and Technology, Iran
Robust Image Watermarking Method using Wavelet Transform
Omar Adwan, The University of Jordan, Jordan
Improvements of the Analysis of Human Activity Using Acceleration Record of Electrocardiographs
Itaru Kaneko1, Yutaka Yoshida2 and Emi Yuda3, 1&2Nagoya City University, Japan and 3Tohoku University, Japan
http://www.airccse.org/journal/sipij/vol10.html
Seminar report on Error Handling methods used in bio-cryptography (kanchannawkar)
Detailed information about real-time errors in biometric devices and about securing encryption keys, with the aim of making authentication systems more secure. This seminar report describes the combination of biometrics with cryptography, and the methods used to handle real-time errors such as false accepts and false rejects, together with their rates, i.e., the false accept rate (FAR) and false reject rate (FRR), in biometric systems.
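The two error rates named above can be sketched directly: FAR is the fraction of impostor attempts accepted, FRR the fraction of genuine attempts rejected, both relative to a decision threshold. The scores below are invented for illustration.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR = impostors accepted / impostors; FRR = genuines rejected / genuines."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = float(np.mean(impostor >= threshold))
    frr = float(np.mean(genuine < threshold))
    return far, frr

genuine = [0.9, 0.8, 0.3, 0.85]    # same-person comparison scores
impostor = [0.1, 0.2, 0.6, 0.15]   # different-person comparison scores
far, frr = far_frr(genuine, impostor, threshold=0.5)
print(far, frr)  # → 0.25 0.25
```

Raising the threshold trades FAR down against FRR up, which is why the two rates are always reported as a pair.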
Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ... (CSCJournals)
Identification of a person using multiple biometrics is a very common approach in existing user validation systems. Most multibiometric systems depend on fusion schemes, and many fusion techniques have shown promising results in the literature by combining multiple biometric modalities with suitable fusion schemes. Similar practices are found in ensembles of classifiers, which increase classification accuracy by combining different types of classifiers. In this paper, we present a comparative study of traditional fusion methods, such as feature-level and score-level fusion, against well-known ensemble methods such as bagging and boosting. Specifically, in our experiments we fused face and palmprint modalities and employed a probability model, Naive Bayes (NB); a neural network model, Multi-Layer Perceptron (MLP); and a supervised machine learning algorithm, Support Vector Machine (SVM), as classifiers. The ensemble approaches, boosting and bagging, are statistically well recognized. From the experimental results, the traditional score-level fusion method is the recommended strategy for biometric fusion over ensemble learning techniques.
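Score-level fusion, the strategy the study recommends, can be sketched minimally: normalize each modality's match scores to a common range, then combine them with a weighted sum. The face and palmprint scores below, and the equal weights, are invented for illustration.

```python
import numpy as np

def minmax(s):
    """Min-max normalize scores to [0, 1] so modalities are comparable."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical match scores for four enrolled subjects, one probe
face_scores = np.array([12.0, 30.0, 22.0, 5.0])    # matcher A's raw scale
palm_scores = np.array([0.40, 0.90, 0.55, 0.20])   # matcher B's raw scale
fused = 0.5 * minmax(face_scores) + 0.5 * minmax(palm_scores)
print(int(fused.argmax()))  # → 1 (subject index with the highest fused score)
```

The normalization step matters more than the combination rule: without it, whichever matcher uses the larger numeric scale silently dominates the sum.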
An Approach to Speech and Iris based Multimodal Biometric System (IJEEE)
Biometrics is the science and technology of identifying and verifying humans using feature sets extracted from the biological data of the individual to be recognized. Unimodal and multimodal systems are the two types developed so far. Unimodal biometric systems use a single biometric trait but face performance limitations due to noise in the data, intra-class variations, and spoof attacks. These problems can be resolved by multimodal biometrics, which rely on more than one source of biometric information to produce better recognition results. This paper presents an overview of multimodal biometrics and the various fusion levels used in them, and suggests using iris and speech with score-level fusion for a multimodal biometric system.
A Smart Receptionist Implementing Facial Recognition and Voice Interaction (CSCJournals)
The purpose of this research is to implement a smart receptionist system with facial recognition and voice interaction using deep learning. The facial recognition component is implemented using real-time image processing techniques, and it can learn new faces as well as detect and recognize existing ones. The first time a customer uses the system, it captures the person's facial data to create a unique user facial model, and this model is triggered when the person returns. Recognition is done in real time, after which voice interaction is applied. Voice interaction provides life-like human communication and improves the user experience. Our proposed smart receptionist system could be integrated into self check-in kiosks deployed in hospitals or smart buildings to streamline user recognition and provide customized interactions. The system could also be used in smart home environments where smart cameras are deployed and voice assistants are in place.
Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper provides a definitional framework for the Human Biometric-Sensor Interaction (HBSI) model, proposing six new definitions based on two classifications of presentations, erroneous and correct. The new terms are: defective interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and successfully acquired sample (SAS). As with all definitions, the new terms require a modification to the general biometric model.
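To make the six categories concrete, here is one hypothetical way to assign them from three observed flags (was the presentation correct, did the system detect it, were features extracted). The real HBSI decision logic is richer than this, so treat the mapping as an illustrative assumption, not the model itself.

```python
def hbsi_outcome(correct_presentation, detected, extracted):
    """Toy mapping of three observed flags onto the six HBSI outcome labels."""
    if not correct_presentation:
        if not detected:
            return "DI"   # defective interaction: erroneous and never detected
        # detected: did the system nonetheless acquire it, concealing the error?
        return "CI" if extracted else "FI"
    if not detected:
        return "FTD"      # correct presentation, system failed to detect
    if not extracted:
        return "FTX"      # detected, but feature extraction failed
    return "SAS"          # successfully acquired sample

print(hbsi_outcome(True, True, True))   # → SAS
print(hbsi_outcome(True, True, False))  # → FTX
```

Splitting errors by who caused them (the user's presentation versus the system's processing) is what lets the HBSI framework assign blame that the classic FTA/FTE rates lump together.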
A Bring Your Own Device Risk Assessment Model (CSCJournals)
Bring Your Own Device (BYOD), a practice in which individuals or employees use their own devices on an organization's network to perform tasks assigned to them, has been widely embraced, and the reasons for adoption differ across organizations. In spite of the security control strategies these organizations implement to safeguard their information resources, there has been an upsurge in information security breaches as a result of vulnerabilities in these systems and the legacy systems in use. Various approaches have been employed to deal with security challenges in BYOD, but according to the literature, risk assessment has proved to be the first key step towards improving the security of the BYOD environment in an enterprise. Risk assessment models have been proposed by various researchers, although most are largely shaped by the degree of technological advancement and utilization, as well as the working cultures, within institutions. The existing models were largely developed in technologically advanced countries and thus do not fit well in developing countries. This study sought to develop a flexible BYOD risk assessment model that varied institutions can adopt to secure their information resources. The study was carried out in five purposively selected state universities in Kenya. The research adopted a mixed research design with mixed sampling techniques used to select the participants. The reliability and validity of the data collection tools were evaluated and endorsed by IT security and network experts. Qualitative and quantitative data were collected by interviewing experts and administering a questionnaire to the sampled participants. The developed model was validated both statistically and by experts.
The findings revealed that threats and vulnerabilities contributed 39.9% and 69.2%, respectively, to the risk of the BYOD environment, while Data Encryption (DE) and Software Updates (SU) came out strongly as intervening variables with a major impact on the relationship between the dependent and independent variables.
Research has shown that, for some age groups, the quality of fingerprints can impact the performance of biometric systems. A desirable feature of biometrics is suitability for use across the population. This applied study examines the performance of a fingerprint recognition system in a healthcare environment. Anecdotal evidence suggested that front-line healthcare workers may produce lower image quality due to continual hand washing, which may remove oils from the skin; during training, individuals are told to add oil to their fingers by wiping oil from their foreheads to improve the resulting quality of the fingerprints. In the healthcare population the authors tested, compared to two general populations (collected on optical and capacitance sensors), there was a significant difference in skin oiliness, but not in image quality. There was a difference between healthcare and non-healthcare groups in the performance of the fingerprint algorithm when compared against the capacitance dataset.
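A group comparison like the skin-oiliness result above can be sketched with a two-sample test. The oiliness readings below are fabricated for illustration; the study's measurement scale, group sizes, and actual values are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated oiliness readings: healthcare group assumed lower than general
healthcare = rng.normal(loc=40.0, scale=8.0, size=60)
general = rng.normal(loc=55.0, scale=8.0, size=60)

# Welch's t-test (no equal-variance assumption between the two groups)
t_stat, p_value = stats.ttest_ind(healthcare, general, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.3g}")
```

Note that a significant difference in oiliness need not imply one in image quality, which is exactly the dissociation the study reports.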
Research has shown for some age groups, quality of fingerprints can impact the performance of biometric systems. A
desirable feature of biometrics is that they are suitable for use across the population. This applied study examines the performance of a fingerprint recognition system in a healthcare environment. Anecdotal evidence suggested front line healthcare workers may have lower image quality due to continued hand washing which may remove oils from their skin. During training, individuals are told to add oil to their fingers by wiping oil from their foreheads to improve the resulting quality of the
fingerprints. In the healthcare population the authors tested, compared to two general populations (collected on optical and
capacitance sensors) there was a significant difference in skin oiliness, but not in image quality. There was a difference across
healthcare and non-healthcare groups in the performance of the fingerprint algorithm when compared against the capacitance
dataset.
IMAGE QUALITY ASSESSMENT FOR FAKE BIOMETRIC DETECTION: APPLICATION TO IRIS, F...ijiert bestjournal
In this Paper,the actual presence of a real legitimate trait in contrast to a fake self - manufactured synthetic or reconstructed sample is a significant problem in biometric authentication,which requires the development of new and efficient protection measures. In this paper,we present a novel software - based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The obje ctive of the proposed system is to enhance the security of biometric recognition frameworks,by adding livens assessment in a fast,user - friendly,and non - intrusive manner,through the use of image quality assessment. The proposed approach presents a very low degree of complexity,which makes it suitable for real - time applications,using 25 general image quality features extracted from one image (i.e.,the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results,obtained on publicly available data sets of fingerprint,iris,and 2D face,show that the proposed method is highly competitive compared with other state - of - the - art approaches and that the analysis of the general image quality of rea l biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
This study evaluated the effectiveness quality of fingerprint images obtained from two populations interacting with two commercially available fingerprint recognition sensors (one capacitance, one optical). Currently, there is a lack of information within the biometric community pertaining to fingerprint image quality and an elderly population. Therefore, this study specifically examined the fingerprint image quality of an elderly population (55+) to that of an 18-25 year old population baseline. Key individual variables that may have an influence on image quality were collected and examined. These variables included age, gender, thnicity, handedness, moisture content of
each index finger, occupation(s)/hobby(ies), use of hand moisturizers, and prior use of
fingerprint devices.
Fake Multi Biometric Detection using Image Quality Assessmentijsrd.com
In the recent era where technology plays a prominent role, persons can be identified (for security reasons) based on their behavioral and physiological characteristics (for example fingerprint, face, iris, key-stroke, signature, voice, etc.) through a computer system called the biometric system. In these kinds of systems the security is still a question mark because of various intruders and attacks. This problem can be solved by improving the security using some efficient algorithms available. Hence the fake person can be identified if he/she uses any synthetic sample of an authenticated person and a fake person who is trying to forge can be identified and authenticated.
To ensure that the object presented in front of biometric device is real or reconstructed sample is a significant
problem in biometric authentication, which requires the development of new and efficient protection measures. This
paper, presents a software-based fake biometric detection method that can be used in multiple biometric systems to
detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of
biometric recognition devices through the use of image quality assessment in a fast and user friendly manner. The
proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using
25 general image quality features extracted from one image to distinguish between real and imposed samples. The
proposed method is highly competitive compared with other as the analysis of the general image quality of real
biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake
traits.
An SVM based Statistical Image Quality Assessment for Fake Biometric DetectionIJTET Journal
Abstract
A biometric system is a computer based system and is used to identify the person on their behavioral and logical characteristics such as (for example fingerprint, face, iris, keystroke, signature, voice, etc.).A typical biometric system consists of feature extraction and matching patterns. But nowadays biometric systems are attacked by using fake biometric samples. This paper described the fingerprint biometric techniques and also introduce the attack on that system and by using Image Quality Assessment for Liveness Detection to know how to protect the system from fake biometrics and also how the multi biometric system is more secure than uni-biometric system. Support Vector Machine (SVM) classification technique is used for training and testing the fingerprint images. The testing onput fingerprint image is resulted as real and fake fingerprint image by quality score matching with the training based real and fake fingerprint samples.
IRJET-Gaussian Filter based Biometric System Security EnhancementIRJET Journal
M.Selvi, T.Manickam, C.N.Marimuthu"Gaussian Filter based Biometric System Security Enhancement", International Research Journal of Engineering and Technology (IRJET), Volume2,issue-01 April 2015.e-ISSN:2395-0056, p-ISSN:2395-0072. www.irjet.net
Abstract
A novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. To enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment.
The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. Multi-biometric and Multi-attack protection method which targets to overcome part of these limitations through the use of Image Quality Assessment (IQA).
Moreover, being software-based, it presents the usual advantages of this type of approaches: fast, as it only needs one image (i.e., the same sample acquired for biometric recognition) to detect whether it is real or fake, non-intrusive; user-friendly (transparent to the user), cheap and easy to embed in already functional systems and no hardware is required).
Role of fuzzy in multimodal biometrics systemKishor Singh
Person identification is possible through biometrics using physiological and behavioral characteristics such as face, ear, thumbprint, voice, signature, and keystroke. Unimodal biometric systems face a range of problems, including noisy data, intra-class variations, non-universality, spoof attacks, and unacceptable error rates. Some of these drawbacks can be overcome by multimodal biometric technologies, which incorporate data from various information sources. In this paper we work on multimodal biometrics using three modalities (face, ear, and foot) to find optimal results using a fuzzy fusion mechanism, which produces the final identification decision via fuzzy rules that enhance the quality of the multimodal biometric system.
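A toy sketch of the fuzzy fusion idea above, for three modality match scores. The membership function and the single accept rule are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def high(score):
    """Degree to which a normalized match score counts as 'high'."""
    return tri(score, 0.4, 1.0, 1.6)

def fuse(face, ear, foot):
    """Fire the rule 'IF all scores are high THEN accept' using the
    min operator for fuzzy AND, then threshold the rule strength."""
    strength = min(high(face), high(ear), high(foot))
    return "accept" if strength >= 0.5 else "reject"

print(fuse(0.9, 0.8, 0.85))  # all modalities agree -> accept
print(fuse(0.9, 0.3, 0.85))  # one weak modality -> reject
```

A full fuzzy system would carry several rules and defuzzify their aggregated output; the min-then-threshold rule here is the smallest example of that mechanism.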
New Research Articles, October 2019 Issue, Signal & Image Processing: An International Journal (sipij)
Signal & Image Processing: An International Journal (SIPIJ)
ISSN: 0976-710X [Online]; 2229-3922 [Print]
http://www.airccse.org/journal/sipij/index.html
Current Issue: October 2019, Volume 10, Number 5
Free-Reference Image Quality Assessment Framework Using Metrics Fusion and Dimensionality Reduction
Besma Sadou1, Atidel Lahoulou2, Toufik Bouden1, Anderson R. Avila3, Tiago H. Falk3 and Zahid Akhtar4, 1Non Destructive Testing Laboratory, University of Jijel, Algeria, 2LAOTI laboratory, University of Jijel, Algeria, 3University of Québec, Canada and 4University of Memphis, USA
Test-cost-sensitive Convolutional Neural Networks with Expert Branches
Mahdi Naghibi1, Reza Anvari1, Ali Forghani1 and Behrouz Minaei2, 1Malek-Ashtar University of Technology, Iran and 2Iran University of Science and Technology, Iran
Robust Image Watermarking Method using Wavelet Transform
Omar Adwan, The University of Jordan, Jordan
Improvements of the Analysis of Human Activity Using Acceleration Record of Electrocardiographs
Itaru Kaneko1, Yutaka Yoshida2 and Emi Yuda3, 1&2Nagoya City University, Japan and 3Tohoku University, Japan
http://www.airccse.org/journal/sipij/vol10.html
Seminar Report on Error Handling Methods Used in Bio-Cryptography (kanchannawkar)
Detailed information about real-time errors in biometric devices and how to secure encryption keys, with the aim of making authentication systems more secure. This seminar report describes the combination of biometrics with cryptography, along with the methods used to handle real-time errors such as false accept and false reject and their rates, i.e., FAR and FRR, in biometric systems.
Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ... (CSCJournals)
Identification of a person using multiple biometrics is a very common approach in existing user validation systems. Most multibiometric systems depend on fusion schemes, and many fusion techniques have shown promising results in the literature by combining multiple biometric modalities with suitable fusion schemes. Similar practices are found in ensembles of classifiers, which increase classification accuracy by combining different types of classifiers. In this paper, we present a comparative study of traditional fusion methods, such as feature-level and score-level fusion, against well-known ensemble methods such as bagging and boosting. Specifically, for our framework experiments we fused face and palmprint modalities and employed a probability model (Naive Bayes, NB), a neural network model (Multi-Layer Perceptron, MLP), and a supervised machine learning algorithm (Support Vector Machine, SVM) as classifiers. The machine learning ensemble approaches, boosting and bagging, are statistically well recognized. From the experimental results, the traditional score-level fusion method is the more highly recommended strategy for biometric fusion than ensemble learning techniques.
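Score-level fusion, the strategy the abstract above recommends, can be sketched in a few lines: each matcher's raw score is min-max normalized and the normalized scores are combined with a weighted sum. The score ranges and weights here are illustrative assumptions, not values from the paper.

```python
def minmax(score, lo, hi):
    """Min-max normalize a raw matcher score to [0, 1]."""
    return (score - lo) / (hi - lo)

def score_level_fusion(face_score, palm_score, w_face=0.5):
    # The two matchers report scores on different raw scales,
    # so normalize each before combining.
    f = minmax(face_score, lo=0.0, hi=100.0)
    p = minmax(palm_score, lo=0.0, hi=1.0)
    return w_face * f + (1.0 - w_face) * p

fused = score_level_fusion(face_score=82.0, palm_score=0.74)
print(f"fused score: {fused:.2f}")  # 0.78
```

The fused value is then compared against a single decision threshold, which is what makes score-level fusion simpler to deploy than feature-level fusion or a trained ensemble.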
An Approach to Speech and Iris based Multimodal Biometric System (IJEEE)
Biometrics is the science and technology of human identification and verification through the use of feature sets extracted from the biological data of the individual to be recognized. Unimodal and multimodal systems are the two types of systems developed so far. Unimodal biometric systems use a single biometric trait but face limitations in system performance due to the presence of noise in the data, intra-class variations, and spoof attacks. These problems can be resolved by multimodal biometrics, which relies on more than one piece of biometric information to produce better recognition results. This paper presents an overview of multimodal biometrics and the various fusion levels used in them, and suggests the use of iris and speech with score-level fusion for a multimodal biometric system.
A Smart Receptionist Implementing Facial Recognition and Voice Interaction (CSCJournals)
The purpose of this research is to implement a smart receptionist system with facial recognition and voice interaction using deep learning. The facial recognition component is implemented using real time image processing techniques, and it can be used to learn new faces as well as detect and recognize existing faces. The first time a customer uses this system, it will take the person’s facial data to create a unique user facial model, and this model will be triggered if the person comes the second time. The recognition is done in real time and after which voice interaction will be applied. Voice interaction is used to provide a life-like human communication and improve user experience. Our proposed smart receptionist system could be integrated into the self check-in kiosks deployed in hospitals or smart buildings to streamline the user recognition process and provide customized user interactions. This system could also be used in smart home environment where smart cameras have been deployed and voice assistants are in place.
Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper provides a definitional framework for the Human-Biometric Sensor Interaction (HBSI) model, proposing six new definitions based on two classifications of presentations: erroneous and correct. The new terms are defective interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and successfully acquired samples (SAS). As with all definitions, the new terms require a modification to the general biometric model.
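The six HBSI presentation outcomes defined above can be written as a simple enumeration, such as one might use when logging presentations. The class itself is a hypothetical helper; the six terms and abbreviations come from the paper.

```python
from enum import Enum

class HBSIOutcome(Enum):
    """The six presentation outcomes of the HBSI model."""
    DEFECTIVE_INTERACTION = "DI"
    CONCEALED_INTERACTION = "CI"
    FALSE_INTERACTION = "FI"
    FAILURE_TO_DETECT = "FTD"
    FAILURE_TO_EXTRACT = "FTX"
    SUCCESSFULLY_ACQUIRED = "SAS"

print([outcome.value for outcome in HBSIOutcome])
```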
A Bring Your Own Device Risk Assessment Model (CSCJournals)
Bring Your Own Device (BYOD), a technology where individuals or employees use their own devices on the organization's network to perform tasks assigned to them by the organization, has been widely embraced. The reasons for adoption are diverse in every organization. In spite of the security control strategies implemented by these organizations to safeguard their information resources, there has been an upsurge in information security breaches as a result of existing vulnerabilities in these systems and the legacy systems in use. Various approaches have been employed to deal with security challenges in BYOD, but according to the literature, risk assessment has proved to be the first key step towards improving security of the BYOD environment in an enterprise. Risk assessment models have been proposed by various researchers, although most are largely influenced by the degree of technological advancement and utilization as well as the working cultures within institutions. The existing models were largely developed in technologically advanced countries and thus do not fit well in developing countries. This study sought to develop a flexible BYOD risk assessment model that can be adopted by varied institutions to secure their information resources. The study was carried out in five purposively selected state universities in Kenya. The research adopted a mixed research design, with a mixed sampling technique utilized to select the participants. Reliability and validity of the data collection tools were evaluated and recommended by IT security and network experts. The qualitative and quantitative data were collected by interviewing experts and administering a questionnaire to sampled participants. The developed model was validated both statistically and by experts.
The findings revealed that threats and vulnerabilities contributed to 39.9% and 69.2% respectively to the risk of the BYOD environment while Data Encryption (DE) and Software Updates (SU) came out strongly as intervening variables which have a major impact on the relationship between the dependent and independent variables.
Research has shown that, for some age groups, the quality of fingerprints can impact the performance of biometric systems. A desirable feature of biometrics is suitability for use across the population. This applied study examines the performance of a fingerprint recognition system in a healthcare environment. Anecdotal evidence suggested front-line healthcare workers may have lower image quality due to continued hand washing, which may remove oils from their skin. During training, individuals are told to add oil to their fingers by wiping oil from their foreheads to improve the resulting quality of the fingerprints. In the healthcare population the authors tested, compared to two general populations (collected on optical and capacitance sensors), there was a significant difference in skin oiliness, but not in image quality. There was a difference across healthcare and non-healthcare groups in the performance of the fingerprint algorithm when compared against the capacitance dataset.
This paper reports the correlations between skin characteristics, such as moisture, oiliness, elasticity, and temperature of the skin, and fingerprint image quality across three sensing technologies. Fingerprint images from the index finger of the dominant hand of 190 individuals were collected on nine different fingerprint sensors: four capacitance sensors, four optical sensors, and one thermal fingerprint sensor. Skin characteristics, including temperature, moisture, oiliness, and elasticity, were measured prior to the initial interaction with each of the individual sensors. The analysis of the full dataset indicated that sensing technology and interaction type (swipe or touch) were moderately and weakly correlated, respectively, with image quality scores. Correlation analyses between image quality scores and the skin characteristics were also performed on subsets of the data, divided by sensing technology. The results did not identify any significant correlations, indicating that further work is necessary to determine the type of relationship between the variables and how they impact image quality and matching performance.
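A minimal sketch of the kind of correlation analysis described above: Pearson correlation between one skin characteristic (moisture) and image quality scores. All readings below are made-up illustrative values, not data from the study.

```python
import numpy as np

# Hypothetical paired measurements for eight subjects.
moisture = np.array([31.0, 45.0, 52.0, 38.0, 60.0, 47.0, 55.0, 41.0])
quality = np.array([3.0, 2.0, 2.0, 3.0, 1.0, 2.0, 1.0, 3.0])  # lower = better in this toy scale

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(moisture, quality)[0, 1]
print(f"Pearson r = {r:.2f}")
```

Significance would then be judged against the sample size (e.g. via a t-test on r); the study above ran such analyses per sensing technology and found no significant correlations.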
Assessment and Improvement of Image Quality using Biometric Techniques for Fa... (ijceronline)
Biometrics is broadly used in forensics, highly secured access control, and prison security. Such a system recognizes a person by determining authentication from biological and physiological features such as fingerprint, retina scan, iris scan, and face recognition. The determination of the characteristic function of quality and match scores shows that a careful selection of complementary sets of quality metrics can provide much greater benefit to biometric quality. Face recognition is a challenging approach in image quality analysis and many security applications. Biometric face recognition is a well-known technology used in government and civilian applications such as Aadhaar cards and PAN cards. Face recognition relies on behavioral and physiological features of a human being. Nowadays the quality of a biometric image is a major concern. Many factors directly or indirectly affect image quality, so improvement in image quality has to be achieved using biometric techniques for face recognition. This paper presents some important techniques for fake biometric detection and improvement of facial image quality.
The increasing use of distributed authentication architectures has made interoperability of systems an important issue. Interoperability affects the maturity of the technology and also improves users' confidence in it. Biometric systems are not immune to interoperability concerns. The interoperability of fingerprint sensors and its effect on the overall performance of the recognition system is an area of interest, with a considerable amount of work directed towards it. This research analyzed the effects of interoperability on error rates for fingerprint datasets captured from two optical sensors and a capacitive sensor when using a single commercially available fingerprint matching algorithm. The main aim of this research was to emulate a centralized storage and matching architecture with multiple acquisition stations. Fingerprints were collected from 44 individuals on all three sensors, and interoperable False Reject Rates of less than 0.31% were achieved using two different enrolment strategies.
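A False Reject Rate like the one reported above is the fraction of genuine attempts whose match score falls below the decision threshold. The scores and threshold in this sketch are hypothetical, not data from the study.

```python
# Match scores from genuine (same-person) comparison attempts.
genuine_scores = [72, 88, 95, 64, 91, 79, 85, 70, 93, 58]
threshold = 60  # attempts scoring below this are (falsely) rejected

false_rejects = sum(score < threshold for score in genuine_scores)
frr = false_rejects / len(genuine_scores)
print(f"FRR = {frr:.2%}")  # FRR = 10.00%
```

In an interoperability study the genuine scores come from cross-sensor comparisons (enrolled on one device, verified on another), which is what makes the resulting FRR an interoperable error rate.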
BSI Biometrics Standards Presentation.
View BSI’s presentation about biometric standards, and get an overview of biometrics and identity management, and standards development for biometrics.
This is a complete report on biometrics and fingerprint detection. It includes what a fingerprint is, how to scan and refine a fingerprint, how its detection mechanism works, applications, etc.
Integrating Fusion Levels for Biometric Authentication System (IOSRJECE)
Recently, many works have been presented in the literature on multimodal biometric authentication, and such biometric systems have been widely accepted, with increasing accuracy rates and population coverage reducing the vulnerability to spoofing. This paper describes a proposed multimodal biometric system that combines feature-extraction-level and score-level fusion of iris and face unimodal biometric systems in order to take advantage of both fusion techniques. The experimental results show the performance of the multimodal and multilevel fusion techniques, which are analysed using TRR and TAR to study the recognition behaviour of the approach. From the plotted ROC curve, the performance of the proposed system is better than that of the individual fusion techniques.
This research focused on classifying Human-Biometric Sensor Interaction errors in real time. The Kinect 2 was used as a measuring device to track the position and movements of the subject through a simulated border control environment. Knowing, in detail, the state of the subject ensures that the human element of the HBSI model is analyzed accurately. A network connection was established with the iris device to know the state of the sensor and biometric system elements of the model. Information such as detection rate, extraction rate, quality, capture type, and other metrics was available for use in classifying HBSI errors. A Federal Inspection Station (FIS) booth was constructed to simulate a U.S. border control setting in an international airport. The subjects were taken through the process of capturing iris and fingerprint samples in an immigration setting. If errors occurred, the Kinect 2 program would classify the error and save it for further analysis.
IT 34500 is an undergraduate course offered to Purdue West Lafayette students. The course gives an introduction into biometrics and automatic identification and data capture technologies
The human signature provides a natural, publicly accepted, and legally admissible method for providing authentication to a process. Automatic biometric signature systems assess both the drawn image and the temporal aspects of signature construction, providing enhanced verification rates over and above conventional outcome assessment. Capturing these constructional data requires the use of specialist 'tablet' devices. In this paper we explore the enrolment performance of a range of common signature capture devices and investigate the reasons behind user preference. The results show that writing feedback and familiarity with conventional 'paper and pen' donation configurations are the primary motivations for user preference. These results inform the choice of signature device from both technical performance and user acceptance viewpoints.
The inherent differences between secret-based authentication (such as passwords and PINs) and biometric authentication have left gaps in the credibility of biometrics. These gaps are due, in large part, to the inability to adequately cross-compare the two types of authentication. This paper provides a comparison between the two types of authentication by expressing biometric entropy in the same way the entropy of secrets is represented. Similar to the method used by Ratha, Connell, and Bolle [1], the x and y dimensions of the fingerprints were examined to determine all possible locations of minutiae. These locations were then examined based on the observed probability of minutiae occurring in each of the designated locations. The results of this work show statistically significant differences in the frequencies and probabilities of occurrence for minutiae location, type, and angle, across all possible minutiae locations. These components were applied to Shannon's Information Theory [2] to determine the entropy of fingerprint biometrics, which was estimated to be equivalent to an 8.3-character, randomly chosen password.
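The entropy estimate above follows Shannon: sum -p log2 p over the observed probabilities of a minutia occurring at each location. The four-cell probability grid below is a made-up toy, far coarser than the per-location analysis in the paper.

```python
import math

# Hypothetical minutia occurrence probabilities over a 4-cell grid.
p = [0.4, 0.3, 0.2, 0.1]
assert abs(sum(p) - 1.0) < 1e-9  # probabilities must sum to 1

# Shannon entropy in bits: H = -sum(p_i * log2(p_i)).
entropy_bits = -sum(pi * math.log2(pi) for pi in p)
print(f"entropy = {entropy_bits:.3f} bits")
```

The paper additionally folds minutia type and angle probabilities into the calculation before converting the total entropy into an equivalent random-password length.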
In this research, intra-visit match score stability was examined for the human iris. Scores were found to be statistically stable in this short time frame.
A lot of work done in the Center recently has focused on different topics concerning "time". Iris stability across different "times" has been at the forefront due to work in the undergraduate class IT345, the graduate class IT545, and Ben Petry's thesis. Of course, "time" is a fairly imprecise word: stating that stability is assessed "over time" leaves the research question ambiguous, since time may mean milliseconds, months, years, or even the life of the user. Upon further examination of the academic literature, the reporting of research duration, collection interval, and the specific time frame of interest is sporadic at best and missing completely at worst. To solve this issue, the Center has created the biometric duration scale (BDS) model with associated suggested best practices for reporting time duration in biometrics.
The BDS model marries the general biometric model with the HBSI model to create a logical flow of five phases: the presentation definition phase, sample phase, processing phase, and enrollment or matching phase. By tracking information through this progression, such as specific subject presentations made, HBSI errors, and FTE/enrollment scores (to name a few), performance within the general biometric model can be examined. The BDS model goes one step further by defining specific durations for reporting research-specific metrics. With this model, outcomes that affect yearly performance metrics can be examined through monthly performance, daily performance, or even specific user presentations, and how those subcomponents affect the whole system.
Additionally, best practices for the reporting of duration are also included. The reporting methodology stems from ISO 8601 and is in compliance with ISO 21920. In the common reporting structure, the start date, duration, number of visits at how many intervals, and time scope of interest for the specific research are given in a logical, readily available format, along with the very specific, detailed ISO 8601 methodology. The goal of creating a formal script for reporting research duration was to eliminate ambiguity and create an environment where replication and drawing parallels between research efforts are encouraged.
The stability score index, conceptualized in 2013, was designed to address the weaknesses of the zoo menagerie and other performance metrics by quantifying the relative stability of a user from one condition to another. In this paper, the measure of interoperability is the stability score from enrolling on one sensor and verifying on multiple sensors. The results showed that, like performance, individual stability was not consistent across these sensors. When examining stability by sensor family (capacitance, optical, and thermal), we find that capacitive enrollment sensors were the least stable. When both enrolling and verifying on a thermal sensor, individuals were the most stable of the three family types. With respect to interaction type, enrolling on touch and verifying on swipe was more stable than enrolling on swipe and verifying on swipe, which was an interesting finding. Individuals using the thermal sensor generated the most stable stability scores.
According to a report by Frost and Sullivan in 2007, revenues for non-AFIS fingerprint devices in notebook PCs and wireless devices are anticipated to grow from $148.5 million to $1,588.0 million by 2014, a compound annual growth rate of 40.3% [1]. The AFIS market has a compound annual growth rate of 15.2%, with revenues of $445.0 million in 2007. With the development of mobile applications in a number of different market segments, such as healthcare, retail, and law enforcement, this paper analyzed the performance of fingerprints of different sizes, from different sensors...
This is a preview of the databases we use in the Center. The presentation overviews our data collection GUI, data storage (data warehouse), and our project management database. These databases work together to allow us to run our operations efficiently.
Effect of human-biometric sensor interaction
Stephen J. Elliott is an Associate Professor and Assistant Department Head of the Department of
Industrial Technology at Purdue University and serves as the Director of the Biometric
Standards, Performance and Assurance Lab (BSPA). His main interests include biometrics,
security and biometric performance, as well as pedagogical research in distance education.
In those areas, he has published more than 15 journal articles and 35 conference
proceedings. He is the Vice Chair of Ballots and Membership for the INCITS M1 Biometrics
Committee and has served on the ISO/IEC JTC 1/SC 37 biometrics subcommittee.
1 Introduction

Biometric technology is defined as the automated recognition of behavioural and physiological characteristics of an individual (International Organization for Standardization, 2007). An important question that has received insufficient attention has to do with how individuals should interact with a biometric device to obtain the most suitable samples for matching with that particular system. A non-exhaustive list of factors that would likely influence users' behaviour includes information about the intended users themselves and their knowledge about the system, the environment, the application and the design of the system/device/sensor. Ultimately, the success of biometric technologies relies on the sensors' ability to collect and extract the biometric characteristics from different individuals. If most individuals experience failures during interaction, these failures may cause individuals, organisations and governments to seek other security technologies.

The authors have undertaken the task of answering this important question through researching the human-biometric sensor interaction (HBSI). HBSI is an interdisciplinary research area within the field of biometrics that focuses on the interaction between the user and the biometric system, to better understand how individuals use biometric devices and to uncover the issues and errors users knowingly and unknowingly generate when attempting to use a particular biometric system (Elliott et al., 2007; Kukula, 2007; Kukula, 2008; Kukula et al., 2008; Kukula and Elliott, 2006; Kukula et al., 2007a, 2007b, 2007c). This research area attempts to understand the tasks, movements and behaviours users execute when they encounter different biometric modalities. It frames a challenge for the biometrics community: while the algorithms are continually improving, there remain individuals who cannot successfully interact with the biometric sensor(s) or provide the system with images or samples of sufficient quality to achieve satisfactory results. Therefore, the goal of HBSI research is to understand user movements, behaviours and problems so as to modify the user's interaction with the sensor through training and education, with the objective of capturing better quality samples, or to recommend design alterations for biometric devices, processes or systems that better accommodate user limitations and so reduce the quantity of unusable or unacquired biometric samples.

2 Motivation and previous literature

The motivation for this research is to determine the impact of human interaction with fingerprint sensors and the implications for image quality and subsequent algorithm performance. The significance of user interaction with various fingerprint recognition sensor technologies is apparent, given that fingerprint recognition is the most widely used of the biometric technologies, with popular applications in law enforcement [e.g., the integrated automated fingerprint identification system (IAFIS)], access control, time and attendance recordkeeping and personal computer/network access. The current biometrics industry report published by the IBG (2006) states that fingerprint recognition holds approximately 44% of the biometric market. Traditionally, this high market share has been attributed to law enforcement applications, but over the last two years, the list of applications for fingerprint recognition technologies has grown tremendously. This expansion is due, in part, to the rapid evolution of sensors and the expansion of applications beyond law enforcement and computer desktop single sign-on solutions to personal digital assistants, mobile phones, laptop computers, desktop keyboards, mice and universal serial bus (USB) flash media drives, to name a few. In particular, the growth (in terms of volume) of one fingerprint vendor's sales reached new highs in fiscal year 2006 – shipping one million sensors between 1998–2003, four million between 2003–2005 and five million sensors in 2006 (Burke, 2006). As fingerprint recognition applications continue to become more pervasive, the biometric community must not only appreciate the impact that different types of human interaction have on the performance of the biometric system, but also examine whether there are differences in human interaction characteristics and system performance among the various fingerprint sensing technologies.

Original work by Kang et al. (2003) examined finger force and indicated that force does have an impact on quality, but did not specify quantitative measures. Instead, this research classified force as low (softly pressing), middle (normally pressing) and high (strongly pressing). Edwards et al. (2006) noted the relationship between finger contact area, pressure applied and other physical characteristics, and stated that by analysing the finger pressure and contact area, it is possible to enhance fingerprint systems. Based on the work of Kang et al. (2003), the authors conducted experiments in Kukula et al. (2007) to quantitatively assess the impact of fingerprint pressing force on both image quality and the number of detected minutiae, as it is documented that image quality has an impact on the performance of biometric matching algorithms (Jain et al., 2005; Modi and Elliott, 2006; Tabassi and Wilson, 2005; Yao et al., 2004). Results of Kukula et al. (2007) revealed that there was no incremental benefit in terms of image quality when using more than 9N when interacting with an optical fingerprint sensor in one particular experiment. A second experiment further investigated the 3N–9N interval, with results indicating that optimal image quality scores were obtained with a subject pool of 43 people in the 5N–7N force level range. However, it should not be assumed that this force level range is optimal for other fingerprint sensing technologies, offering yet additional research opportunities.

3 The authors' approach

The purpose of this research was to perform a comparative evaluation of optical and capacitance fingerprint sensors to determine if fingerprint sensing technologies are affected by finger force, as discussed in Kang et al. (2003) and Kukula et al. (2007). The research conducted in this paper followed the methodology of experiment 2 in Kukula et al. (2007), but was modified to include the capacitance sensor. Five force levels were used and measured in Newtons (N): 3N, 5N, 7N, 9N and 11N. The force levels were measured with a dual-range force sensor. Interaction was limited to the subject's right index finger for both sensors to minimise the variability of measurement relative to dexterity and finger size. Variability that naturally occurs between individuals was treated as an uncontrollable factor. Once the fingerprint samples were collected, the prints were analysed using commercially available quality analysis software. The following variables were reported by the software: image quality score, minutiae count and the number of core(s)/delta(s). The image quality score ranged from 0–99, with zero (0) being the lowest possible quality score and 99 being the highest possible quality score. Fingerprint feature extraction and feature matching were performed using the Neurotechnologija VeriFinger 5.0 algorithm. Several different metrics can be used for analysing the matching performance of a dataset; the authors used false non-match rates (FNMR) and false match rates (FMR) to determine the performance of the different force levels. A combined graphical representation of FNMR and FMR can be created using detection error trade-off (DET) curves, which indicate a combination of FNMR and FMR at every possible threshold value of the fingerprint matcher. DET curves were created for fingerprints captured at each force level, and then DET curves were created for the fingerprint datasets that resulted from combining every possible pair of force levels. This methodology was performed separately for fingerprints collected from the optical and capacitive sensors.

4 Experimental design

To analyse the results, both parametric and non-parametric analysis of variance methods were used, based solely on the model assumptions and the resulting diagnostics for the image quality scores and the number of detected minutiae. Analysis of variance methods, which compare the effect of multiple levels of one factor (force) on a response variable (image quality, number of minutiae), are a generalisation of the two-sample t-test.

4.1 Parametric – number of detected minutiae

The parametric method is known as analysis of variance or ANOVA. Parametric tests, like their non-parametric counterparts, involve hypothesis testing, but parametric tests require a stringent set of assumptions that must be met (NIST/SEMATECH, 2006). The ANOVA is partitioned into two segments: the variation that is explained by the model (1) and the variation that is not explained (the error) (2), which are both used to calculate the F-statistic (3) testing the hypotheses Ho: µ1 = µ2 = … = µI and Ha: not all µ's are the same. In practice, p-values are used, but the observed F test statistic can also be compared to the F distribution table, as shown in (4). Typically, when Ho is rejected, the variation of the model (SSM) tends to be larger than the error (SSE), which corresponds to a larger F value. The number of detected minutiae was analysed using this methodology:

    SSM = Σᵢ (Ŷᵢ − Ȳ)²,   dfM = 1,       MSM = SSM / dfM    (1)

    SSE = Σᵢ (Yᵢ − Ŷᵢ)²,  dfE = n − 2,   MSE = SSE / dfE    (2)

    F = MSM / MSE ~ F(dfM, dfE)                              (3)

    F ≥ F(1 − α, dfM, dfE)                                   (4)

4.2 Non-parametric – image quality score

According to Montgomery (1997), in situations where normality assumptions fail to be met, alternative statistical methods to the F-test analysis of variance can be used. Non-parametric methods are those that are distribution-free and are typically used when measurements are categorical, parametric model assumptions cannot be met or the analysis requires investigation into features such as randomness, independence, symmetry or goodness of fit, rather than testing hypotheses about values of population parameters (NIST/SEMATECH, 2006).

One of the more common non-parametric methods was developed by Kruskal and Wallis (1952, 1953). The Kruskal-Wallis test examines the equality of medians for two or more populations and examines the hypotheses Ho (the population medians are all equal) and Ha (the medians are not all the same), with the assumption that samples taken from different populations are independent random samples from continuous distributions with similar shapes (Minitab, 2000). The Kruskal-Wallis test computes the H
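The ANOVA partition of equations (1)–(3) is straightforward to compute directly. The sketch below is a minimal pure-Python illustration, not the authors' implementation: the minutiae counts are invented, and it uses the usual one-factor degrees of freedom (k − 1 between groups, N − k within) for k force levels.

```python
# One-way ANOVA sketch: split total variation into model (SSM) and error
# (SSE) sums of squares, then form F = MSM/MSE as in equations (1)-(3).

def one_way_anova(groups):
    """Return (ssm, sse, f) for a one-way ANOVA over a list of groups."""
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Model SS: the fitted value Y-hat for each observation is its group mean.
    ssm = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Error SS: deviation of each observation from its own group mean.
    sse = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g)
    df_m, df_e = len(groups) - 1, n - len(groups)
    return ssm, sse, (ssm / df_m) / (sse / df_e)

# Hypothetical minutiae counts at three force levels (illustration only).
counts = [[38, 41, 40], [44, 43, 45], [50, 48, 49]]
ssm, sse, f = one_way_anova(counts)
```

A large F (relative to the F(dfM, dfE) reference distribution, as in (4)) indicates that between-level variation dominates within-level variation.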
statistic, as shown in equation (5). Image quality scores were analysed with this method to address the leftward skewness of the scores.

    H = [12 / (N(N + 1))] Σᵢ₌₁ᵃ (Rᵢ² / nᵢ) − 3(N + 1),    (5)

where a equals the number of samples (groups), nᵢ is the number of observations for the ith sample, N is the total number of observations and Rᵢ is the sum of ranks for group i (NIST/SEMATECH, 2006).

5 Evaluation and experimental results

The evaluation consisted of 75 participants, 18–25 years old, and took place in October 2007. All participants used their right index finger and three images were collected at each force level and for both sensing technologies. The five force levels investigated were: 3N, 5N, 7N, 9N and 11N. Fingerprint images for two subjects at each of the corresponding force levels and the two sensing technologies can be seen in Figure 1.

Figure 1 Fingerprint images and quality scores for five force levels by sensor technology: optical (top) and capacitance (bottom)

[Optical fingerprint images – 3N: quality 3; 5N: quality 87; 7N: quality 91; 9N: quality 88; 11N: quality 90. Capacitance fingerprint images – 3N: quality 91; 5N: quality 87; 7N: quality 81; 9N: quality 61; 11N: quality 38.]

Results are documented in terms of minutiae count analysis, image quality analysis and performance analysis.

5.1 Number of detected minutiae

The number of minutiae detected from a fingerprint image can vary according to the force applied by the finger on the surface of the sensor. An ANOVA test was performed at a significance level (α) of 0.05 to determine whether the differences in average minutiae count between the force levels are statistically significant for the optical sensor. A p-value of less than 0.05 was observed, which indicated that the minutiae counts between the force levels were statistically different. In order to test which groups were significantly different, the Tukey test for pair-wise comparisons was performed. The results of the pair-wise comparisons and descriptive statistics are shown in Tables 1 and 2, respectively. The results showed that the 3N average minutiae count was significantly different from all the other force level average minutiae counts.

Table 1 Tukey pair-wise comparison results for optical sensor

        3N    5N        7N        9N        11N
  3N    –     p < .05   p < .05   p < .05   p < .05
  5N          –         p < .05   p < .05   p < .05
  7N                    –         n.s.      p < .05
  9N                              –         n.s.
  11N                                       –

A similar ANOVA test was performed at a significance level of 0.05 to determine whether the average minutiae counts between the force levels are statistically significant for the capacitive sensor. The ANOVA test had a p-value = 0.387, which indicated that the minutiae counts between the force levels did not demonstrate a statistically significant difference. The descriptive statistics are presented in Table 2.

Table 2 Descriptive statistics for the number of detected minutiae by sensor type

                 Optical                Capacitance
  Force level    N     µ      σ        N     µ      σ
  3N             228   39.78  13.25    228   39.62  11.15
  5N             228   43.72  13.12    224   40.76  11.40
  7N             228   46.99  12.12    227   41.75  11.27
  9N             228   48.61  11.77    228   41.01  10.96
  11N            228   50.65  12.06    228   40.96  12.22

To further examine the differences in the number of detected minutiae across the optical and capacitance sensors, overlapping histograms were constructed, as shown in Figure 2.

Figure 2 Histogram of the number of detected minutiae by force level and sensor technology
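Equation (5) is simple to implement directly. The following pure-Python sketch (not the authors' code; the score lists are invented) ranks the pooled observations, assigning midranks to ties, and combines the per-group rank sums:

```python
# Kruskal-Wallis H, as in equation (5): rank the pooled observations,
# sum the ranks per group, then H = 12/(N(N+1)) * sum(R_i^2/n_i) - 3(N+1).

def kruskal_wallis_h(groups):
    pooled = sorted(y for g in groups for y in g)
    n_total = len(pooled)
    rank = {}
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j] == pooled[i]:
            j += 1
        for k in range(i, j):
            rank[pooled[k]] = (i + 1 + j) / 2  # midrank: mean of ranks i+1..j
        i = j
    return 12 / (n_total * (n_total + 1)) * sum(
        sum(rank[y] for y in g) ** 2 / len(g) for g in groups
    ) - 3 * (n_total + 1)

# Hypothetical image quality scores for two force levels (illustration only).
h = kruskal_wallis_h([[70, 75, 80], [85, 90, 95]])
```

Under Ho, H is compared against a chi-squared reference with a − 1 degrees of freedom.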
5.2 Image quality

As described in the experimental design, image quality failed to meet the parametric ANOVA model assumptions due to skewness of the image quality scores. Thus, the authors used the non-parametric Kruskal-Wallis (H) test to analyse the image quality scores for both sensors.

The results of the non-parametric test for the image quality scores from the optical sensor revealed a statistically significant difference among the median image quality scores across the five force levels, H(.95, 4) = 47.96, resulting in a p-value less than 0.05. By examining the descriptive statistics for the optical image quality scores, as shown in Table 3, patterns can be found in the mean, median and standard deviation. The mean and median increase as force increases, while the variation between the image quality scores for a particular level decreases as force increases.

However, the descriptive statistics for the capacitance image quality scores exhibit the opposite behaviour, as shown in Table 4. For the capacitance image quality scores, the mean and median decrease as force increases, while the variation between the image quality scores for a particular level increases as force increases. The results of the non-parametric test for the capacitance image quality scores revealed the same thing; that is, there is a statistically significant difference among the median image quality scores across the five force levels, H(.95, 4) = 87.30, resulting in a p-value less than 0.05.

Table 3 Descriptive statistics for optical image quality scores

  Force level    N     µ      x̃     σ
  3N             228   75.25  80.0   17.05
  5N             228   78.52  84.0   16.79
  7N             228   81.15  86.0   13.15
  9N             228   81.94  86.0   11.62
  11N            228   82.25  86.0   10.95

Table 4 Descriptive statistics for capacitance image quality scores

  Force level    N     µ      x̃     σ
  3N             228   83.79  87.0   12.07
  5N             224   80.87  87.0   16.31
  7N             227   78.41  85.0   17.71
  9N             228   74.26  82.5   20.30
  11N            228   69.96  77.0   22.20

To further illustrate the different patterns in the image quality data, histograms of the optical and capacitance sensor image quality scores were constructed by force level, as shown in Figure 3.

Figure 3 Histogram of image quality scores by force level and sensor technology

5.3 Full dataset matching performance

Once the fingerprint image characteristics were analysed, the performance of all the collected fingerprint images from the different force levels was analysed using a minutiae-based matcher. DET curves were created to graphically represent the results.

Figure 4 shows the DET curves for fingerprint images from each force level on the optical sensor. Note that the flatness of the DET curves for 5N, 9N and 11N is indicative of an optimal state. The DET curve for fingerprint images collected at 3N showed the poorest performance.

Figure 4 DET of the full set of optical images

Figure 5 shows the DET curves for fingerprint images from each force level collected on the capacitive sensor. Note that the flatness of the 3N and 7N DET curves is indicative of an optimal state, whereas performance deteriorates for force levels 5N, 11N and 9N.
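The data behind a DET curve can be computed as described above: at every candidate threshold, FNMR is the fraction of genuine comparison scores rejected and FMR the fraction of impostor scores accepted. A minimal sketch (not the VeriFinger matcher; the score lists are invented, and higher scores are assumed to indicate stronger matches):

```python
# DET data sketch: sweep every distinct score as a threshold and record
# (threshold, FNMR, FMR).  FNMR = genuine scores below the threshold;
# FMR = impostor scores at or above it.

def det_points(genuine, impostor):
    points = []
    for t in sorted(set(genuine) | set(impostor)):
        fnmr = sum(s < t for s in genuine) / len(genuine)
        fmr = sum(s >= t for s in impostor) / len(impostor)
        points.append((t, fnmr, fmr))
    return points

# Hypothetical matcher scores (illustration only).
genuine = [80, 90, 95]
impostor = [10, 40, 85]
curve = det_points(genuine, impostor)
```

Plotting FNMR against FMR over all thresholds yields the DET curve; a curve hugging the axes (the "flat" shape noted above) indicates low error at some operating point.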
Figure 5 DET of the full set of capacitance images

The full dataset DET for the optical and capacitive sensors indicated that performance varies for fingerprint images collected at different force levels. The optimal force level for matching performance is different for the two types of sensor.

5.4 Lowest 5% quality removed matching performance

Having established the impact of force levels on the different sensor technologies, the authors sought to study the impact of image quality on the matching performance of the different force levels for the two sensors. Variations in the image quality score results for both sensor technologies and evidence of patterns in the descriptive data led the authors to hypothesise that performance might improve if some of the lowest quality images, in terms of reported image quality scores, were removed. The size of the dataset (n = 75) prompted the removal of the images producing the lowest 5% of quality scores for each force level; as such, 11 images were removed. Figure 6 shows two example images for each sensor type and force level combination that were included in the lowest 5% category, did not achieve optimal matching accuracy and were removed from consideration for this particular analysis.

Figure 6 Selected images (two per force level and technology) removed as part of the 5% lowest quality bin

[Optical – 3N: quality 29; 3N: quality 18; 7N: quality 21; 7N: quality 56. Capacitance – 5N: quality 32; 5N: quality 32; 9N: quality 10; 9N: quality 4; 11N: quality 19; 11N: quality 7.]

The DET curves in Figure 7 reveal that removal of the lowest 5% quality images collected on the optical sensor at force levels of 3N and 7N yielded negligible changes to system performance. An inward shift in the DET curve would have indicated an improvement in performance, which was not noticed for the 3N and 7N fingerprint datasets.

Figure 7 DET of optical images, lowest 5% quality images removed

However, the DET curves in Figure 8 reveal that the removal of the lowest 5% quality images for the capacitance force levels 5N, 9N and 11N resulted in shifts of the DET curves for two of the three force levels, 5N and 9N. In particular, the 5N DET curve shifted to reach optimal matching accuracy, as noted by the flattened curve. The 9N DET curve also demonstrated a noticeable improvement. There were negligible improvements to the DET curve for 11N when the lowest 5% quality images were removed.

Figure 8 DET of capacitance images, poorest 5% quality images removed

Removal of the lowest 5% quality images collected on the capacitive sensor showed a different behaviour compared to the optical sensor. Figure 8 shows that the DET curve for fingerprints collected at the 5N level was flat after the lowest quality images were removed; removal of the lowest quality images resulted in optimal performance for fingerprints collected at the 5N level. It was also observed
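The pruning step used in this analysis can be sketched as a simple quality filter. The helper below is hypothetical (not the authors' code): sample tuples pair a reported quality score with an image identifier, and removing the lowest 5% of 228 images per force level rounds down to the 11 images reported above.

```python
# Quality-based pruning sketch: drop the lowest `fraction` of samples by
# reported quality score before re-running the matcher.  The (quality,
# image_id) samples below are invented for illustration.

def drop_lowest_quality(samples, fraction=0.05):
    ordered = sorted(samples, key=lambda s: s[0])
    cut = int(len(ordered) * fraction)  # 228 images -> 11 removed
    return ordered[cut:]

samples = [(q, f"img{q}") for q in range(228)]
kept = drop_lowest_quality(samples)
```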
that removal of the lowest quality images at the 9N and 11N levels yielded an insignificant improvement in performance. Removal of the lowest quality images does not necessarily lead to an improvement in performance rates for all force levels. The inconsistent behaviour of image quality and performance rates at different levels of force was a surprising observation of this analysis.

6 Conclusions

The purpose of this study was to quantitatively compare the effect of force on the minutiae counts, image quality scores and fingerprint matching performance of optical and capacitance fingerprint sensors. Comparing these two sensor technologies reveals that increasing the amount of force applied to the sensor surface has an inverse impact on the quality scores. Images collected from a capacitance sensor are of a higher quality when captured at the lower end of the force range. In contrast, images collected from an optical sensor are of a higher quality when captured at the higher end of the force range. This is an important observation to consider when instructing individuals in how best to interact with a particular sensor technology, so that images captured by that technology have a quality score sufficiently high to optimise performance of the matching system. The minutiae counts significantly increased with increasing levels of force when using optical sensors, but the authors' research demonstrated no significant difference relative to this factor when using capacitance sensors.

Matching performance for the full dataset using optical and capacitive sensors showed very different performance levels for fingerprint images collected at different force levels. The optimal force level for matching performance is different for the two sensors and exhibits behaviours similar to those seen in the image quality analysis. Removal of low-quality images alone will not always improve the matching performance of a system. Further studies are needed to determine what other factors affect the system matching performance.

7 Recommendations and future work

The results of this research provided additional insight into human interaction with fingerprint sensor technologies, specifically the opposite effect of force level on image quality for the two sensors and the different behaviours of matching performance by force level. However, additional work is needed to further examine the impact of force on other fingerprint sensing technologies (e.g., thermal and ultrasonic sensors). Once the relationships of force level, image quality and matching performance are understood for these additional technologies, it would be interesting to perform an analysis with fingerprint templates consisting of images from multiple force levels to examine the effect on matching performance, with the overarching objective of further reducing matching errors due to HBSI.

References

Burke, J. (2006) AuthenTec Ships Record 10 Million Biometric Fingerprint Sensors, available at http://www.authentec.com/news.cfm?newsID=217 (accessed on 10 August 2006).

Edwards, M., Torrens, G. and Bhamra, T. (2006) 'The use of fingerprint contact area for biometric identification', Proc. of the 2006 International Conference on Biometrics (ICB), Hong Kong, China, pp.341–347.

Elliott, S., Kukula, E. and Modi, S. (2007) 'Issues involving the human biometric sensor interface', in Yanushkevich, S., Wang, P., Gavrilova, M. and Srihari, S. (Eds.): Image Pattern Recognition: Synthesis and Analysis in Biometrics, Vol. 67, pp.339–363, World Scientific, Singapore.

International Biometric Group (IBG) (2006) Biometrics Market and Industry Report 2006–2010, available at http://www.biometricgroup.com/reports/public/market_report.html (accessed on 12 September 2006).

International Organization for Standardization (2007) ISO/IEC JTC1/SC37 Standing Document 2 – Harmonized Biometric Vocabulary (WD 2.56).

Jain, A., Chen, Y. and Dass, S. (2005) 'Fingerprint quality indices for predicting authentication performance', Proc. of the 5th International Conf. on Audio- and Video-Based Biometric Person Authentication, Rye Brook, NY, pp.160–170.

Kang, H., Lee, B., Kim, H., Shin, D. and Kim, J. (2003) 'A study on performance evaluation of fingerprint sensors', in G. Goos, J. Hartmanis and J. van Leeuwen (Eds.): Audio- and Video-Based Biometric Person Authentication, pp.574–583, Springer, Berlin, Heidelberg.

Kruskal, W. and Wallis, W. (1952) 'Use of ranks in one-criterion variance analysis', Journal of the American Statistical Association, Vol. 47, pp.583–621.

Kruskal, W. and Wallis, W. (1953) 'Errata to use of ranks in one-criterion variance analysis', Journal of the American Statistical Association, Vol. 48, pp.907–911.

Kukula, E. (2007) 'Understanding the impact of the human-biometric sensor interaction and system design on biometric image quality', NIST Biometric Quality Workshop II, Gaithersburg, MD.

Kukula, E. (2008) Design and Evaluation of the Human-Biometric Sensor Interaction Method, p.510, Purdue University, West Lafayette.

Kukula, E. and Elliott, S. (2006) 'Implementing ergonomic principles in a biometric system: a look at the human biometric sensor interaction (HBSI)', 40th International IEEE Carnahan Conference on Security Technology, Lexington, KY, pp.86–91.

Kukula, E., Elliott, S. and Duffy, V. (2007a) 'The effects of human interaction on biometric system performance', 12th International Conference on Human-Computer Interaction and 1st International Conference on Digital-Human Modeling, Beijing, China, Vol. 12, pp.903–913.

Kukula, E., Elliott, S., Gresock, B. and Dunning, N. (2007b) 'Defining habituation using hand geometry', 5th IEEE Workshop on Automatic Identification Advanced Technologies, Alghero, Italy, pp.242–246.

Kukula, E., Elliott, S., Kim, H. and San Martin, C. (2007c) 'The impact of fingerprint force on image quality and the detection of minutiae', Proc. of the 2007 IEEE International Conference on Electro/Information Technology (EIT), Chicago, IL, pp.432–437.
Kukula, E., Blomeke, C., Modi, S. and Elliott, S. (2008) 'Effect of human interaction on fingerprint matching performance, image quality, and minutiae count', Proceedings of the 5th International Conference on Information Technology and Applications (ICITA 2008), Cairns, Australia, pp.771–776.
Minitab (2000) Minitab User’s Guide 2. Release 13, available at
http://mathstat.carleton.ca/~help/minitab/STNONPAR.pdf
(accessed on 7 June 2006).
Modi, S. and Elliott, S. (2006) ‘Impact of image quality on
performance: comparison of young and elderly fingerprints’,
Proc. of the 6th International Conference on Recent Advances
in Soft Computing (RASC 2006), Canterbury, UK,
pp.449–454.
Montgomery, D. (1997) Design and Analysis of Experiments,
4th ed., John Wiley and Sons, New York.
NIST/SEMATECH (2006) E-handbook of Statistical Methods,
available at http://www.itl.nist.gov/div898/handbook/
(accessed on 20 March 2007).
Tabassi, E. and Wilson, C.L. (2005) ‘A novel approach to
fingerprint image quality’, Proc. of the IEEE International
Conference on Image Processing, Genoa, Italy, pp.37–40.
Yao, M., Pankanti, S. and Haas, N. (2004) ‘Fingerprint quality
assessment’, in N. Ratha and R. Bolle (Eds.): Automatic
Fingerprint Recognition Systems, pp.55–66, Springer,
New York.