In the current era, where technology plays a prominent role, a person can be identified for security purposes by behavioral and physiological characteristics (for example fingerprint, face, iris, keystroke, signature, voice) through a computer system called a biometric system. The security of such systems remains an open question because of various intruders and attacks. This problem can be mitigated by strengthening security with efficient algorithms, so that an impostor who presents a synthetic sample of an authenticated person, or who otherwise attempts forgery, can be detected.
Assessment and Improvement of Image Quality using Biometric Techniques for Fa...ijceronline
Biometrics is widely used in forensics, high-security access control and prison security. Such a system recognizes a person and determines authenticity from biological and physiological features such as fingerprint, retina scan, iris scan and face. Analysis of the characteristic function of quality and match scores shows that a careful selection of complementary sets of quality metrics can provide substantial benefit to biometric quality assessment. Face recognition is a challenging application of image quality analysis and of many other security applications. Biometric face recognition is a well-known technology used in government and civilian applications such as Aadhaar cards and PAN cards. Face recognition relies on behavioral and physiological features of a human being. Nowadays the quality of a biometric image is a major concern: many factors directly or indirectly affect image quality, so image quality must be improved using suitable biometric techniques for face recognition. This paper presents some important techniques for fake biometric detection and for improvement of facial image quality.
Scale Invariant Feature Transform Based Face Recognition from a Single Sample...ijceronline
Determining whether the object presented to a biometric device is a real trait or a reconstructed sample is a significant problem in biometric authentication, and it requires the development of new and efficient protection measures. This paper presents a software-based fake biometric detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition devices through image quality assessment in a fast and user-friendly manner. The proposed approach has a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image to distinguish between real and impostor samples. The proposed method is highly competitive with other approaches, as the analysis of the general image quality of real biometric samples reveals highly valuable information that can be used very efficiently to discriminate them from fake traits.
An SVM based Statistical Image Quality Assessment for Fake Biometric DetectionIJTET Journal
Abstract
A biometric system is a computer-based system used to identify a person from behavioral and physiological characteristics such as fingerprint, face, iris, keystroke, signature and voice. A typical biometric system consists of feature extraction and pattern matching. Nowadays, however, biometric systems are attacked with fake biometric samples. This paper describes fingerprint biometric techniques, introduces attacks on such systems, shows how Image Quality Assessment for liveness detection can protect the system from fake biometrics, and explains why a multi-biometric system is more secure than a uni-biometric system. A Support Vector Machine (SVM) classifier is used for training and testing the fingerprint images. A test input fingerprint image is classified as real or fake by matching its quality score against the real and fake fingerprint samples used in training.
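The SVM stage described above can be sketched as follows. This is a minimal, hypothetical illustration: the 25-dimensional image-quality feature vectors are replaced by synthetic data, since the abstract does not specify the exact measures or their extraction code.

```python
# Hypothetical sketch: training an SVM to separate real from fake
# fingerprint samples using precomputed image-quality feature vectors.
# Feature extraction itself is assumed; random data stands in for it.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# 200 samples x 25 quality features; "real" samples (label 1) drawn
# from a slightly different distribution than "fakes" (label 0).
X_real = rng.normal(loc=0.5, scale=1.0, size=(100, 25))
X_fake = rng.normal(loc=-0.5, scale=1.0, size=(100, 25))
X = np.vstack([X_real, X_fake])
y = np.array([1] * 100 + [0] * 100)

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Score a new (unseen) sample: 1 = real, 0 = fake.
probe = rng.normal(loc=0.5, scale=1.0, size=(1, 25))
print(clf.predict(probe)[0])
```

In practice the two classes would come from genuine acquisitions and presentation-attack samples, and the decision threshold would be tuned on a validation set rather than left at the SVM default.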
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online monthly journal published in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ...CSCJournals
Identification of a person using multiple biometrics is a very common approach in existing user-validation systems. Most multibiometric systems depend on fusion schemes, and many fusion techniques have shown promising results in the literature by combining multiple biometric modalities with suitable fusion schemes. Similar practices are found in ensembles of classifiers, which increase classification accuracy by combining different types of classifiers. In this paper, we present a comparative study of traditional fusion methods, namely feature-level and score-level fusion, against well-known ensemble methods such as bagging and boosting. Specifically, in our framework experiments we fuse face and palmprint modalities and employ a probability model (Naive Bayes, NB), a neural network model (Multi-Layer Perceptron, MLP) and a supervised machine-learning algorithm (Support Vector Machine, SVM) as classifiers; the machine-learning ensemble approaches, boosting and bagging, are statistically well recognized. The experimental results show that, in biometric fusion, the traditional method of score-level fusion is a more recommendable strategy than ensemble learning techniques.
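Score-level fusion of the kind evaluated above can be sketched in a few lines. This is an illustrative example, not the paper's implementation: the match scores, weights and modality names are invented, and min-max normalization with a weighted sum is just one common choice.

```python
# Minimal sketch of score-level fusion for two biometric modalities
# (e.g. face and palmprint). Match scores are min-max normalized to
# [0, 1], then combined with a weighted sum; all values illustrative.
def min_max_normalize(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(face_scores, palm_scores, w_face=0.6, w_palm=0.4):
    f = min_max_normalize(face_scores)
    p = min_max_normalize(palm_scores)
    return [w_face * a + w_palm * b for a, b in zip(f, p)]

# Match scores for three candidate identities from each matcher.
face = [0.2, 0.9, 0.5]
palm = [30.0, 80.0, 55.0]   # different scale; normalization handles it

fused = fuse_scores(face, palm)
best = max(range(len(fused)), key=fused.__getitem__)
print(best)  # index of the identity with the highest fused score
```

Normalization matters because each matcher reports scores on its own scale; without it, the palmprint scores here would dominate the sum.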
Biometric Authentication Based on Hash Iris FeaturesCSCJournals
With an increasing emphasis on security, automated personal identification based on biometrics has received extensive attention since its introduction in 1992. In this study, the authentication system contains two parts, registration and matching, and in both parts an iris image is used for personal identification. By localizing the inner boundary only, a region is extracted from the iris (avoiding the eyelash problem), and a feature vector is derived from the texture of the image. The feature vector is used to classify the iris texture and is then processed by a hash function to produce a hash value (the authentic value of a person). In the matching part, the produced hash value is looked up in the authorized-persons database to decide whether authentication succeeds or fails. The method was evaluated on iris images taken from the CASIA iris image database version 1.0 [15]. The experimental results show that the vector extracted by the proposed method has very discriminating values, leading to a recognition rate of over 100% on the iris database. The authentication system is also very accurate because it combines a secure method of authentication, the iris biometric, with a hash function to avoid data being stolen from the database.
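The hash-then-lookup idea above can be sketched as follows. This is a hedged illustration, not the paper's method: the feature vector, the quantization step and the use of SHA-256 are all assumptions, and in reality cryptographic hashes tolerate no noise at all, which is why the vector is coarsely quantized before hashing.

```python
# Sketch of hash-based iris matching: an iris feature vector is
# quantized and hashed, and authentication succeeds only if the hash
# is already enrolled. Vector values and step size are illustrative.
import hashlib

def iris_hash(feature_vector, step=0.1):
    # Quantize so small acquisition noise maps to the same hash.
    quantized = tuple(round(v / step) for v in feature_vector)
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

enrolled = set()

# Registration: store only the hash, never the raw template.
template = [0.42, 0.91, 0.13, 0.77]
enrolled.add(iris_hash(template))

# Matching: a slightly noisy re-acquisition of the same iris.
probe = [0.44, 0.93, 0.11, 0.79]
print("success" if iris_hash(probe) in enrolled else "fail")
```

Storing only hashes is what protects the database: even if it is stolen, the original iris templates cannot be recovered from the hash values.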
IMAGE QUALITY ASSESSMENT FOR FAKE BIOMETRIC DETECTION: APPLICATION TO IRIS, F...ijiert bestjournal
In this paper, the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. We present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same image acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
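Two of the kind of general image-quality measures used in such IQA-based detection, MSE and PSNR, can be computed as below. This is an assumed sketch: the 3x3 mean filter standing in for Gaussian blur, the image size and the random test image are all illustrative, not taken from the paper.

```python
# Illustrative computation of two general image-quality measures, MSE
# and PSNR, between an image and a smoothed version of itself (the
# smoothed copy serves as the full-reference comparison image).
import numpy as np

def smooth(img):
    # 3x3 mean filter with edge padding -- a stand-in for Gaussian blur.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
ref = smooth(img)
print(mse(img, ref), psnr(img, ref))
```

The intuition behind the approach is that fake traits (printed photos, gummy fingers) distort these quality statistics in ways real acquisitions do not, so a vector of such measures can feed a classifier.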
The Survey of Architecture of Multi-Modal (Fingerprint and Iris Recognition) ...IJERA Editor
Biometrics-based individual identification is regarded as an effective technique for automatically establishing a person's identity with high confidence. Multi-modal biometric systems consolidate the evidence provided by multiple biometric sources and normally achieve better recognition performance than systems based on a single biometric modality. Multi-biometric systems overcome the limitations of a single modality by providing multiple pieces of evidence of the same identity. The system described here provides an effective fusion structure that combines information from multiple matchers using decision-level and score-level fusion, thereby achieving an efficiency that is not attainable in a uni-modal system. Multi-modal biometrics can also be realized through fusion of two or more images, where the resulting fused image is better protected. This paper discusses various fusion techniques, the architecture of multi-modal biometric authentication, and the working of biometric fusion, i.e. iris and fingerprint recognition, as used in multi-modal biometrics.
An offline signature recognition and verification system based on neural networkeSAT Journals
Abstract: Various techniques have been introduced for personal identification and verification based on different types of biometrics, which can be physiological or behavioral. Signatures lie in the category of behavioral biometrics and can distort or change over the course of time. Signatures are considered the most widely accepted authentication method in legal and financial documents, so it is necessary to verify signers and their respective signatures. This paper presents an offline Signature Recognition and Verification System (SRVS). In this system a database of signature images is created, followed by image preprocessing, feature extraction, neural network design and training, and classification of a signature as genuine or counterfeit. Keywords: biometrics, neural network design, feature extraction, classification.
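The neural-network classification stage of such a system can be sketched as below. Everything here is an assumption for illustration: the 16-dimensional feature vectors (e.g. aspect ratio, stroke density, projection profiles) are synthetic, and the small MLP is a stand-in for whatever architecture the paper actually trains.

```python
# Hedged sketch of the verification stage: a small neural network
# trained on feature vectors extracted from signature images and used
# to label a signature as genuine (1) or forged (0). Data is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# 40 genuine and 40 forged signatures, each a 16-dim feature vector.
genuine = rng.normal(1.0, 0.3, size=(40, 16))
forged = rng.normal(0.0, 0.3, size=(40, 16))
X = np.vstack([genuine, forged])
y = np.array([1] * 40 + [0] * 40)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0)
net.fit(X, y)
print(net.score(X, y))  # training accuracy
```

A real system would hold out a test set of skilled forgeries, since random forgeries are far easier to reject than forgeries traced from a genuine sample.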
Face Detection in Digital Image: A Technical ReviewIJERA Editor
Face detection, the task of locating faces in an input image, is an important part of any face processing system. In face detection, segmentation plays the major role in detecting the face, and there are many challenges to effective and efficient detection. The aim of this paper is to present a review of several algorithms and methods used for face detection. We survey the literature and relate the various techniques according to how they extract features and which learning algorithms they adopt. A face detection system has two major phases: first, segmenting skin regions from an image, and second, deciding whether these regions contain a human face. A number of algorithms are used in face detection, including genetic algorithms and Hausdorff distance.
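The first phase named above, skin segmentation, can be illustrated with a simple RGB rule. The thresholds below are a widely used heuristic for skin-like pixels, not the values from any specific paper in this survey, and the tiny test image is invented.

```python
# Illustrative skin segmentation with a heuristic RGB rule: a pixel is
# "skin" if it is bright, red-dominant, and red differs from green.
import numpy as np

def skin_mask(rgb):
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (r - np.minimum(g, b) > 15)
        & (np.abs(r - g) > 15)
        & (r > g) & (r > b)
    )

# A 2x2 test image: one skin-like pixel, three clearly non-skin.
img = np.array([[[200, 120, 90], [0, 0, 255]],
                [[10, 10, 10], [50, 200, 50]]], dtype=np.uint8)
print(skin_mask(img))
```

The second phase would then examine each connected skin region (shape, holes for eyes and mouth, texture) to decide whether it actually contains a face.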
Fraud Detection Using Signature RecognitionTejraj Thakor
A person's signature is an important biometric of a human being that can be used to authenticate human identity. The problem arises when someone decides to imitate our signature and steal our identity.
The image of a signature is captured by a mobile phone camera, and dynamic and spatial information is extracted from it using image processing techniques such as grayscale conversion, noise removal, normalization, border elimination and feature extraction.
Signature matching depends on an SVM. The SVM classifier is trained on sample images in a database obtained from the individuals whose signatures are to be authenticated by the system. The proposed system uses an SQLite database as the back end and the Android platform as the front end.
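The preprocessing steps listed above can be sketched in plain NumPy. This is a minimal, assumed pipeline: grayscale conversion, a fixed global threshold as crude noise removal/binarization, and border elimination by cropping to the ink's bounding box; the threshold and test image are illustrative.

```python
# Sketch of signature preprocessing: grayscale conversion, global
# thresholding, and border elimination via bounding-box cropping.
import numpy as np

def preprocess(rgb):
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # to grayscale
    ink = gray < 128                               # dark pixels = ink
    rows = np.flatnonzero(ink.any(axis=1))
    cols = np.flatnonzero(ink.any(axis=0))
    cropped = ink[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    return cropped.astype(np.uint8)

# 6x6 white canvas with a small dark "stroke" in the middle.
img = np.full((6, 6, 3), 255, dtype=float)
img[2:4, 1:5] = 0.0
out = preprocess(img)
print(out.shape)  # size of the stroke's bounding box
```

The cropped binary image would then be size-normalized before feature extraction, so signatures written at different scales become comparable.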
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Facial image classification and searching –a surveyZac Darcy
Recent developments in the area of image mining have paved the way for tremendous growth in extensively large and detailed image databases. The images available in these databases, if examined, can provide valuable information to human users. As one of the most successful applications of image analysis and understanding, face recognition has gained substantial attention, particularly throughout the past several years. Though tracking and recognizing face objects is a routine task for humans, building such a system is still an active research area. Among the several proposed face recognition schemes, shape-based approaches are possibly the most promising ones. This paper provides an overview of various classification and retrieval methods proposed earlier in the literature, along with a brief summary of future research and enhancements in face detection.
Face recognition is a widely used biometric approach. Face recognition technology has developed rapidly in recent years, and it is more direct, user-friendly and convenient than other methods. But face recognition systems are vulnerable to spoofing attacks made with non-real faces: it is easy to spoof a face recognition system with facial pictures such as portrait photographs. A secure system therefore needs liveness detection to guard against such spoofing. In this work, face liveness detection approaches are categorized by the type of technique used for liveness detection. This categorization helps in understanding different spoofing attack scenarios and their relation to the developed solutions. A review of the latest work on face liveness detection is presented. The main aim is to provide a simple path for the future development of novel and more secure face liveness detection approaches.
7 multi biometric fake detection system using image quality based liveness de...INFOGAIN PUBLICATION
Biometric systems are popular all over the world because of their user-friendly and credible nature in security applications. Despite these advantages, many attacks carried out with synthetic, self-manufactured, fake or reconstructed samples affect the performance and accuracy of biometric systems, which has become a major problem in biometrics. Hence, new and effective measures have to be taken to protect biometric systems. In this paper, we propose a novel software-based multi-biometric fake detection system to detect various types of attacks. The main motive of this system is to enhance the security level of biometric recognition systems through Image Quality Assessment (IQA), one of the liveness detection methods. Twenty-five image quality measures are calculated from the test image and used to classify the trait as real or fake with a Linear Discriminant Analysis (LDA) classifier. The experimental results, obtained on a database of 2D face and fingerprint modalities, show that the proposed system is easy to implement in real-time applications, as its complexity is very low because only one input image is required. The system is also fast, user-friendly and non-intrusive, is competitive with other state-of-the-art approaches, and distinguishes between real and fake traits.
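The LDA classification stage named in this abstract can be sketched as follows. As before, this is a hedged illustration: synthetic vectors stand in for the 25 computed image-quality measures, and the class distributions are invented.

```python
# Hedged sketch of the classification stage: Linear Discriminant
# Analysis over 25 image-quality measures per test image. Random
# vectors stand in for the actual IQA feature extraction.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
real = rng.normal(0.8, 0.2, size=(60, 25))   # real traits, label 1
fake = rng.normal(0.2, 0.2, size=(60, 25))   # fake traits, label 0
X = np.vstack([real, fake])
y = np.array([1] * 60 + [0] * 60)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.score(X, y))  # training accuracy
```

LDA is a natural fit here: with a single 25-dimensional feature vector per image, a linear projection is cheap to evaluate, which matches the system's real-time, one-image design goal.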
This is a complete report on biometrics and fingerprint detection. It includes what a fingerprint is, how to scan and refine a fingerprint, how the detection mechanism works, applications, etc.
Humans often use faces to recognize individuals, and advances in computing capability over the past few decades now enable similar recognition to be performed automatically. Early facial recognition algorithms used simple geometric models, but the recognition process has since matured into a science of sophisticated mathematical representations and matching processes. Major advances and initiatives in the past 10 to 15 years have propelled facial recognition technology into the spotlight. Facial recognition can be used for both verification and identification.
Face detection is one of the most suitable applications for image processing and biometric programs. Artificial neural networks have been used in many fields, such as image processing, pattern recognition, sales forecasting, customer research and data validation. Face detection and recognition have become some of the most popular biometric techniques over the past few years, yet there is a lack of literature providing an overview of studies and research related to artificial-neural-network face detection. Therefore, this study reviews facial recognition studies as well as systems based on various artificial neural network methods and algorithms.
Technology that identifies you based on your physical or behavioral traits, for added security to confirm that you are who you claim to be. (This presentation is very dear to me, as I have given a talk on this topic twice; it also fetched me and Migmar first prize at Deen Dayal Upadhyay College's Converging Vectors, an inter-college presentation competition organized by the Arya Bhata Science Forum.)
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Due to availability of internet and evolution of embedded devices, Internet of things can be useful to contribute in energy domain. The Internet of Things (IoT) will deliver a smarter grid to enable more information and connectivity throughout the infrastructure and to homes. Through the IoT, consumers, manufacturers and utility providers will come across new ways to manage devices and ultimately conserve resources and save money by using smart meters, home gateways, smart plugs and connected appliances. The future smart home, various devices will be able to measure and share their energy consumption, and actively participate in house-wide or building wide energy management systems. This paper discusses the different approaches being taken worldwide to connect the smart grid. Full system solutions can be developed by combining hardware and software to address some of the challenges in building a smarter and more connected smart grid.
A Survey Report on : Security & Challenges in Internet of Thingsijsrd.com
In the era of computing technology, Internet of Things (IoT) devices are now popular in each and every domains like e-governance, e-Health, e-Home, e-Commerce, and e-Trafficking etc. Iot is spreading from small to large applications in all fields like Smart Cities, Smart Grids, Smart Transportation. As on one side IoT provide facilities and services for the society. On the other hand, IoT security is also a crucial issues.IoT security is an area which totally concerned for giving security to connected devices and networks in the IoT .As, IoT is vast area with usability, performance, security, and reliability as a major challenges in it. The growth of the IoT is exponentially increases as driven by market pressures, which proportionally increases the security threats involved in IoT The relationship between the security and billions of devices connecting to the Internet cannot be described with existing mathematical methods. In this paper, we explore the opportunities possible in the IoT with security threats and challenges associated with it.
In today’s emerging world of Internet, each and every thing is supposed to be in connected mode with the help of billions of smart devices. By connecting all the devises used in our day to day life, make our life trouble less and easy. We are incorporated in a world where we are used to have smart phones, smart cars, smart gadgets, smart homes and smart cities. Different institutes and researchers are working for creating a smart world for us but real question which we need to emphasis on is how to make dumb devises talk with uncommon hardware and communication technology. For the same what kind of mechanism to use with various protocols and less human interaction. The purpose is to provide the key area for application of IoT and a platform on which various devices having different mechanism and protocols can communicate with an integrated architecture.
Study on Issues in Managing and Protecting Data of IOTijsrd.com
This paper discusses variety of issues for preserving and managing data produced by IoT. Every second large amount of data are added or updated in the IoT databases across the heterogeneous environment. While managing the data each phase of data processing for IoT data is exigent like storing data, querying, indexing, transaction management and failure handling. We also refer to the problem of data integration and protection as data requires to be fit in single layout and travel securely as they arrive in the pool from diversified sources in different structure. Finally, we confer a standardized pathway to manage and to defend data in consistent manner.
Interactive Technologies for Improving Quality of Education to Build Collabor...ijsrd.com
Today with advancement in Information Communication Technology (ICT) the way the education is being delivered is seeing a paradigm shift from boring classroom lectures to interactive applications such as 2-D and 3-D learning content, animations, live videos, response systems, interactive panels, education games, virtual laboratories and collaborative research (data gathering and analysis) etc. Engineering is emerging with more innovative solutions in the field of education and bringing out their innovative products to improve education delivery. The academic institutes which were once hesitant to use such technology are now looking forward to such innovations. They are adopting the new ways as they are realizing the vast benefits of using such methods and technology. The benefits are better comprehensibility, improved learning efficiency of students, and access to vast knowledge resources, geographical reach, quick feedback, accountability and quality research. This paper focuses on how engineering can leverage the latest technology and build a collaborative learning environment which can then be integrated with the national e-learning grid.
Internet of Things - Paradigm Shift of Future Internet Application for Specia...ijsrd.com
In the world more than 15% people are living with disability that also include children below age of 10 years. Due to lack of independent support services specially abled (handicap) people overly rely on other people for their basic needs, that excludes them from being financially and socially active. The Internet of Things (IoT) can give support system and a better quality of life as well as participation in routine and day to day life. For this purpose, the future solutions for current problems has been introduced in this paper. Daunting challenges have been considered as future research and glimpse of the IoT for specially abled person is given in the paper.
A Study of the Adverse Effects of IoT on Student's Lifeijsrd.com
Internet of things (IoT) is the most powerful invention and if used in the positive direction, internet can prove to be very productive. But, now a days, due to the social networking sites such as Face book, WhatsApp, twitter, hike etc. internet is producing adverse effects on the student life, especially those students studying at college Level. As it is rightly said, something which has some positive effects also has some of the negative effects on the other hand. In this article, we are discussing some adverse effects of IoT on student’s life.
Pedagogy for Effective use of ICT in English Language Learningijsrd.com
The use of information and communications technology (ICT) in education is a relatively new phenomenon and it has been the educational researchers' focus of attention for more than two decades. Educators and researchers examine the challenges of using ICT and think of new ways to integrate ICT into the curriculum. However, there are some barriers for the teachers that prevent them to use ICT in the classroom and develop supporting materials through ICT. The purpose of this study is to examine the high school English teachers’ perceptions of the factors discouraging teachers to use ICT in the classroom.
In recent years usage of private vehicles create urban traffic more and more crowded. As result traffic becomes one of the important problems in big cities in all over the world. Some of the traffic concerns are traffic jam and accidents which have caused a huge waste of time, more fuel consumption and more pollution. Time is very important parameter in routine life. The main problem faced by the people is real time routing. Our solution Virtual Eye will provide the current updates as in the real time scenario of the specific route. This research paper presents smart traffic navigation system, based on Internet of Things, which is featured by low cost, high compatibility, easy to upgrade, to replace traditional traffic management system and the proposed system can improve road traffic tremendously.
Ontological Model of Educational Programs in Computer Science (Bachelor and M...ijsrd.com
In this work there is illustrated an ontological model of educational programs in computer science for bachelor and master degrees in Computer science and for master educational program “Computer science as second competence†by Tempus project PROMIS.
Understanding IoT Management for Smart Refrigeratorijsrd.com
Lately the concept of Internet of Things (IoT) is being more elaborated and devices and databases are proposed thereby to meet the need of an Internet of Things scenario. IoT is being considered to be an integral part of smart house where devices will be connected to each other and also react upon certain environmental input. This will eventually include the home refrigerator, air conditioner, lights, heater and such other home appliances. Therefore, we focus our research on the database part for such an IoT’ fridge which we called as smart Fridge. We describe the potentials achievable through a database for an IoT refrigerator to manage the refrigerator food and also aid the creation of a monthly budget of the house for a family. The paper aims at the data management issue based on a proposed design for an intelligent refrigerator leveraging the sensor technology and the wireless communication technology. The refrigerator which identifies products by reading the barcodes or RFID tags is proposed to order the required products by connecting to the Internet. Thus the goal of this paper is to minimize human interaction to maintain the daily life events.
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...ijsrd.com
Double wishbone designs allow the engineer to carefully control the motion of the wheel throughout suspension travel. 3-D model of the Lower Wishbone Arm is prepared by using CAD software for modal and stress analysis. The forces and moments are used as the boundary conditions for finite element model of the wishbone arm. By using these boundary conditions static analysis is carried out. Then making the load as a function of time; quasi-static analysis of the wishbone arm is carried out. A finite element based optimization is used to optimize the design of lower wishbone arm. Topology optimization and material optimization techniques are used to optimize lower wishbone arm design.
A Review: Microwave Energy for materials processingijsrd.com
Microwave energy is a latest largest growing technique for material processing. This paper presents a review of microwave technologies used for material processing and its use for industrial applications. Advantages in using microwave energy for processing material include rapid heating, high heating efficiency, heating uniformity and clean energy. The microwave heating has various characteristics and due to which it has been become popular for heating low temperature applications to high temperature applications. In recent years this novel technique has been successfully utilized for the processing of metallic materials. Many researchers have reported microwave energy for sintering, joining and cladding of metallic materials. The aim of this paper is to show the use of microwave energy not only for non-metallic materials but also the metallic materials. The ability to process metals with microwave could assist in the manufacturing of high performance metal parts desired in many industries, for example in automotive and aeronautical industries.
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logsijsrd.com
With an expontial growth of World Wide Web, there are so many information overloaded and it became hard to find out data according to need. Web usage mining is a part of web mining, which deal with automatic discovery of user navigation pattern from web log. This paper presents an overview of web mining and also provide navigation pattern from classification and clustering algorithm for web usage mining. Web usage mining contain three important task namely data preprocessing, pattern discovery and pattern analysis based on discovered pattern. And also contain the comparative study of web mining techniques.
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEMijsrd.com
Application of FACTS controller called Static Synchronous Compensator STATCOM to improve the performance of power grid with Wind Farms is investigated .The essential feature of the STATCOM is that it has the ability to absorb or inject fastly the reactive power with power grid . Therefore the voltage regulation of the power grid with STATCOM FACTS device is achieved. Moreover restoring the stability of the power system having wind farm after occurring severe disturbance such as faults or wind farm mechanical power variation is obtained with STATCOM controller . The dynamic model of the power system having wind farm controlled by proposed STATCOM is developed . To validate the powerful of the STATCOM FACTS controller, the studied power system is simulated and subjected to different severe disturbances. The results prove the effectiveness of the proposed STATCOM controller in terms of fast damping the power system oscillations and restoring the power system stability.
Making model of dual axis solar tracking with Maximum Power Point Trackingijsrd.com
Now a days solar harvesting is more popular. As the popularity become higher the material quality and solar tracking methods are more improved. There are several factors affecting the solar system. Major influence on solar cell, intensity of source radiation and storage techniques The materials used in solar cell manufacturing limit the efficiency of solar cell. This makes it particularly difficult to make considerable improvements in the performance of the cell, and hence restricts the efficiency of the overall collection process. Therefore, the most attainable maximum power point tracking method of improving the performance of solar power collection is to increase the mean intensity of radiation received from the source used. The purposed of tracking system controls elevation and orientation angles of solar panels such that the panels always maintain perpendicular to the sunlight. The measured variables of our automatic system were compared with those of a fixed angle PV system. As a result of the experiment, the voltage generated by the proposed tracking system has an overall of about 28.11% more than the fixed angle PV system. There are three major approaches for maximizing power extraction in medium and large scale systems. They are sun tracking, maximum power point (MPP) tracking or both.
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...ijsrd.com
In day today's relevance, it is mandatory to device the usage of diesel in an economic way. In present scenario, the very low combustion efficiency of CI engine leads to poor performance of engine and produces emission due to incomplete combustion. Study of research papers is focused on the improvement in efficiency of the engine and reduction in emissions by adding ethanol in a diesel with different blends like 5%, 10%, 15%, 20%, 25% and 30% by volume. The performance and emission characteristics of the engine are tested observed using blended fuels and comparative assessment is done with the performance and emission characteristics of engine using pure diesel.
Study and Review on Various Current Comparatorsijsrd.com
This paper presents study and review on various current comparators. It also describes low voltage current comparator using flipped voltage follower (FVF) to obtain the single supply voltage. This circuit has short propagation delay and occupies a small chip area as compare to other current comparators. The results of this circuit has obtained using PSpice simulator for 0.18 μm CMOS technology and a comparison has been performed with its non FVF counterpart to contrast its effectiveness, simplicity, compactness and low power consumption.
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...ijsrd.com
Power dissipation is a challenging problem for today's system-on-chip design and test. This paper presents a novel architecture which generates the test patterns with reduced switching activities; it has the advantage of low test power and low hardware overhead. The proposed LP-TPG (test pattern generator) structure consists of modified low power linear feedback shift register (LP-LFSR), m-bit counter, gray counter, NOR-gate structure and XOR-array. The seed generated from LP-LFSR is EXCLUSIVE-OR ed with the data generated from gray code generator. The XOR result of the sequence is single input changing (SIC) sequence, in turn reduces the switching activity and so power dissipation will be very less. The proposed architecture is simulated using Modelsim and synthesized using Xilinx ISE9.2.The Xilinx chip scope tool will be used to test the logic running on FPGA.
Defending Reactive Jammers in WSN using a Trigger Identification Service.ijsrd.com
In the last decade, the greatest threat to the wireless sensor network has been Reactive Jamming Attack because it is difficult to be disclosed and defend as well as due to its mass destruction to legitimate sensor communications. As discussed above about the Reactive Jammers Nodes, a new scheme to deactivate them efficiently is by identifying all trigger nodes, where transmissions invoke the jammer nodes, which has been proposed and developed. Due to this identification mechanism, many existing reactive jamming defending schemes can be benefited. This Trigger Identification can also work as an application layer .In this paper, on one side we provide the several optimization problems to provide complete trigger identification service framework for unreliable wireless sensor networks and on the other side we also provide an improved algorithm with regard to two sophisticated jamming models, in order to enhance its robustness for various network scenarios.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Digital Tools and AI for Teaching Learning and Research
Fake Multi Biometric Detection using Image Quality Assessment
IJSRD - International Journal for Scientific Research & Development | Vol. 2, Issue 07, 2014 | ISSN (online): 2321-0613
All rights reserved by www.ijsrd.com 514
Fake Multi Biometric Detection Using Image Quality Assessment
R. Brindha (M.E. Student) and V. Mathiazhagan (Assistant Professor)
Department of Electronics and Communication Engineering
Sri Eshwar College of Engineering, Coimbatore
Abstract— In the recent era, where technology plays a prominent role, persons can be identified for security purposes by their behavioral and physiological characteristics (for example fingerprint, face, iris, keystroke, signature, voice, etc.) through a computer system called a biometric system. In such systems, security remains an open question because of various intruders and attacks. This problem can be addressed by improving security with efficient algorithms. An impostor who presents a synthetic sample of an enrolled user can thus be detected, and the forgery attempt rejected.
Key words: Biometric Systems, Image Quality Assessment, Security, Attacks, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Feature Extraction, Classification
I. INTRODUCTION
Biometrics refers to metrics related to human characteristics and traits. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals, and are often classified as physiological versus behavioral. Physiological characteristics are related to the shape of the body; examples include fingerprint, palm veins, face, DNA, palm print, hand geometry, iris, retina and odor/scent. Behavioral characteristics are related to the pattern of behavior of the person, including but not limited to typing rhythm, gait and voice.
No single biometric will meet all the requirements of every possible application, and synthetic or reconstructed samples can easily break the security of a single-trait system. Hence multi-biometric systems are used to provide better security against various attacks. To achieve this, an Image Quality Assessment (IQA) based liveness detection technique is used. This method distinguishes between fake and real samples.
II. IMAGE QUALITY ASSESSMENT FOR LIVENESS DETECTION
Image quality is a characteristic of an image that measures the perceived image degradation (typically, compared to an ideal or perfect image). Imaging systems may introduce some amount of distortion or noise into the signal, so quality assessment is an important problem.
Several quality metrics can be measured objectively and evaluated automatically by a computer program. They are classified as full-reference (FR) methods and no-reference (NR) methods. In FR image quality assessment, the quality of a test image is evaluated by comparing it with a reference image that is assumed to have perfect quality. NR metrics try to assess the quality of an image without any reference to the original one. For example, comparing an original image to the output of JPEG compression of that image is full-reference, since it uses the original as the reference.
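The full-reference idea can be made concrete with two classic FR measures, MSE and PSNR. This is a generic illustration (the paper's own set of quality measures is not reproduced here), with synthetic noise standing in for compression loss:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test image."""
    return np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: higher means closer to the reference."""
    e = mse(ref, test)
    return float("inf") if e == 0 else 10 * np.log10(peak ** 2 / e)

# A reference image and two degraded copies (stand-ins for compression loss)
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
mild = np.clip(ref + rng.normal(0, 2, ref.shape), 0, 255)
heavy = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255)
```

As expected of an FR measure, the mildly distorted copy scores a higher PSNR than the heavily distorted one, and an identical copy scores infinitely high.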
Fig. 1: Image Quality Assessment for fake biometric detection
Liveness detection methods are usually classified into one of two groups (see Fig. 1):
- Hardware-based techniques
- Software-based techniques
A. Hardware-Based Techniques
These add a specific device to the sensor in order to detect particular properties of a living trait (e.g., fingerprint sweat, blood pressure, or specific reflection properties of the eye).
B. Software-Based Techniques
In this case the fake trait is detected once the sample has been acquired with a standard sensor (i.e., the features used to distinguish between real and fake traits are extracted from the biometric sample, and not from the trait itself).
Each type of method presents certain advantages and drawbacks over the other and, in general, a combination of both would be the most desirable protection approach to increase the security of biometric systems. As a coarse comparison, hardware-based schemes usually present a higher fake detection rate, while software-based techniques are in general less expensive (as no extra device is needed) and less intrusive, since their implementation is transparent to the user. Furthermore, as they operate directly on the acquired sample (and not on the biometric trait itself), software-based techniques may be embedded in the feature extractor module, which makes them potentially capable of detecting other types of illegal break-in attempts not necessarily classified as spoofing attacks. For instance, software-based methods can protect the system against the injection of reconstructed or synthetic samples into the communication channel between the sensor and the feature extractor.
III. SECURITY PROTECTION METHOD
The problem of fake biometric detection can be seen as a two-class classification problem in which an input biometric sample has to be assigned to one of two classes: real or fake.
The probability of an image can be obtained through feature extraction; some general image quality measures are considered. In order to keep its generality and simplicity, the system needs only one input: the biometric sample to be classified as real or fake (i.e., the same image acquired for biometric recognition purposes). Furthermore, as the method operates on the whole image without searching for any trait-specific properties, it does not require any pre-processing steps (e.g., fingerprint segmentation, iris detection or face extraction) prior to the computation of the IQ features. This characteristic minimizes its computational load. Once the feature vector has been generated, the sample is classified as real (generated by a genuine trait) or fake (synthetically produced) using some simple classifiers.
Fig. 2: Block diagram
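The pipeline — extract IQ features from the single input sample, then apply a simple classifier — can be sketched as follows. The three features (computed between the image and a Gaussian-smoothed version of itself) and the nearest-class-mean rule are illustrative assumptions, not the paper's actual feature set or classifier:

```python
import numpy as np
from scipy import ndimage

def iq_features(img):
    """Toy 3-element quality feature vector: full-reference measures taken
    between the image and a smoothed version of itself (an illustrative
    stand-in for the general IQ measures mentioned in the text)."""
    img = np.asarray(img, float)
    smooth = ndimage.gaussian_filter(img, sigma=1.0)
    diff = img - smooth
    return np.array([
        np.mean(diff ** 2),                                # mean squared error
        np.mean(np.abs(diff)),                             # mean absolute difference
        np.sum(img * smooth) / (np.sum(img ** 2) + 1e-12)  # correlation-style measure
    ])

def classify(features, mean_real, mean_fake):
    """One of the 'simple classifiers' possible here: nearest class mean."""
    if np.linalg.norm(features - mean_real) <= np.linalg.norm(features - mean_fake):
        return "real"
    return "fake"
```

A sharp sensor capture retains high-frequency content that smoothing removes, while a recaptured (e.g., printed and re-photographed) sample tends to be blurrier, so even this tiny feature vector separates the two cases in a toy experiment.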
IV. FACE RECOGNITION
A person can be identified from a digital image or a video frame through a face recognition system. This is done by comparing selected facial features from the image with a facial database. For this purpose, a fusion of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) is used. These algorithms identify facial features by extracting landmarks, or features, from an image of the subject's face; these features are then used to search for other images with matching features. Three different types of attacks were considered:
- Print: illegal access attempts are carried out with hard copies of high-resolution digital photographs of the genuine users.
- Mobile: the attacks are performed using photos and videos taken with an iPhone and displayed on the iPhone screen.
- Highdef: similar to the mobile subset, but the photos and videos are displayed on an iPad screen with resolution 1024 × 768.
Such a variety of real and fake acquisition scenarios and conditions (unwanted changes in the background, different sizes of the heads, a black frame due to an imperfect fitting of the attack media on the capturing device screen) makes the REPLAY-ATTACK DB a unique benchmark for testing anti-spoofing techniques for face-based systems.
Face recognition is a set of two tasks:
- Face Identification: given a face image that belongs to a person in a database, tell whose image it is.
- Face Verification: given a face image that might not belong to the database, verify whether it is from the person it is claimed to be.
Fig. 3: Flow diagram for face module
A. Steps
- The input image is loaded and converted from RGB to grayscale.
- The image is preprocessed with a Gaussian filter to remove noise and some specific distortions.
- The gradient (horizontal and vertical texture) and the magnitude of the image are calculated.
- The edges of the input image are detected using the Canny method for edge detection.
- After the edges are detected, the preprocessed image is smoothed using a low-pass (average) filter.
- The smoothed image is then normalized in both horizontal and vertical directions.
- The normalized image is reconstructed using anisotropic diffusion.
- Finally, the reconstructed image is obtained.
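The steps above can be sketched in Python. This is a minimal illustration under assumptions (BT.601 grayscale weights, filter sizes, Perona-Malik parameters are all chosen here for demonstration), and the Canny edge step is left out since it is described separately below:

```python
import numpy as np
from scipy import ndimage

def anisotropic_diffusion(img, n_iter=5, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion: smooths within regions, preserves edges."""
    u = np.asarray(img, float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # conduction coefficients: near zero across strong edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

def preprocess_face(rgb):
    gray = np.asarray(rgb, float)[..., :3] @ [0.299, 0.587, 0.114]  # RGB -> gray
    den = ndimage.gaussian_filter(gray, sigma=1.0)          # Gaussian denoising
    gy, gx = np.gradient(den)                               # vertical & horizontal texture
    mag = np.hypot(gx, gy)                                  # gradient magnitude
    sm = ndimage.uniform_filter(den, size=3)                # low-pass (average) filter
    norm = (sm - sm.min()) / (sm.max() - sm.min() + 1e-12)  # normalisation
    return anisotropic_diffusion(norm), mag                 # reconstruction, gradients
```

`preprocess_face` returns both the diffusion-reconstructed image and the gradient magnitude map, matching the order of operations in the step list.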
B. Canny Method for Edge Detection
The Canny method uses an edge operator to detect a wide range of edges in images. Canny edge detection is a four-step process:
- A Gaussian blur is applied to clear the image of noise.
- A gradient operator is applied to obtain the gradients' intensity and direction.
- Non-maximum suppression keeps only those pixels that are local maxima along the gradient direction, thinning the edges.
- Hysteresis thresholding finds the start and end of the edges.
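The four steps can be sketched with numpy/scipy. This is a simplified illustration, not a production Canny implementation; the function name and threshold fractions are chosen here for demonstration:

```python
import numpy as np
from scipy import ndimage

def canny_sketch(img, sigma=1.4, low=0.1, high=0.3):
    """Simplified Canny edge detector; low/high are fractions of the
    maximum gradient magnitude (illustrative values)."""
    # 1) Gaussian blur to clear the image of noise
    sm = ndimage.gaussian_filter(np.asarray(img, float), sigma)
    # 2) gradient operator: intensity and direction
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    # 3) non-maximum suppression along the gradient direction,
    #    quantised to four directions (E-W, NE-SW, N-S, NW-SE)
    q = (((ang + np.pi) / (np.pi / 4)).astype(int)) % 4
    nms = np.zeros_like(mag)
    for d, (dy, dx) in enumerate([(0, 1), (-1, 1), (-1, 0), (-1, -1)]):
        fwd = np.roll(np.roll(mag, -dy, 0), -dx, 1)
        bwd = np.roll(np.roll(mag, dy, 0), dx, 1)
        keep = (q == d) & (mag >= fwd) & (mag >= bwd)
        nms[keep] = mag[keep]
    # 4) hysteresis thresholding: weak edges survive only if their
    #    connected component touches a strong edge
    lo_t, hi_t = low * nms.max(), high * nms.max()
    weak, strong = nms >= lo_t, nms >= hi_t
    labels, _ = ndimage.label(weak)
    hit = np.unique(labels[strong])
    return np.isin(labels, hit[hit > 0])
```

On a synthetic vertical step edge, the detector marks a thin line of pixels at the intensity transition and nothing elsewhere.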
C. Principal Component Analysis (PCA)
Principal Component Analysis (PCA) is one of the most successful techniques used in image recognition and compression.
- The purpose of PCA is to reduce the large dimensionality of the data space to the smaller intrinsic dimensionality of the feature space, which is needed to describe the data economically.
- PCA can perform prediction, redundancy removal, feature extraction and data compression.
In PCA, every image in the training set can be represented as a linear combination of weighted eigenvectors called eigenfaces. These eigenvectors are obtained from the covariance matrix of a training image set called the basis function. The weights are found after selecting a set of the most relevant eigenfaces. Recognition is performed by projecting the test image onto the subspace spanned by the eigenfaces, and classification is then done by measuring the Euclidean distance.
Fig. 4: Face recognition
The projected image is found from the mean and the eigenvectors:
- Compute the mean of the training data and the mean-centred data.
- Find the covariance matrix's eigenvectors using the surrogate-matrix method (a surrogate matrix built from the mean-subtracted data) to reduce the computational cost.
- Sort the eigenvectors of the surrogate matrix to select the most dominant ones.
- Normalize them and reduce the dimensionality, performing PCA to obtain the eigenfaces.
- Project the training and test images into the eigenspace.
- Recognize images using the Euclidean distance: calculate the distance between the projected test image and the projection of every centred training image.
- The test image is said to have minimum distance to its corresponding image in the training database, so the recognized identity is the one whose distance is minimal, generating the desired result.
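The recipe above, including the surrogate-matrix trick, can be sketched as follows (function names and the random toy data are illustrative; a real system would use flattened face images):

```python
import numpy as np

def train_eigenfaces(images, k):
    """images: (n, d) matrix, one flattened face per row; keep k eigenfaces."""
    mean = images.mean(axis=0)                 # mean of the training data
    A = images - mean                          # mean-centred data
    # surrogate matrix A A^T (n x n) instead of the covariance A^T A (d x d),
    # reducing the computational cost when d >> n
    S = A @ A.T
    vals, vecs = np.linalg.eigh(S)             # eigenvectors of the surrogate
    order = np.argsort(vals)[::-1][:k]         # most dominant eigenvectors first
    eigenfaces = A.T @ vecs[:, order]          # map back to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # normalisation
    weights = A @ eigenfaces                   # project the training images
    return mean, eigenfaces, weights

def recognise(test, mean, eigenfaces, weights):
    """Project the test image into eigenspace and return the index of the
    training image at minimum Euclidean distance."""
    w = (test - mean) @ eigenfaces
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

A query identical to a training face projects to the same point in eigenspace, so its distance is zero and it is recognised as itself.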
D. Linear Discriminant Analysis (LDA)
Linear discriminant analysis (LDA) is a generalization of Fisher's discriminant analysis, used to separate or distinguish two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification.
- There has been a tendency in the computer vision community to prefer LDA over PCA, mainly because LDA deals directly with discrimination between classes while PCA pays no attention to the underlying class structure.
- It has been shown, however, that when the training set is small, PCA can outperform LDA; LDA does not appear superior to PCA in that case.
- When the number of samples is large and representative for each class, LDA outperforms PCA.
- Only linearly separable classes remain separable after applying LDA.
E. Methodology
- Generate the set of MEFs for each image in the training set.
- Given a query image, compute its MEFs using the same procedure.
- Find the k closest neighbours for retrieval (e.g., using the Euclidean distance).
F. Most Expressive Features (MEF):
The features (projections) obtained using PCA.
G. Most Discriminating Features (MDF):
The features (projections) obtained using LDA.
V. IRIS RECOGNITION
Iris recognition is an automated method that identifies an individual from the irises of the person's eyes in images or video. It uses mathematical pattern-recognition techniques to detect the original sample of the authenticated person.
Fig. 5: Iris Recognition
For the iris modality the protection method is tested under two different attack scenarios, namely:
- Spoofing attack, and
- Attack with synthetic samples.
For each of the attacks a specific pair of real-fake databases is used. The databases are divided into totally independent (in terms of users) sets: a train set, used to train the classifier, and a test set, used to evaluate the performance of the protection method. The classifier used for the two scenarios is based on Quadratic Discriminant Analysis (QDA), which performs better than Linear Discriminant Analysis (LDA) while maintaining the simplicity of the system.
Fig. 6: Flow diagram for iris module
A. Steps
- The input image is loaded and converted from RGB to grey scale.
- The image is preprocessed using a Gaussian filter to remove noise and some specific distortions.
- The edges of the input image are detected using the Canny edge-detection method.
- The features are extracted using the Gabor filter method.
- The iris template is then generated.
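The preprocessing and feature-extraction steps can be sketched as follows. This is an illustrative NumPy/SciPy sketch, not the paper's code: a simple Sobel gradient-magnitude map stands in for the full Canny detector, the Gabor kernel is hand-rolled, and the function names are ours:

```python
import numpy as np
from scipy import ndimage

def rgb_to_gray(img):
    """Luma conversion using ITU-R BT.601 weights."""
    return img @ np.array([0.299, 0.587, 0.114])

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real part of a 2-D Gabor filter at orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def iris_features(rgb_image):
    gray = rgb_to_gray(rgb_image)
    smooth = ndimage.gaussian_filter(gray, sigma=1.5)   # Gaussian noise removal
    # gradient-magnitude edge map (simplified stand-in for Canny)
    gx = ndimage.sobel(smooth, axis=1)
    gy = ndimage.sobel(smooth, axis=0)
    edges = np.hypot(gx, gy)
    # Gabor responses at four orientations form the template
    feats = [ndimage.convolve(smooth, gabor_kernel(theta=t))
             for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
    template = (np.stack(feats) > 0).astype(np.uint8)   # binarised iris code
    return edges, template
```

Thresholding the sign of the filter responses is the usual way a binary iris code is derived from Gabor phase information.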
B. Quadratic Discriminant Analysis (QDA)
Quadratic discriminant analysis (QDA) is closely related to linear discriminant analysis (LDA), in that both assume the measurements from each class are normally distributed. Unlike LDA, however, QDA does not assume that the covariance of each class is identical. When the normality assumption holds, the best possible test for the hypothesis that a given measurement comes from a given class is the likelihood ratio test; in practice, sample means and covariance matrices substitute for the population quantities in this formula.
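A minimal from-scratch sketch of the QDA decision rule described above: each class gets its own mean and full covariance, and a sample is assigned to the class with the largest Gaussian log-likelihood plus log-prior. The `QDA` class here is our own illustration, not the exact classifier used in the experiments:

```python
import numpy as np

class QDA:
    """Minimal quadratic discriminant analysis: each class gets its own
    Gaussian (mean and full covariance), unlike LDA's shared covariance."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # small ridge keeps the covariance invertible
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params_[c] = (mu, np.linalg.inv(cov),
                               np.log(np.linalg.det(cov)),
                               np.log(len(Xc) / len(X)))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, inv, logdet, logprior = self.params_[c]
            d = X - mu
            # quadratic discriminant: log prior - 0.5 log|Sigma| - 0.5 Mahalanobis
            scores.append(logprior - 0.5 * logdet - 0.5 * np.sum(d @ inv * d, axis=1))
        return self.classes_[np.argmax(scores, axis=0)]
```

Because each class keeps its own covariance, the decision boundary is quadratic rather than linear, which is exactly what distinguishes QDA from LDA.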
C. Methodology
- Histogram equalization is performed.
- An iris template is created: a biometric template is generated from an iris in an eye image.
- Normalization is performed.
- Feature encoding is performed.
- The range of pupil and iris radii is defined.
- A scaling factor to speed up the Hough transform is defined.
- The iris is segmented: automatic segmentation of the iris region from an eye image is performed, which also isolates noise areas such as occluding eyelids and eyelashes.
- The iris boundary is found.
- An array for recording noise regions is formed.
- The top eyelid is found.
- The bottom eyelid is found.
- For CASIA images, eyelashes are eliminated by thresholding.
- The circular Hough transform is performed; the maximum in the Hough space gives the parameters of the circle.
- Canny edge detection is performed.
- Each row of the image is convolved with 1-D log-Gabor filters.
- Encoding generates a biometric template from the normalized iris region, together with a corresponding noise mask.
- The normalized region is convolved with Gabor filters.
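The circular Hough transform step can be sketched as a voting procedure over candidate centres and radii: every edge pixel votes for all centres at distance r from it, and the maximum of the accumulator gives the circle parameters. This is an illustrative NumPy sketch with a coarse angular sampling, not a production iris segmenter:

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Accumulate votes for circle centres over a range of radii and return
    the (row, col, radius) triple with the maximum in the Hough space."""
    H = np.zeros((len(radii), shape[0], shape[1]), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    for (y, x) in edge_points:
        for ri, r in enumerate(radii):
            # candidate centres lie on a circle of radius r around the edge pixel
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(H[ri], (cy[ok], cx[ok]), 1)   # cast the votes
    ri, cy, cx = np.unravel_index(np.argmax(H), H.shape)
    return cy, cx, radii[ri]
```

The true centre accumulates one vote from every edge pixel at the correct radius, so it dominates the accumulator even with moderate noise.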
VI. FINGERPRINT RECOGNITION
Fingerprint recognition or fingerprint authentication refers
to the automated method of verifying a match between two
human fingerprints. Fingerprints are one of many forms of
biometrics used to identify individuals and verify their
identity.
The analysis of fingerprints for matching purposes
generally requires the comparison of several features of the
print pattern. These include patterns, which are aggregate
characteristics of ridges, and minutia points, which are
unique features found within the patterns. It is also
necessary to know the structure and properties of human
skin in order to successfully employ some of the imaging
technologies.
Fig. 7: Fingerprint recognition
A. Patterns
The three basic patterns of fingerprint ridges are the arch,
loop, and whorl:
- Arch: the ridges enter from one side of the finger, rise in the center forming an arc, and then exit the other side of the finger.
- Loop: the ridges enter from one side of a finger, form a curve, and then exit on that same side.
- Whorl: ridges form circularly around a central point on the finger.
Fig. 8: Flow diagram for fingerprint module
B. Steps
- The input image is loaded and converted from RGB to grey scale.
- The image is preprocessed using a Gaussian filter to remove noise and some specific distortions.
- Histogram equalization is performed.
- Ridges are segmented by determining ridge frequency values across the image.
- The median frequency value used across the whole image is found.
- Minutiae extraction is done.
As in the iris experiments, the databases are divided into a train set, used to train the classifier, and a test set, used to evaluate the performance of the protection method. In order to generate totally unbiased results, there is no overlap between the two sets (i.e., the samples corresponding to each user are included in either the train set or the test set, but not both). The same QDA classifier already considered in the iris-related experiments is used here.
C. Methodology
- The image is renormalized so that the ridge regions have zero mean and unit standard deviation; image values are normalized to 0-1, or to a desired mean and variance, and a plot of the ridge orientation data is produced.
- The placement of the orientation vectors is determined.
- Frequencies calculated for non-ridge regions are masked out.
- The median frequency over all valid regions of the image is found.
- The ridge frequency is estimated within a small block of the fingerprint image, given an orientation within the block. The orientation is obtained by averaging the sines and cosines of the doubled angles before reconstructing the angle again; this avoids wraparound problems at the origin.
- The image block is rotated so that the ridges are vertical, and the result is cropped so that the rotated image does not contain any invalid regions. This prevents the projection down the columns from being corrupted.
- The columns are summed to obtain a projection of the grey values down the ridges.
- Peaks in the projected grey values are found by performing a grey-scale dilation and then finding where the dilation equals the original values.
- The spatial frequency of the ridges is determined by dividing the distance between the first and last peaks by (number of peaks - 1). If no peaks are detected, or the wavelength is outside the allowed bounds, the frequency for that block is set to 0.
- A fixed angle increment between filter orientations (in degrees) is chosen; it should divide evenly into 180.
- The filtering is finally applied.
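The projection-based ridge-frequency estimate described above can be sketched as follows. This is an illustrative SciPy sketch; `orientation` is assumed here to be the ridge angle measured from vertical, in radians, and the wavelength bounds are arbitrary placeholders:

```python
import numpy as np
from scipy import ndimage

def ridge_frequency(block, orientation, min_wave=3, max_wave=15):
    """Estimate the ridge frequency in one square fingerprint block using
    the classic rotate-project-find-peaks method."""
    # rotate so the ridges run vertically, then crop the valid central region
    rot = ndimage.rotate(block, -np.degrees(orientation),
                         reshape=False, mode='nearest')
    n = block.shape[0]
    crop = rot[n // 4: 3 * n // 4, n // 4: 3 * n // 4]
    proj = crop.sum(axis=0)                       # project grey values down the columns
    # peaks: where a grey-scale dilation equals the original projection
    dil = ndimage.grey_dilation(proj, size=5)
    peaks = np.where((dil == proj) & (proj > proj.mean()))[0]
    if len(peaks) < 2:
        return 0.0                                # no reliable peaks detected
    wavelength = (peaks[-1] - peaks[0]) / (len(peaks) - 1)
    if not (min_wave <= wavelength <= max_wave):
        return 0.0                                # wavelength outside allowed bounds
    return 1.0 / wavelength
```

Dividing the first-to-last peak distance by the number of gaps gives the average ridge wavelength; its reciprocal is the frequency used to tune the Gabor filters.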
VII. IMAGE QUALITY MEASURES
A. Error Sensitivity Measures
Traditional perceptual image quality assessment approaches
are based on measuring the errors (i.e., signal differences)
between the distorted and the reference images.
B. Correlation-Based Measures
Normalized Cross-Correlation (NXC), Mean Angle Similarity (MAS), and Mean Angle-Magnitude Similarity (MAMS).
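As an illustration, the normalized cross-correlation between a reference image and a distorted or test image can be computed as below. This is one common formulation (inner product normalized by the reference energy); exact definitions vary across papers:

```python
import numpy as np

def normalized_cross_correlation(ref, dist):
    """NXC between a reference image and a distorted image.

    Returns 1.0 when the two images are identical; values drift away
    from 1.0 as the distorted image deviates from the reference.
    """
    ref = np.asarray(ref, dtype=float).ravel()
    dist = np.asarray(dist, dtype=float).ravel()
    return float(np.sum(ref * dist) / np.sum(ref ** 2))
```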
C. Edge-Based Measures
Total Edge Difference (TED) and Total Corner Difference
(TCD).
D. Spectral Distance Measures
The Fourier transform is another traditional image
processing tool which has been applied to the field of image
quality assessment.
E. Gradient-Based Measures
Gradient Magnitude Error (GME) and Gradient Phase Error
(GPE).
F. Information Theoretic Measures
Visual Information Fidelity (VIF) and the Reduced
Reference Entropic Difference index (RRED).
VIII. APPROACHES
A. Distortion-Specific Approaches
These techniques depend on prior knowledge about the sort of visual quality losses caused by a certain distortion.
B. Training-Based Approaches
This approach can be applied to images by training a model on both clean and distorted images.
C. Natural Scene Statistic Approaches
Here, no distorted images are used to train the initial model; instead, distortion-free natural-scene images are used.
IX. EXPERIMENT AND RESULTS
The evaluation experimental protocol has been designed with a twofold objective:
- First, the multi-biometric dimension of the protection method is evaluated. For this purpose three biometric modalities have been considered in the experiments: face, iris, and fingerprint.
- Second, the multi-attack dimension of the protection method is evaluated: the method detects not only spoofing attacks but also fraudulent access attempts with synthetic or reconstructed samples.
A. Database
Databases used in this case are:
- Real database: CASIA-IrisV1. This dataset is publicly available through the Biometric Ideal Test (BIT) platform of the Chinese Academy of Sciences Institute of Automation (CASIA).
- Synthetic database: WVU-Synthetic Iris DB. Being a database that contains only fully synthetic data, it is not subject to any legal constraints and is publicly available through the CITeR research center.
B. Output
Real samples of Tom - face, iris and fingerprint
Fig. 11: Database image
Fake samples of Tom - fingerprint, face and iris
Fig. 15: Database image
C. Local Binary Pattern
Local binary patterns (LBP) are a type of feature used for classification in computer vision. A local neighbourhood around each pixel is used to define a texture and calculate the local binary pattern.
The LBP feature vector, in its simplest form, is
created in the following manner:
- Divide the examined window into cells (e.g., 16x16 pixels per cell).
- For each pixel in a cell, compare the pixel to each of its 8 neighbours (on its left-top, left-middle, left-bottom, right-top, etc.). Follow the pixels along a circle, i.e., clockwise or counter-clockwise.
- Where the centre pixel's value is greater than the neighbour's value, write "0"; otherwise, write "1". This gives an 8-digit binary number (which is usually converted to decimal for convenience).
- Compute the histogram, over the cell, of the frequency of each "number" occurring (i.e., each combination of which pixels are smaller and which are greater than the centre).
- Optionally normalize the histogram.
- Concatenate the (normalized) histograms of all cells. This gives the feature vector for the window.
The feature vector can then be processed using a support vector machine or some other machine-learning algorithm to classify images. Such classifiers can be used for face recognition or texture analysis.
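The steps above can be sketched directly in NumPy. This is an illustrative implementation with hypothetical function names; the neighbour-vs-centre comparison follows the convention just described, where a neighbour greater than or equal to the centre contributes a 1 bit:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP code for each interior pixel of a grey-scale image."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                      # centre pixels
    # the 8 neighbours, traversed clockwise starting from the top-left
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((nb >= c).astype(np.uint8) << bit)   # neighbour >= centre -> 1
    return code

def lbp_histogram(gray, cell=16):
    """Concatenated, normalised per-cell LBP histograms (the feature vector)."""
    code = lbp_image(gray)
    feats = []
    for i in range(0, code.shape[0] - cell + 1, cell):
        for j in range(0, code.shape[1] - cell + 1, cell):
            h, _ = np.histogram(code[i:i + cell, j:j + cell],
                                bins=256, range=(0, 256))
            feats.append(h / h.sum())      # normalise each cell histogram
    return np.concatenate(feats)           # feature vector for the window
```

The resulting vector can be fed directly to an SVM or nearest-neighbour classifier, as the section describes.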
X. CONCLUSION
The proposed system gives better recognition than the existing method and better security: the multi-biometric system raises the security level. A uni-biometric system is easy to attack, but a multi-biometric system is not, because an attacker cannot easily obtain multiple traits of the same individual; this makes it more secure than other systems. The system also supports multiple fingerprint scanners and multiple iris scanners. Existing recognition systems can only detect the presence of a sample; they are not able to distinguish between real and fake samples. The proposed system overcomes this limitation.
REFERENCES
[1] J. Galbally, S. Marcel, and J. Fierrez, “Image quality assessment for fake biometric detection: Application to iris, fingerprint, and face recognition,” IEEE Trans. Image Process., vol. 23, no. 2, Feb. 2014.
[2] M. G. Martini, C. T. Hewage, and B. Villarini, “Image quality assessment based on edge preservation,” Signal Process. Image Commun., vol. 27, no. 8, pp. 875–882, 2012.
[3] J. Galbally, C. McCool, J. Fierrez, S. Marcel, and J.
Ortega-Garcia, “On the vulnerability of face
verification systems to hill-climbing attacks,” Pattern
Recognit., vol. 43, no. 3, pp. 1027–1038, 2010
[4] J. Galbally, F. Alonso-Fernandez, J. Fierrez, and J. Ortega-Garcia, “A high performance fingerprint liveness detection method based on quality related features,” Future Generat. Comput. Syst., vol. 28, no. 1, pp. 311–321, 2012.
[5] J. Galbally, J. Ortiz-Lopez, J. Fierrez, and J. Ortega-
Garcia, “Iris liveness detection based on quality related
features,” in Proc. 5th IAPR ICB, Mar./Apr. 2012, pp.
271–276.
[6] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. New York, NY, USA: Springer-Verlag, 2009.
[7] R. Cappelli, D. Maio, A. Lumini, and D. Maltoni, “Fingerprint image reconstruction from standard templates,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1489–1503, Sep. 2007.