The document discusses the design and implementation of a face recognition system using principal component analysis (PCA). It includes sections on objectives, tools used, analysis, design, testing, snapshots, conclusion, and future enhancements. The key aspects are:
1. PCA is used to extract eigenfaces from a set of training images and represent faces in a lower-dimensional space.
2. In the design, mathematical concepts like variance, covariance, and eigenvalues/eigenvectors are explained which form the basis of the PCA algorithm.
3. The PCA algorithm involves computing the average face, covariance matrix, eigenvectors/values to derive the principal components and construct eigenfaces for classification.
4. Testing involves projecting new face images onto the eigenfaces to obtain their weight vectors, then classifying them by their distance to the stored training projections.
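The PCA steps listed above (average face, covariance, eigenvectors, projection) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the document's actual implementation; function and variable names are assumptions. It uses the standard trick of eigendecomposing the small n×n matrix A·Aᵀ instead of the much larger pixel-by-pixel covariance matrix:

```python
import numpy as np

def compute_eigenfaces(images, num_components=4):
    """Compute eigenfaces from flattened training images.

    images: (n_samples, n_pixels) array. Eigendecomposes the small
    (n_samples x n_samples) matrix A A^T, then maps each eigenvector
    back to pixel space to obtain an eigenface.
    """
    mean_face = images.mean(axis=0)
    A = images - mean_face                          # centred data, shape (n, p)
    small_cov = A @ A.T / len(images)               # n x n surrogate covariance
    eigvals, eigvecs = np.linalg.eigh(small_cov)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    eigenfaces = (A.T @ eigvecs[:, order]).T        # shape (k, p), pixel space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Project a flattened face into the low-dimensional eigenface space."""
    return eigenfaces @ (face - mean_face)
```

A new face would be classified by computing `project(...)` for it and comparing the resulting weight vector against the stored training projections.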
Face Detection and Recognition System (FDRS) is a biometric technology that uses inherent physiological features of the human face for identity recognition. Unlike a card or token, these features cannot be misplaced or lost, which makes the technology convenient and safe to use.
The following resources come from the 2009/10 BSc (Hons) in Multimedia Technology (course number 2ELE0075) from the University of Hertfordshire. All the mini projects are designed as level two modules of the undergraduate programmes.
The objectives of this project are to demonstrate abilities to:
• Handle camera setup, calibrate and capture still and video faces
• Pre-process images and extract features
• Perform face recognition by a) using existing methods and b) trying new techniques.
This project requires the students to apply their abilities to handle image-capture hardware and software. Since this is an active area of research, students will need to perform a literature survey and discuss (through brainstorming sessions) the performance characteristics of existing methods. In addition, they will need to design and implement pre-processing and recognition code leading to face recognition.
Automated Attendance System Using Face Recognition (IRJET Journal)
This document describes an automated attendance system using face recognition. The system uses image capture to take photos of students entering the classroom. It then uses the Viola-Jones algorithm for face detection and PCA for feature selection and SVM for classification to recognize students' faces and mark their attendance automatically. When compared to traditional attendance methods, this system saves time and helps monitor students. It discusses related work using RFID, fingerprints, and iris recognition for attendance systems. It outlines the proposed system's modules for image capture, face detection, preprocessing, database development, and postprocessing. Finally, it discusses results, conclusions, and opportunities for future work to improve recognition rates under various conditions.
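The recognition pipeline described above (PCA for feature selection, SVM for classification) can be sketched with scikit-learn. This is a hedged illustration under assumed parameters, not the paper's implementation; synthetic vectors stand in for the flattened face crops that the Viola-Jones detection step would produce:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for flattened, pre-cropped face images;
# class labels stand in for student identities.
X, y = make_classification(n_samples=120, n_features=100, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

model = Pipeline([
    ("pca", PCA(n_components=20, whiten=True, random_state=0)),  # feature selection
    ("svm", SVC(kernel="rbf", C=10.0)),                          # classification
])
model.fit(X[:100], y[:100])
print("held-out accuracy:", model.score(X[100:], y[100:]))
```

In the described system, the predicted identity would then be written to the attendance database for that class session.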
INTRODUCTION
FACE RECOGNITION
CAPTURING OF IMAGE BY STANDARD VIDEO CAMERAS
COMPONENTS OF FACE RECOGNITION SYSTEMS
IMPLEMENTATION OF FACE RECOGNITION TECHNOLOGY
PERFORMANCE
SOFTWARE
ADVANTAGES AND DISADVANTAGES
APPLICATIONS
CONCLUSION
Computer vision is the automation of human visual perception to allow computers to analyze and understand digital images. The goal is to emulate the human visual system through techniques like deep learning. Computer vision involves image acquisition, processing, and analysis to interpret images beyond just recording them. It has applications in areas like object detection, facial recognition, medical imaging, and self-driving cars. While it provides advantages like unique customer experiences, it also raises privacy concerns regarding how the data used is collected and stored.
This documentation provides a brief insight into a face recognition-based attendance system using neural networks, described in terms of product architecture, and can be used for educational purposes.
The document presents a project on face recognition using a Raspberry Pi. It discusses using OpenCV for face detection with Haar cascades and eigenfaces for face recognition with PCA. The system consists of a Raspberry Pi connected to a camera. Faces detected in images are compared to a database of stored faces to recognize individuals and control access to a gate by opening or closing it. The project aims to develop a basic access control system using face recognition for security purposes.
The document summarizes an OpenCV based image processing attendance system. It discusses using OpenCV to detect faces in images and recognize faces by comparing features to a database. The key steps are face detection using Viola-Jones detection, face recognition using eigenfaces generated by principal component analysis to project faces into "face space", and measuring similarity by distance between projections.
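The last step above, measuring similarity by distance between projections in "face space", amounts to a nearest-neighbour search. A minimal sketch (names and the optional rejection threshold are assumptions, not part of the summarized system):

```python
import numpy as np

def nearest_face(probe_weights, gallery_weights, labels, threshold=None):
    """Match a probe's eigenface weights against a gallery by Euclidean
    distance. Returns (label, distance); returns (None, distance) when the
    best match exceeds an optional rejection threshold (unknown face)."""
    dists = np.linalg.norm(gallery_weights - probe_weights, axis=1)
    i = int(np.argmin(dists))
    if threshold is not None and dists[i] > threshold:
        return None, float(dists[i])
    return labels[i], float(dists[i])
```

For attendance, the returned label would mark the matched student as present; a `None` result would leave the frame unmatched.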
This document presents a proposed approach for biometric authentication using facial vein patterns captured in thermal images. The system aims to address limitations of existing identification systems. It involves extracting thermal signatures from registered infrared face images using morphological operators and directional/diffusional filters. Signatures are matched against templates for authentication. The proposed approach is intended to provide a more secure and accurate identification system compared to alternatives that are sensitive to factors like lighting. It involves modules for image registration, face segmentation, signature extraction, and feature matching between extracted signatures and stored templates. Results show an average accuracy of 87.16-94.63% for the matching algorithm.
This document provides an overview of facial recognition technology. It discusses the history of facial recognition, how the technology works by detecting nodal points on faces and creating faceprints for identification. It also covers implementations, comparing images to templates to verify or identify individuals, and applications in security and surveillance. Strengths are its non-invasive nature, but it can be impacted by changes in appearance.
1. The document discusses face recognition using an eigenface approach, which uses principal component analysis to extract features from a database of faces to generate eigenfaces that can be used to identify unknown faces.
2. The eigenface approach takes into account the entire face for recognition and is relatively insensitive to small changes in faces. It is faster, simpler, and has better learning capabilities compared to other approaches.
3. Some limitations are that accuracy is affected if lighting and face position vary greatly, it only works with grayscale images, and noisy or partially occluded faces decrease recognition performance.
Face recognition technology may help solve problems with identity verification by analyzing facial features instead of passwords or pins. The document outlines the key stages of face recognition systems including data acquisition, input processing, and image classification. It also discusses advantages like convenience and ease of use, as well as limitations such as an inability to distinguish identical twins. Potential applications are identified in government, security, and commercial sectors.
The Recon Outpost system is designed to provide home-security users and investigators with tools for examining their surroundings through real-time video data. The system can detect and identify faces biometrically in live video and provide real-time surveillance in adverse weather conditions.
This document summarizes a student presentation on a face recognition lecture attendance system. The system uses image processing to recognize students' faces in a high-definition camera feed and compares them against a database to take attendance. It is controlled by the faculty member, who instructs the system to start and end recording. The system is intended to verify that students remain for the entire lecture session and also to function as surveillance. At the end, it returns a full attendance report to a central database. Class, activity, sequence, and use case diagrams are also presented to depict the system workflow and actors.
The document proposes developing Android applications to sense emotions using smartphones for better health and human-machine interactions. It discusses detecting emotions through passive sensors like cameras, microphones, and accelerometers that can capture facial expressions, speech, heart rate without interpreting input. Recognition involves extracting meaningful patterns from sensor data using techniques like speech recognition, facial expression detection to produce labels or inference algorithms. Specific techniques are discussed for recognizing emotions from speech, facial expressions based on the Facial Action Coding System, and heart rate variability. The conclusion states that understanding emotions with smartphones can help people succeed and make research easier.
This document provides an overview of facial recognition technology. It discusses the history of facial recognition, how the technology works, its implementation which involves image acquisition, processing, distinctive characteristic location and template matching. It also outlines the strengths and weaknesses of facial recognition as well as its applications in areas like border control, computer security, and banking. While facial recognition provides advantages like convenience and easy use, it also has disadvantages such as being impacted by changes in user appearance.
A presentation on Image Recognition, the basic definition and working of Image Recognition, Edge Detection, Neural Networks, use of Convolutional Neural Network in Image Recognition, Applications, Future Scope and Conclusion
This document discusses face detection and recognition techniques. It introduces the problems of detecting where a face is located in an image (face detection) and identifying who the face belongs to (face recognition). It then describes Viola and Jones' approach which uses AdaBoost learning on Haar-like features computed quickly using integral images to build a classifier cascade that can discard non-face regions and focus on potential face areas. Key steps involve using integral images and Haar-like features for fast computation, AdaBoost for feature selection, and a classifier cascade for efficient scanning.
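The integral-image trick that makes Haar-like features fast, as described above, can be shown directly: once the summed-area table is built, any rectangle sum takes four lookups regardless of size. A minimal sketch (function names are assumptions):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column of padding, so
    ii[y, x] = sum of img[:y, :x] and rectangle sums need no edge cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle at top-left (x, y): four lookups, O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """A two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```

In the Viola-Jones detector, AdaBoost selects a small subset of such features, and the cascade evaluates the cheapest ones first so most non-face windows are rejected after only a few lookups.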
This document summarizes a face recognition attendance system project. The project uses face recognition technology to take attendance by comparing captured images to stored student records. It has a completed status. The methodology follows a waterfall model. System diagrams include context, data flow, and architecture diagrams. The database stores student data like name, roll number, attendance, and captured images. The system allows for student registration by capturing images, training the model, and recognizing faces to mark attendance. Developing this project provided experience with real-world software development processes.
Face recognition is a type of biometric software that uses analysis of facial patterns to identify individuals. It has various applications including security, law enforcement, and social media photo tagging. The technology works by measuring nodal points on faces like eye and nose position to create unique numerical faceprints for identification and verification. While effective, face recognition depends on clear images and has limitations with expressions, lighting, or obscured faces. It is increasingly being implemented in areas like access control, immigration, and banking due to lower costs.
The document discusses using facial recognition for attendance tracking in a school setting. It proposes developing a system that uses real-time face detection and Principal Component Analysis to match detected faces to staff members and automatically record their attendance. This would eliminate the manual and time-consuming process of logging attendance. The system would enroll staff faces during a one-time process and then identify and update their attendance in a database system in real-time. Research shows this type of automatic attendance tracking outperforms manual systems and provides more efficient leave and interface management.
A facial recognition system uses computer applications to identify or verify a person from images or video by comparing facial features to a database. It can be used for security systems and is similar to other biometrics like fingerprints. Some key parts of faces used for comparison include the distance between the eyes, width of the nose, and structure of cheek bones. Algorithms continue improving to account for challenges like changes in lighting or facial expressions. Facial recognition has various applications and is expected to become more widespread and integrated into security and social networks in the future.
Automatic Attendance System Using Facial Recognition (Nikyaa7)
It is a biometric-based app, which is gradually evolving into a universal biometric solution requiring virtually zero effort from the user when compared with other biometric options.
This document outlines a research project on designing an automatic system to distinguish facial expressions. It presents an introduction discussing the importance and challenges of facial expression recognition. It provides an outline of the proposed system including aims to use programming for design and implementation. It discusses the basic structure of facial expression analysis and concludes the objective is to analyze facial expressions through steps like feature extraction and expression classification.
This document provides a software requirements specification for a Smart Attendance System application. The application will use facial recognition technology to mark attendance for students present in class lectures. It will capture faces from existing cameras in the classroom and identify students in real-time video feeds. The system will allow administrators to retrieve and modify attendance records. The document outlines requirements, interfaces, functionalities, constraints, and design diagrams for the application.
Presentation on Face detection and recognition - Credits goes to Mr Shriram, "https://www.hackster.io/sriram17ei/facial-recognition-opencv-python-9bc724"
Hand Written Character Recognition Using Neural Networks (Chiranjeevi Adi)
This document discusses a project to develop a handwritten character recognition system using a neural network. It will take handwritten English characters as input and recognize the patterns using a trained neural network. The system aims to recognize individual characters as well as classify them into groups. It will first preprocess, segment, extract features from, and then classify the input characters using the neural network. The document reviews several existing approaches to handwritten character recognition and the use of gradient and edge-based feature extraction with neural networks. It defines the objectives and methods for the proposed system, which will involve preprocessing, segmentation, feature extraction, and classification/recognition steps. Finally, it outlines the hardware and software requirements to implement the system as a MATLAB application.
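The pipeline described above (preprocess, segment, extract features, classify with a neural network) can be sketched in Python with scikit-learn rather than the MATLAB implementation the project specifies; this is an illustrative sketch in which the built-in 8×8 digit images stand in for segmented handwritten characters:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# 8x8 digit images stand in for segmented handwritten characters.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),                    # preprocessing step
    ("mlp", MLPClassifier(hidden_layer_sizes=(64,), # neural-network classifier
                          max_iter=500, random_state=0)),
])
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

A full system would precede this with the project's segmentation step, cutting each scanned page into per-character images before classification.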
This document outlines the syllabus for a Software Engineering course, including 11 topics that will be covered over several hours: Introduction to Software Engineering, Software Design, Using APIs, Software Tools and Environments, Software Processes, Software Requirements and Specifications, Software Validation, Software Evolution, Software Project Management, Formal Methods, and Specialized Systems Development. The main texts to be used are listed as two Software Engineering books by Sommerville and Pressman.
The document proposes a reliable fingerprint matching system using filter-based and Euclidean distance algorithms. It aims to improve accuracy of fingerprint matching by addressing issues caused by fingertip surface conditions and image quality. The proposed system extracts minutiae points using Gabor filters and matches fingerprints based on minutiae configuration and pore distances calculated using k-nearest neighbors algorithm. Testing on 20 fingerprints showed an average matching accuracy of 95-99% using this approach.
Artificial Neural Network / Handwritten Character Recognition (Dr. Uday Saikia)
1. Overview
2. Development of System
3. GCR Model
4. Proposed Model
5. Background Information
6. Preprocessing
7. Architecture
8. ANN (Artificial Neural Network)
9. How the Human Brain Learns
10. Synapse
11. The Neuron Model
12. A Typical Feed-forward Neural Network Model
13. The Neural Network
14. Training of Characters Using Neural Networks
15. Regression of Trained Neural Networks
16. Training State of Neural Networks
17. Graphical User Interface
The document describes the design of an embedded iris recognition system based on an ARM processor. The hardware system includes an ARM S3C2410 processor, OV7620 USB camera, IR LEDs for illumination, and TFT LCD. The system can acquire iris images, store them in NAND flash memory, and perform iris recognition processing on the ARM to output results to the LCD. It uses embedded Linux for a small footprint and runs identification mode to search pre-stored templates for a match.
- Domino's and Pizza Hut entered the Indian market in 1996. Both are US-based fast food chains that faced competition from local street food, restaurants, and other chains.
- Between 1996-2000, Domino's grew rapidly from 1 outlet to over 100, gaining a majority of the market share, while Pizza Hut grew more slowly to just 19 outlets.
- Domino's focused on fast home delivery within 30 minutes and establishing brand reputation, while Pizza Hut emphasized the dining experience.
Facial recognition systems use computer algorithms to identify or verify people from digital images or video by analyzing patterns in their faces. The document traces the development of these systems from early work in the 1970s to modern applications. It describes different types of facial recognition techniques and provides examples of software using the technology. The document also summarizes the results of an online survey about public awareness and interest in using facial recognition. It concludes by noting improvements in accuracy over time but also ongoing challenges regarding error rates, privacy, and changes to facial features.
Face recognition technology uses physiological biometrics to uniquely identify individuals based on measurements and data derived from their faces. It works by enrolling users through facial image capture and template generation, then performing matching of live facial images against stored templates for identification or verification. While fast and convenient, face recognition has limitations in accuracy depending on lighting, facial expressions, and angle of capture. It has applications in security, law enforcement, and commercial identity verification.
Face recognition technology uses biometrics to automatically recognize individuals or verify their identity based on unique measurable characteristics of the human face. It analyzes 80 landmarks on the face such as distance between eyes, width of nose, cheekbones, and jawline. Face recognition is commonly used for identification from large crowds, verification for credit cards and passports, and does not require physical contact or specialized interpretation of results. Common methods of face recognition include eigenface analysis using principal component analysis to extract features from faces and match new images to those in a database. Recent applications include uses for immigration, security, and targeted advertising based on facial analysis.
User interface design: definitions, processes and principlesDavid Little
This document provides an overview of user interface design, including definitions, processes, and principles. It defines a user interface as the part of a computer system that users interact with to complete tasks. User-centered design is discussed as an approach that focuses on research into user behaviors and goals in order to design appropriate tools to enable users to achieve their objectives. Design principles like simplicity, structure, visibility, consistency, tolerance, and feedback are outlined.
This document discusses human-computer interfaces (HCI). It defines HCI as the process of information transfer between users and machines, and how users see and interact with computer systems. The document outlines different types of interfaces like command line, menu driven, and graphical user interfaces. It also discusses advances in HCI including wearable, wireless, and virtual devices. Multimodal interfaces that combine multiple input modes are presented as beneficial for disabled users.
The document describes the requirements for an ATM network software system. It allows customers to complete banking transactions through off-premise ATMs. The software must interface with individual bank computers to process transactions. Key requirements include supporting account balance inquiries, withdrawals, and transfers according to each bank's business rules while ensuring security of customer authentication and funds. The system must also have high availability, safety protections, and handle concurrent access to accounts correctly.
The document summarizes a design seminar project on human face identification. The objectives of the project were to develop a computational model for face recognition that can work under varying poses and apply it to problems like criminal identification, security, and image processing. The research methodology used eigenface methods based on information theory. The project involved developing a face identification system with features like adding images to a database, clipping images, updating details, and searching for matches. It provides screenshots of the system interface and discusses the software and hardware requirements and limitations of the approach. The conclusion states that the system can efficiently find faces without exhaustive searching and face recognition will have many applications in smart environments.
Fingerprints have been used for identification since 1882. There are three main fingerprint patterns: loops, whorls, and arches. Loops make up 65% of fingerprints, whorls 30%, and arches 5%. Fingerprints are identified by features called minutiae including bifurcations, endings, and cores. There are two main techniques for fingerprint matching: minutiae-based which matches placement of minutiae points, and correlation-based which can overcome difficulties of minutiae-based matching. Fingerprints are captured using either optical or capacitive sensors and processed using image algorithms. Fingerprint identification has advantages of high accuracy, economy, and standardization but disadvantages of potential intrusiveness and errors from dirty or
This document provides an introduction to using the ithink software. It explains that the software can be used to render, simulate, analyze and communicate mental models. It then walks through an example of modeling a ".com" startup business. The example shows how to:
1. Render the initial model by creating a "Stock" to represent Subscription Customers and generating revenue "Flow" from those customers.
2. Numerate the model by adding variables and equations.
3. Simulate the model to see how the stocks and flows change over time and analyze the results.
4. Communicate insights from the model by creating an interface that allows others to interact with and explore the model.
The
Image Classification and Annotation Using Deep LearningIRJET Journal
This document presents a new deep learning model for jointly performing image classification and annotation. The model uses a convolutional neural network (CNN) to extract features from images and classify semantic objects. It then annotates the images based on the identified objects. The model is evaluated on standard datasets like CIFAR-10, CIFAR-100 as well as a new dataset collected by the authors. Results show the model achieves comparable or better performance than baseline methods, while also enabling fast image annotation. A novel scalable implementation allows annotating large datasets within seconds.
Apple makes it really easy to get started with Machine Learning as a developer. See how you can easily use Create ML and Turi Create to train Machine Learning models and use them in your iOS apps.
Smart Doorbell System Based on Face RecognitionIRJET Journal
1. The document describes a smart doorbell system based on face recognition using a Raspberry Pi board. The system uses OpenCV to perform face detection, feature extraction, and recognition.
2. It compares two face recognition algorithms - Eigenfaces and Independent Component Analysis (ICA). The system is designed for low power consumption, optimized resources, and faster speed.
3. The document outlines the system design, including enrolling faces into a training database, preprocessing images, performing face detection and feature extraction, and recognizing faces by comparing extracted features to the training database. It concludes that ICA provides better recognition accuracy than Eigenfaces.
This document describes a facial recognition and biometric security system called Digiyathra that is intended to streamline airport security checks. It would allow passengers to complete check-in, bag drop, and boarding using only their face as identification. During online ticket booking, passengers would submit a passport photo that would be added to a database and used for verification at various points throughout their journey. This system aims to accelerate passenger throughput while reducing costs by minimizing the need for paper-based ID checks. It provides details on how facial recognition works, describing the five main steps of detection, analysis, template generation, matching, and result determination. Local Binary Patterns Histograms are discussed as the specific method used to recognize and identify faces within this
This document describes an emotion recognition system that analyzes crowd behavior using machine learning and image processing techniques. The system works in three stages: 1) face detection using Haar cascade algorithms, 2) feature extraction and emotion recognition by converting images to grayscale and using CNN models, and 3) sending alerts based on recognized emotions like anger. The system was able to accurately detect faces and recognize emotions like happy, sad, anger, etc. It sent alerts via Twilio if high levels of anger were detected, allowing for analysis of crowd behavior and monitoring of public safety.
IRJET- Computerized Attendance System using Face RecognitionIRJET Journal
1) The document proposes an automated attendance system using face recognition for educational institutions to replace traditional manual attendance marking.
2) The system uses OpenCV with face detection algorithms like Viola-Jones and PCA to detect faces, create face databases, and compare faces to identities to automatically mark attendance in an excel file.
3) During use, faces will be detected in images from a webcam, compared to stored databases to identify individuals, and their attendance marked electronically without needing physical interaction like ID cards.
IRJET- Computerized Attendance System using Face RecognitionIRJET Journal
1. The document describes a computerized attendance system using face recognition for educational institutions. It uses OpenCV with face recognition and detection algorithms like Viola-Jones, PCA, and Eigenfaces.
2. Faces are detected using Viola-Jones algorithm. PCA is used to train detected faces and create a database of known faces. During attendance, faces are compared to the database to identify individuals and mark attendance automatically in an Excel file.
3. This automated system provides benefits over manual attendance systems by saving time, reducing errors, and preventing forgery. It is a more convenient and accurate way to take attendance.
IRJET-Computer Aided Touchless Palmprint Recognition Using SiftIRJET Journal
This document discusses a computer aided touchless palmprint recognition system using Scale Invariant Feature Transform (SIFT). SIFT is used to extract features from touchless palmprint images that are invariant to changes in scale, rotation, and translation. The system involves preprocessing images, extracting SIFT features, and matching features to recognize and authenticate individuals. An experiment was conducted using 16 real palmprint images with varying conditions. The system achieved 93.75% accuracy in recognition using SIFT features, demonstrating its effectiveness for touchless palmprint recognition compared to other approaches. Future work could explore using color information and developing algorithms to handle variations like cosmetics or injuries.
IRJET- 3-D Face Image Identification from Video Streaming using Map Reduc...IRJET Journal
This document summarizes a research paper that proposes using 3D face recognition techniques and Hadoop for large-scale face identification from video streams. It describes extracting faces from video frames, representing faces as 3D models with 15 distinguishing features, and using Hadoop for parallel processing to enable fast matching of input faces against a large database of faces. The Hadoop implementation includes map and reduce processes to distribute the face matching computations across multiple servers for improved performance on large datasets.
IRJET- Facial Expression Recognition using GPA AnalysisIRJET Journal
This document discusses a method for facial expression recognition using geometric feature analysis (GPA). The method involves preprocessing an input face image, extracting the skin pixels and facial features, and then using a support vector machine (SVM) classifier trained on geometric features to recognize the expression. Specifically, it performs skin mapping using a gray level co-occurrence matrix to isolate the face, extracts features like the eyes, nose and lips, and then inputs geometric relationships between these features into the SVM to classify the expression based on previous training data. The goal is to develop an automated system for facial expression recognition using digital image processing techniques.
Feature extraction is becoming popular in face recognition method. Face recognition is the interesting and growing area in real time applications. In last decades many of face recognitions methods has been developed. Feature extraction is the one of the emerging technique in the face recognition methods. In this method an attempt to show best faces recognition method. Here used different descriptors combination like LBP and SIFT, LBP and HOG for feature extraction. Using a single descriptor is difficult to address all variations so combining multiple features in common. Find LBP and SIFT features separately from the images and fuse them with a canonical correlation analysis and same procedure also done using LBP and HOG. The SIFT features have some limitations they don’t work well with lighting changes, quite slow, and mathematically complicated and computationally heavy. The combinations of HOG and LBP features make the system robust against some variations like illumination and expressions. Also, face recognition technique used a different classifier to extract the useful information from images to solve the problems. This paper is organized into four sections. Introduction in the first section. The second section describes feature descriptors and the third section describes proposed methods, final sections describes experiments result and conclusion phase.
IRJET- Face Detection and Recognition using OpenCVIRJET Journal
This document describes a face detection and recognition system using OpenCV and Python. The system has three main modules: detection, training, and recognition. The detection module uses a Haar cascade classifier to detect faces in images or video. In the training module, the detected face images are used to train a classifier using local binary patterns histograms. The recognition module then extracts features from new images and compares them to the trained classifier to recognize faces. Sample code is provided for the training, dataset collection, and face detection steps. The system provides a basic real-time face recognition capability with potential for improvement by adding preprocessing and more advanced features.
Lecture 5 from the COSC 426 Graduate course on Augmented Reality. This lecture talks about AR development tools and interaction styles. Taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury. August 9th 2013
IRJET - Face Recognition based Attendance SystemIRJET Journal
This document describes a face recognition-based attendance system. It begins with an introduction to face recognition and the challenges of implementing such a system in real-time. It then reviews related work on algorithms used for face detection (Haar cascade), feature extraction (Histogram of Oriented Gradients), and recognition (Convolutional Neural Networks). The proposed system is described as collecting a student database, extracting encodings from images using CNN, and comparing real-time detected faces to the database using HOG detection and Euclidean distance matching to mark attendance. Experimental results aimed to test recognition under different training, lighting, and pose conditions.
The document discusses the key steps in an AI project cycle:
1) Problem scoping involves understanding the problem, stakeholders, location, and reasons for solving it.
2) Data acquisition collects accurate and reliable structured or unstructured data from various sources.
3) Data exploration arranges and visualizes the data to understand trends and patterns using tools like charts and graphs.
4) Modelling creates algorithms and models by training them on large datasets to perform tasks intelligently.
5) Evaluation tests the project by comparing outputs to actual answers to identify areas for improvement.
This document discusses face detection techniques. It begins with an introduction that defines face detection and discusses why it is important and challenging. It then covers topics like image segmentation, face detection approaches, morphological image processing, and skin color-based face detection. The document analyzes literature on face detection methods and provides descriptions of techniques like thresholding, edge detection, region-based segmentation, and template matching. It also includes a case study on specific face detection software applications and concludes by summarizing the discussed techniques.
FORMALIZATION & DATA ABSTRACTION DURING USE CASE MODELING IN OBJECT ORIENTED ...cscpconf
In object oriented analysis and design, use cases represent the things of value that the system performs for its actors in UML and unified process. Use cases are not functions or features.
They allow us to get behavioral abstraction of the system to be. The purpose of the behavioral abstraction is to get to the heart of what a system must do, we must first focus on who (or what)
will use it, or be used by it. After we do this, we look at what the system must do for those users in order to do something useful. That is what exactly we expect from the use cases as the
behavioral abstraction. Apart from this fact use cases are the poor candidates for the data abstraction. Rather the do not have data abstraction. The main reason is it shows or describes
the sequence of events or actions performed by the actor or use case, it does not take data in to account. As we know in earlier stages of the development we believe in ‘what’ rather than
‘how’. ‘What’ does not need to include data whereas ‘how’ depicts the data. As use case moves around ‘what’ only we are not able to extract the data. So in order to incorporate data in use cases one must feel the need of data at the initial stages of the development. We have developed the technique to integrate data in to the uses cases. This paper is regarding our investigations to take care of data during early stages of the software development. The collected abstraction of data helps in the analysis and then assist in forming the attributes of the candidate classes. This makes sure that we will not miss any attribute that is required in the abstracted behavior using use cases. Formalization adds to the accuracy of the data abstraction. We have investigated object constraint language to perform better data abstraction during analysis & design in unified paradigm. In this paper we have presented our research regarding early stage data abstraction and its formalization.
Formalization & data abstraction during use case modeling in object oriented ...csandit
This document discusses formalization and data abstraction during use case modeling in object-oriented analysis and design. It provides background on use case modeling and describes how data can be abstracted from use case steps. The document then presents a case study on an e-retail system to demonstrate modeling use cases, actors, and their relationships. It also discusses using activity diagrams to represent use case flows and the Object Constraint Language to add formalism and accuracy to data abstraction during analysis and design.
1. Introduction
Humans are very good at recognizing faces and complex patterns, and even the passage of time does not affect this capability much. It would therefore help if computers became as robust as humans at face recognition. A face recognition system can help in many ways:
• Checking for criminal records.
• Enhancing security by using surveillance cameras in conjunction with a face recognition system.
• Finding lost children using images received from cameras fitted in public places.
• Knowing in advance if a VIP is entering a hotel.
• Detecting a criminal in a public place.
• Comparing an entity with a set of entities in different areas of science.
• Pattern recognition.
This project is a step towards developing a face recognition system that can recognize static images. It can be modified to work with dynamic images: the frames received from the camera are first converted into static images, and then the same procedure is applied to them. In that case, several other factors must also be considered, such as the distance between the camera and the person, the magnification factor, and the view (top, side, front).
2. Tools/Environment Used
Software Requirements:
Operating System : Windows operating system
Language : Java
Front-end tool : Swing
JDK : JDK 1.5 and above
Hardware Requirements:
Processor : Pentium processor of 400 MHz or higher.
RAM : Minimum 64 MB primary memory.
Hard disk : Minimum 1 GB of hard disk space.
Monitor : Preferably a color monitor (16-bit color or above).
Web camera.
Compact disk drive.
A keyboard and a mouse.
3. Analysis
Modules:
• Add Image/Registration
• Image Capture
• Login
• Eigenface Computation
• Identification
A module is a small part of our project, and modules play a very important role both in the project and in coding. In software engineering a module is treated as a small part of a system, whereas in a programming language it is a small part of the program, also called a function in some cases, which together with other modules constitutes the main program.
The importance of modules in software development is that they make it easy to understand what system we are developing and what its main uses are. During the project we may create many modules, and finally we combine them to form a system.
Module Description
Add Image/Registration
Add Image is a module concerned with adding an image along with the user id used for the login of the person whose image we are taking. We add an image by capturing it from the web camera and store it in our system. During registration four images are captured, and each image is stored four times, since a minimum of sixteen images is required by the comparison algorithm.
Image Capture Module
This module is used to capture an image using the web camera. It is written as a separate thread to avoid hanging the system. It is used to capture images in both the login module and the registration module.
Login
This module's function is to compare the captured image with the images stored in the system. It uses the Eigenface Computation described in the next module for the comparison.
Eigenface Computation
This module is used to compute the "face space" used for face recognition. The recognition is actually carried out in the FaceBundle object, but the preparation of such an object requires a lot of computation. The steps are:
* Compute an average face.
* Build a covariance matrix.
* Compute eigenvalues and eigenvectors.
* Select only the sixteen largest eigenvalues (and their corresponding eigenvectors).
* Compute the eigenfaces using these eigenvectors.
* Compute the eigenspace for our given images.
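The first preparation steps above (computing the average face and the per-image deviations from it) can be sketched in Java as follows. This is a minimal illustration; the class and method names are our own and are not taken from the project code, which operates on images captured from the web camera rather than on plain arrays.

```java
// Sketch of the "compute an average face" step: each face image is
// assumed to be flattened into a vector of pixel intensities.
public class AverageFace {

    // images[i] is the i-th training face stored as a flat pixel vector.
    // The average face is the pixel-wise mean over all training images.
    public static double[] averageFace(double[][] images) {
        int n = images[0].length;
        double[] avg = new double[n];
        for (double[] img : images)
            for (int p = 0; p < n; p++) avg[p] += img[p];
        for (int p = 0; p < n; p++) avg[p] /= images.length;
        return avg;
    }

    // Difference images: each face minus the average face. These feed
    // into the covariance matrix in the next step.
    public static double[][] differenceImages(double[][] images, double[] avg) {
        double[][] diff = new double[images.length][avg.length];
        for (int i = 0; i < images.length; i++)
            for (int p = 0; p < avg.length; p++)
                diff[i][p] = images[i][p] - avg[p];
        return diff;
    }
}
```

In the real system each pixel vector would be far longer (one entry per pixel), but the arithmetic is identical.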
Identification
This module takes the image from the module above and compares it against the images already in the database. If an image is matched, a success message is shown to the user.
Flow Diagram
Start → Action (Login or Register)
Login: Capture Image → Compare Image → Success Message / Failure Message
Register: Enter Login Id → Capture Image → Store → Success Message
Stages of face recognition:
• Face location detection
• Feature extraction
• Facial image classification
4. Design
4.1 Mathematical Background
This section illustrates the mathematical concepts that are the backbone of Principal Component Analysis. It is less important to remember the exact mechanics of the mathematical techniques than to understand the intuition behind them. The topics are covered independently of each other, and examples are given.
Variance, covariance, the covariance matrix, and eigenvectors and eigenvalues are the basis of the design algorithm.
a. Variance
The variance is a measure of the spread of data. Statisticians are usually concerned with taking a sample of a population. To use election polls as an example, the population is all the people in the country, whereas a sample is the subset of the population that the statisticians measure. The great thing about statistics is that by measuring only a sample of the population, we can estimate what the measurement would most likely be if we used the entire population.
Let's take an example:
X = [1 2 4 6 12 25 45 68 67 65 98]
We can simply use the symbol X to refer to this entire set of numbers. To refer to an individual number in this data set, we use a subscript on the symbol X to indicate a specific number. There are a number of things that we can calculate about a data set. For example, we can calculate the mean of the sample, given by the formula:
mean = (X1 + X2 + ... + Xn) / n
Unfortunately, the mean doesn't tell us a lot about the data except for a sort of middle point. For example, these two data sets have exactly the same mean (10), but are obviously quite different:
[0 8 12 20] and [8 9 11 12]
So what is different about these two sets? It is the spread of the data that is different. The variance is a measure of how spread out the data is, and it is closely related to the standard deviation (SD).
The SD is "the average distance from the mean of the data set to a point". The way to calculate it is to compute the squares of the distances from each data point to the mean of the set, add them all up, divide by n - 1, and take the positive square root. As a formula:
SD = sqrt( sum over i of (Xi - mean)^2 / (n - 1) )
The variance is simply the square of the standard deviation.
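The mean, variance, and standard deviation described above can be written in Java as follows. This is a small illustrative sketch (the class name is our own, not part of the project code), using the sample formulas with the n - 1 divisor exactly as given in the text.

```java
// Sample mean, variance, and standard deviation as defined above.
public class Stats {

    // mean = (sum of all values) / (number of values)
    public static double mean(double[] x) {
        double sum = 0;
        for (double v : x) sum += v;
        return sum / x.length;
    }

    // variance = sum of squared distances from the mean, divided by n - 1
    public static double variance(double[] x) {
        double m = mean(x), sumSq = 0;
        for (double v : x) sumSq += (v - m) * (v - m);
        return sumSq / (x.length - 1);
    }

    // standard deviation = positive square root of the variance
    public static double stdDev(double[] x) {
        return Math.sqrt(variance(x));
    }
}
```

Running this on the two example sets [0 8 12 20] and [8 9 11 12] confirms the point made above: both have mean 10, but the first has a much larger variance.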
b. Covariance
Variance and SD are purely one-dimensional. One-dimensional data sets could be the heights of all the people in the room, the marks for the last CSC378 exam, etc. However, many data sets have more than one dimension, and the aim of the statistical analysis of these data sets is usually to see if there is any relationship between the dimensions. For example, we might have as our data set both the height of all the students in a class and the mark they received for a paper. We could then perform statistical analysis to see if the height of a student has any effect on their mark. It is useful to have a measure of how much the dimensions vary from the mean with respect to each other.
Covariance is such a measure. It is always measured between two dimensions. If we calculate the covariance between one dimension and itself, we get the variance. So if we had a three-dimensional data set (x, y, z), then we could measure the covariance between the x and y dimensions, the x and z dimensions, and the y and z dimensions. Measuring the covariance between x and x, or y and y, or z and z, would give us the variance of the x, y, and z dimensions respectively.
The formula for covariance is very similar to the formula for variance:
cov(X, Y) = sum over i of (Xi - meanX)(Yi - meanY) / (n - 1)
How does this work? Let's use some example data. Imagine we have gone out into the world and collected some two-dimensional data: say we have asked a bunch of students how many hours in total they spent studying CSC309, and the mark they received. So we have two dimensions: the first is the H dimension, the hours studied, and the second is the M dimension, the mark received.
So what does the covariance between H and M tell us? The exact value is not as important as its sign (i.e. positive or negative). If the value is positive, that indicates that both dimensions increase together, meaning that, in general, as the number of hours of study increased, so did the final mark.
If the value is negative, then as one dimension increases the other decreases. Had we ended up with a negative covariance, it would mean the opposite: that as the number of hours of study increased, the final mark decreased.
In the last case, if the covariance is zero, it indicates that the two dimensions are uncorrelated, varying independently of each other.
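The covariance formula above can be sketched directly in Java. The hours/marks numbers below are made-up illustrative data in the spirit of the H and M example, not figures from the document.

```java
// Sample covariance between two dimensions, using the n - 1 divisor
// from the covariance formula above.
public class Covariance {

    // cov(X, Y) = sum over i of (Xi - meanX)(Yi - meanY) / (n - 1)
    public static double cov(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double s = 0;
        for (int i = 0; i < n; i++) s += (x[i] - mx) * (y[i] - my);
        return s / (n - 1);
    }
}
```

As the text notes, cov(X, X) is exactly the variance of X, and the sign of cov(H, M) tells us whether hours studied and marks move together or in opposite directions.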
c. The Covariance Matrix
A useful way to get all the possible covariance values between all the different dimensions is to calculate them all and put them in a matrix. An example: we will make up the covariance matrix for an imaginary three-dimensional data set, using the usual dimensions x, y and z. The covariance matrix then has 3 rows and 3 columns, and the values are:
      cov(x,x) cov(x,y) cov(x,z)
C =   cov(y,x) cov(y,y) cov(y,z)
      cov(z,x) cov(z,y) cov(z,z)
Points to note: down the main diagonal, the covariance value is between one of the dimensions and itself; these are the variances of that dimension. The other point is that since cov(a,b) = cov(b,a), the matrix is symmetric about the main diagonal.
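Building the full covariance matrix is just a matter of computing cov for every pair of dimensions. The sketch below (illustrative names, not project code) does exactly that; the symmetry and variance-on-the-diagonal properties noted above fall out automatically.

```java
// Builds the full covariance matrix for a data set given as
// dims[d][i] = value of dimension d for sample i.
public class CovMatrix {

    static double cov(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double s = 0;
        for (int i = 0; i < n; i++) s += (x[i] - mx) * (y[i] - my);
        return s / (n - 1);
    }

    public static double[][] covarianceMatrix(double[][] dims) {
        int d = dims.length;
        double[][] c = new double[d][d];
        for (int a = 0; a < d; a++)
            for (int b = 0; b < d; b++)
                c[a][b] = cov(dims[a], dims[b]);  // cov(a,a) is the variance
        return c;
    }
}
```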
d. Eigenvectors and Eigenvalues
If we multiply a square matrix with a vector, we get another vector that is transformed from its original position. Eigenvectors arise from the nature of that transformation. Imagine a transformation matrix that, when multiplied on the left, reflects vectors in the line y = x. Then a vector that lies on the line y = x is its own reflection. This vector (and all multiples of it, because it doesn't matter how long the vector is) is an eigenvector of that transformation matrix. Eigenvectors can only be found for square matrices, and not every square matrix has (real) eigenvectors. Given an n x n matrix that does have eigenvectors, there are at most n of them. Another property of eigenvectors is that even if we scale the vector by some amount before we multiply it, we still get the same multiple of it as a result; this is because scaling a vector only makes it longer without changing its direction.
Lastly, all the eigenvectors of a symmetric matrix (such as a covariance matrix) are perpendicular, i.e. at right angles to each other, no matter how many dimensions we have. (Another word for perpendicular, in math talk, is orthogonal.) This is important because it means that we can express the data in terms of these perpendicular eigenvectors, instead of expressing it in terms of the x and y axes. Every eigenvector has a value associated with it, which is called its eigenvalue. The principal eigenvectors are those with the highest eigenvalues associated with them.
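The eigenvector properties described above can be checked numerically. The sketch below uses the reflection-in-the-line-y=x example from the text, whose matrix is [[0,1],[1,0]]: the vector (1,1) lies on the line and is its own reflection (eigenvalue 1), while (1,-1) is flipped (eigenvalue -1). The class is our own illustration, not project code.

```java
// Verifies the defining eigenvector property A*v = lambda*v for 2x2 matrices.
public class EigenCheck {

    // Multiply a 2x2 matrix by a vector.
    static double[] mul(double[][] a, double[] v) {
        return new double[]{ a[0][0] * v[0] + a[0][1] * v[1],
                             a[1][0] * v[0] + a[1][1] * v[1] };
    }

    // True if A*v is (approximately) lambda*v, i.e. v is an eigenvector
    // of A with eigenvalue lambda.
    public static boolean isEigen(double[][] a, double[] v, double lambda) {
        double[] av = mul(a, v);
        return Math.abs(av[0] - lambda * v[0]) < 1e-9
            && Math.abs(av[1] - lambda * v[1]) < 1e-9;
    }
}
```

Note that scaling an eigenvector, e.g. (3,3) instead of (1,1), leaves the eigenvalue unchanged, and that the two eigenvectors (1,1) and (1,-1) of this symmetric matrix are orthogonal, just as the text states.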
4.2 PCA Algorithm
a. Eigenfaces Approach
Extract the relevant information in a face image [the principal components] and encode that information in a suitable data structure. For recognition, take the sample image, encode it in the same way, and compare it with the set of encoded images. In mathematical terms, we want to find the eigenvectors and eigenvalues of the covariance matrix of the images, where one image is just a single point in a high-dimensional space [n * n, where n * n are the dimensions of an image]. There can be many eigenvectors for a covariance matrix, but very few of them are the principal ones. Though each eigenvector can be used for finding a different amount of variation among the face images, we are only interested in the principal eigenvectors, because these account for the most substantial variations among a bunch of images; they show the most significant relationships between the data dimensions.
The eigenvectors with the highest eigenvalues are the principal components of the image set. We may lose some information if we ignore the components of lesser significance, but if their eigenvalues are small, we won't lose much. Using this set of eigenvectors we can construct eigenfaces.
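Keeping only the eigenvectors with the highest eigenvalues amounts to sorting the eigenvalues and taking the top k (sixteen, in this project). A small illustrative helper in Java (names are our own), assuming the eigenvalues are already available in an array:

```java
import java.util.Arrays;
import java.util.Comparator;

// Picks the indices of the k largest eigenvalues, so the corresponding
// eigenvectors can be kept and the rest dropped.
public class TopEigen {

    public static int[] largest(double[] eigenvalues, int k) {
        Integer[] idx = new Integer[eigenvalues.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort indices by eigenvalue, descending.
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> eigenvalues[i]).reversed());
        int[] out = new int[k];
        for (int i = 0; i < k; i++) out[i] = idx[i];
        return out;
    }
}
```

In the project, k = 16 and the selected eigenvectors become the eigenfaces spanning the face space.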
b. Finding Eigenfaces
(1) Collect a bunch [say 15] of sample face images. The dimensions of all images should be the same. An image of n * n pixels can be stored in an array of n * n values, which can be considered an image vector: Γ1, Γ2, ..., ΓM, where M is the number of images.
(2) Find the average image of the bunch of images: Ψ = (1/M) * (Γ1 + Γ2 + ... + ΓM).
(3) Find the deviated (difference) images: Φi = Γi - Ψ, for i = 1, ..., M.
(4) Calculate the covariance matrix: C = (1/M) * Σ Φi Φiᵀ = A Aᵀ, where A = [Φ1 Φ2 ... ΦM] is the matrix whose columns are the difference images.
But the problem with this approach is that we may not be able to complete this operation for a
bunch of images because covariance matrix will be very huge. For Example Covariance matrix
,where dimension of a image = 256 * 256, will consist of [256 * 256] rows and same numbers of
columns. So its very hard or may be practically impossible to store that matrix and finding that
matrix will require considerable computational requirements.
So for solving this problem we can first compute the matrix L.
And then find the eigen vectors [v] related to it
Eigen Vectors for Covariance matrix C can be found by
where
are the Eigen Vectors for C.
(5) Using these eigenvectors, we can construct the eigenfaces. We are interested only in the eigenvectors with high eigenvalues, so eigenvectors whose eigenvalues fall below a threshold can be dropped; we keep only those images that correspond to the highest eigenvalues. This set of images is called the face space. For doing this in Java, we have used the Colt linear algebra package. These are the steps involved in the implementation:
i) Find L = A^T * A [from step 4] and convert it into a DenseDoubleMatrix2D using the Colt matrix classes.
ii) Find the eigenvectors associated with it using the class cern.colt.matrix.linalg.EigenvalueDecomposition. This will be an M by M matrix [M = number of training images].
iii) By multiplying that with A [the difference-image matrix], we obtain the actual eigenvector matrix U of the covariance matrix of A. It will be of size X by M [where X is the total number of pixels in an image].
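Steps (1) to (5) can be sketched end to end. The project implements them in Java with Colt; the following is an illustrative NumPy version, with random arrays standing in for face images and toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 15, 16                    # 15 training "faces" of 16 x 16 pixels (toy sizes)
images = rng.random((M, n * n))  # step 1: each row is one flattened image vector

mean_face = images.mean(axis=0)  # step 2: average image (Psi)
A = (images - mean_face).T       # step 3: deviation vectors, one per column (n^2 x M)

# Step 4: instead of the huge n^2 x n^2 covariance A A^T,
# work with the small M x M matrix L = A^T A.
L = A.T @ A
eigvals, V = np.linalg.eigh(L)   # eigenvectors v of L

# Map back: u = A v is an eigenvector of the full covariance matrix.
U = A @ V                        # n^2 x M matrix
U /= np.linalg.norm(U, axis=0)   # normalise each column

# Step 5: keep only the eigenfaces with the largest eigenvalues (the "face space").
order = np.argsort(eigvals)[::-1]
eigenfaces = U[:, order[:8]]
print(eigenfaces.shape)  # (256, 8)
```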
c. Classifying Face Images
The eigenfaces derived from the previous section seem adequate for describing face images
under very controlled conditions, we decided to investigate their usefulness as a tool for face
recognition. Since the accurate reconstruction of the image is not a requirement, a smaller
number of eigenfaces are sufficient for the identification process. So identification becomes a
pattern recognition task.
Algorithm:
1. Convert image into a matrix [ ] so that all pixels of the test image are stored in a matrix of
256*256[rows] by 1 [column] size.
2. Find the weights associated with each training image. This operation can simply be performed by
Weight Matrix = TransposeOf(EigenVector-of-CovarianceMatrix) * DifferenceImageMatrix.
This matrix will be of size N by N, where N is the total number of face images. Each entry in a column then represents the weight of that particular image with respect to a particular eigenvector.
3. Project the test image into "face space" using the same operation as defined above. Since we are projecting a single image, we obtain a matrix of N rows by 1 column, for k = 1, 2, ..., N, where N is the total number of training images. Let us call this matrix the 'TestProjection' matrix.
4. Find the distance between each element of the TestProjection matrix and the corresponding element of the weight matrix. We obtain a new matrix of N rows by N columns.
5. Find the 2-norm of the matrix derived above. This will be a matrix of 1 row by N columns. Find the minimum of all the column values. If it is within some threshold, return that column number: it represents the image number and shows that the test image is nearest to that particular image in the training set. If the minimum value is above the threshold, the test image can be considered a new image that is not in our training set, and it can be added to the training set by applying the same procedure [mentioned in section 5.2]. The system is therefore a kind of learning system that automatically increases its knowledge whenever it encounters an unknown image [one it could not recognise].
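The classification steps above can be sketched in the same illustrative NumPy style (the name 'TestProjection' follows the text; the threshold and array sizes are invented for the example and are not the project's values):

```python
import numpy as np

rng = np.random.default_rng(1)
M, npix = 10, 64
train = rng.random((M, npix))     # flattened training images, one per row
mean_face = train.mean(axis=0)
A = (train - mean_face).T         # difference-image matrix (npix x M)

# Eigenfaces via the small M x M trick, as in the previous section.
_, V = np.linalg.eigh(A.T @ A)
U = A @ V
U /= np.linalg.norm(U, axis=0)

# Step 2: weights of every training image in face space (M x M).
weights = U.T @ A

# Steps 3-5: project a test image and find the nearest training image.
test = train[3] + 0.01 * rng.random(npix)   # a slightly perturbed copy of image 3
test_projection = U.T @ (test - mean_face)  # the 'TestProjection' vector

distances = np.linalg.norm(weights - test_projection[:, None], axis=0)
best = int(np.argmin(distances))
threshold = 1.0                              # illustrative value only
if distances[best] < threshold:
    print("recognised as training image", best)
else:
    print("unknown face: add it to the training set")
```

Because the test image is a lightly perturbed copy of training image 3, its projection lands closest to that image's weight column and the sketch reports a match.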
5. Testing
Introduction
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. The increasing visibility of software as a system element and the attendant costs associated with software failure are motivating forces for well-planned, thorough testing. It is not unusual for a software development effort to spend between 30 and 40 percent of total project effort on testing. The system testing strategies for this system integrate test-case design techniques into a well-planned series of steps that result in the successful construction of the software. They also provide a road map for the developer, the quality assurance organisation and the customer: a road map that describes the steps to be conducted as part of testing, when these steps are planned and undertaken, and how much effort, time and resources will be required.
The test provisions are as follows.
System testing
Software Testing: Once the coding is completed according to the requirements, we have to test the quality of the software. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Although the purpose of testing is to uncover errors in the software, it also demonstrates that software functions appear to be working as per the specifications and that performance requirements appear to have been met. In addition, data collected as testing is conducted provides a good indication of software reliability and of software quality as a whole. To assure software quality we conduct both White Box Testing and Black Box Testing.
White Box Testing:
White Box Testing is a test-case design method that uses the control structure of the procedural design to derive test cases. As we are using a non-procedural language, there is very little scope for White Box Testing. Wherever necessary, the control structures were tested and passed with very few errors.
Black Box Testing:
Black Box Testing focuses on the functional requirements of the software. It enables us to derive sets of input conditions that will fully exercise all functional requirements of a program. Black Box Testing finds most errors: it found some interface errors, errors in accessing the database and some performance errors. In Black Box Testing we mainly use two techniques, Equivalence Partitioning and Boundary Value Analysis.
Equivalence Partitions:
In this method we divide the input domain of a program into classes of data from which test cases are derived. An equivalence class represents a set of valid or invalid states for a set of related values or a Boolean condition.
The equivalence classes are:
• An input condition that requires a specific value: one specific and one non-specific class.
• An input condition that requires a range: one in-range and one out-of-range class.
• An input condition that specifies membership of a set: one belongs-to-the-set and one does-not-belong class.
• An input condition that is Boolean: one valid and one invalid class.
Boundary Value Analysis:
Errors tend to occur at the boundaries of the input domain. In this technique, test cases are selected using boundary values, i.e. values at and around the boundaries. Using the above two techniques, we eliminated almost all errors from the software and checked numerous test values for each and every input. The results were satisfactory.
Flow of Testing
System testing is designed to uncover weaknesses that were not detected in the earlier tests. The total system is tested for recovery and fallback after various major failures to ensure that no data are lost. An acceptance test is done to validate the reliability of the system. The philosophy behind the testing is to find errors in the project.
Many test cases were designed with this in mind. The flow of testing is as follows.
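The two techniques can be illustrated on a hypothetical validator for the port-number field that appears in the test cases below (the function is invented for illustration and is not taken from the project's code):

```python
# Hypothetical validator for a port-number input field (valid range 1-65535).
def valid_port(value):
    return isinstance(value, int) and 1 <= value <= 65535

# Equivalence partitioning: one representative value per class of input.
assert valid_port(8080)         # in-range class
assert not valid_port(-50)      # out-of-range class
assert not valid_port("8080")   # wrong-type class

# Boundary value analysis: probe values at and just outside the boundaries.
for port, expected in [(0, False), (1, True), (65535, True), (65536, False)]:
    assert valid_port(port) == expected

print("all partition and boundary cases pass")
```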
Code Testing
This strategy examines the logic of the program. In code testing, only the syntax of the code is tested: syntax errors are corrected to ensure that the code is correct.
Unit Testing:
The first level of testing is called unit testing. Here, different modules are tested against the specifications produced during the design of the modules. Unit testing is done to test the working of individual modules with test oracles. Unit testing comprises a set of tests performed by an individual programmer prior to integration of the units into a larger system. A program unit is small enough that the programmer who developed it can test it in great detail. Unit testing focuses first on the modules to locate errors. These errors are verified and corrected so that each unit fits correctly into the project.
System Testing
The next level of testing is system testing and acceptance testing. This testing is done to check whether the system meets its requirements and to examine the external behaviour of the system. System testing involves two kinds of activities:
Integration testing
Acceptance testing
Integration Testing
The next level of testing is called integration testing. Here, many tested modules are combined into subsystems, which are then tested. Test-case data is prepared to check the control flow of all the modules and to exhaust all possible inputs to the program. Situations such as a module being invoked when no data has been entered in the text box are also tested. This testing strategy dictates the order in which modules must be available, and exerts a strong influence on the order in which the modules must be written, debugged and unit tested. In integration testing, all the modules/units on which unit testing was performed are integrated together and tested.
Acceptance Testing:
This testing is finally performed by the user to demonstrate that the implemented system satisfies its requirements. The user gives various inputs to obtain the required outputs.
Specification Testing:
Specification testing is done to check whether the program does what it should do and how it behaves under various conditions and combinations; inputs are submitted for processing in the system and it is checked whether any overlaps occur during processing.
Testing Objectives:
The following are the testing objectives:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as yet undiscovered error.
• A successful test is one that uncovers an as yet undiscovered error.
These objectives imply a dramatic change in viewpoint. They run counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort. If testing is conducted successfully, it will uncover errors in the software. As a secondary benefit, testing demonstrates that software functions appear to be working according to specification and that performance requirements appear to have been met. In addition, data collected as testing is conducted provides a good indication of software reliability. Testing cannot show the absence of defects; it can only show that software errors are present. It is important to keep this statement in mind as testing is being conducted.
Testing principles:
Before applying methods to design effective test cases, a software engineer must
understand the basic principles that guide software testing.
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• Testing should begin “in the small” and progress towards testing “in the large”.
• Exhaustive testing is not possible.
Test Plan:
A test plan is a document that contains a complete set of test cases for a system, along with other information about the testing process. The test plan should be written long before testing starts.
The test plan identifies:
1. The task set to be applied as testing commences,
2. The work products to be produced as each testing task is executed,
3. The manner in which the results of testing are evaluated, recorded and reused when regression testing is conducted.
In some cases the test plan is integrated with the project plan; in others it is a separate document. The test report is a record of the testing performed; it enables the acquirer to assess the testing and its results.
Test cases
Test cases for login page

Sl no | Task                                      | Expected result               | Obtained result | Remarks
1     | Using valid username and password (image) | Successful authentication     | As expected     | Success
2     | Using invalid username                    | Authentication failed         | As expected     | Invalid user name
3     | Using invalid password (image)            | Authentication failed         | As expected     | Username and password are not correct
4     | Without giving username and password      | Authentication failed         | As expected     | Please enter user name and password
5     | Username without password                 | Authentication failed         | As expected     | Password cannot be empty

Test cases for registration page

Sl no | Task                                      | Expected result               | Obtained result | Remarks
1     | Capture four images and register          | Registration success          | As expected     | Success
2     | Capture three images and register         | Should not allow registration | As expected     | Register button is disabled if fewer than four images are captured
3     | Without giving port number                | Connection failed             | As expected     | Please specify port number
4     | Without selecting IP                      | Connection failed             | As expected     | IP has to be selected
6. Snapshots
Layout
The layout contains two sections. The left section holds the web camera window; the right section shows the captured images for login and registration.
Web Camera Window
This is a separate window created on a separate thread.
Register Window
The four images captured during registration are shown.
Login Screen
The image captured for login is shown in this window, and a success message is displayed as below.
Login Screen and Web Camera
The web camera and the image captured during login are shown below.
7. Conclusion
1. The user is authenticated not only with the username but also with an image of the user.
2. For processing, some of the lines on the face are used, so that the image can be identified from different angles.
3. The image processing is good enough to provide security for the website.
8. Future Enhancements
1. The project can be enhanced to process 3D images.
2. Authentication can be implemented by capturing a video clip of a person.
3. The system can also be used to process a person's signature for authentication.
4. It can also be used in real-time applications.
5. Authentication can be embedded into web applications, which will be an added advantage for providing login to websites.