1. AUTOMATED ATTENDANCE SYSTEM
G. NAVEEN SAI (N180674)
B. MAHESH (N180565)
S. HARSHITA (N180121)
M. LAKSHMI (N180351)
SK. FALAKNAAZ (N180255)
2. Abstract
Title: An Automated Attendance System Using Facial Recognition and Deep Learning
Taking attendance manually in a classroom setting is time-consuming and prone to errors. To
address these challenges, we propose an automated attendance system that utilizes facial recognition
and deep learning algorithms to accurately track students' attendance.
Our system captures a single picture of the classroom using a camera. Each student's face is
detected and extracted from the image using computer vision techniques. Subsequently, a deep
learning model trained on facial recognition algorithms is employed to match the extracted faces with
known student identities. If a match is found with high confidence, the student's attendance is
recorded.
In conclusion, our mini project aims to design and develop an automated attendance system utilizing
facial recognition and deep learning. This system offers a reliable, efficient, and accurate alternative to
manual attendance taking.
3. Introduction
Motivation for the work:
• Manual attendance taking in classrooms is time-consuming and prone to errors.
• Automation can significantly reduce the time and effort required for attendance tracking.
• Facial recognition and deep learning provide a reliable and accurate method for identifying students.
Real-world Applications:
• Educational Institutions:
The system can be implemented in schools, colleges, and universities to streamline attendance
management processes, reduce administrative burdens, and improve overall efficiency.
• Corporate Environments:
Companies can utilize the automated attendance system to track employees' presence, ensuring
accurate payroll management and enhancing security measures.
• Events and Conferences:
The system can be employed in large-scale events to monitor participant attendance, optimize event
planning, and provide valuable data for post-event analysis.
By using facial recognition and deep learning, this project addresses the limitations of manual attendance tracking
methods, offering an automated system that enhances classroom management, reduces errors, and promotes
academic integrity.
4. Related Works/Existing Works:
1) A few related works and their limitations:
RFID:
This system uses RFID tags attached to students' ID cards to track attendance. However, it requires students to physically tap their cards on a reader, which can be inconvenient and prone to errors if students forget to do so.
BIOMETRIC:
This system uses fingerprint recognition to track attendance. While it provides accurate identification, it requires physical contact with the fingerprint scanner, making it less hygienic and time-consuming in larger classrooms.
QR CODE:
This system generates a unique QR code for each student, which they scan using a smartphone app to mark their attendance. However, it requires smartphones and internet connectivity, which may not be feasible for all students.
5. Related Works/Existing Works:
2) Research Gaps identified
• Limited research on the integration of facial recognition and deep learning for attendance management in
educational institutions.
• Insufficient focus on addressing attendance fraud or impersonation attempts using advanced technologies.
• Limited exploration of real-time attendance tracking using facial recognition and deep learning algorithms.
• Inadequate studies on the scalability and performance of facial recognition systems for large classroom
settings.
By addressing these research gaps, we aim to contribute to the development of a robust, scalable, real-time, and
ethically conscious automated attendance system using facial recognition and deep learning techniques.
7. Proposed Method
1) Data Collection:
We prepared our own dataset, tailored to 5 sections. This dataset consists of images of each student along with their corresponding unique IDs. The images were captured using a camera and carefully labeled to associate each image with the respective student's ID, enabling accurate identification and attendance tracking of individual students in the classroom. The images are stored in train, test, and validation directories for training and testing.
2) Preprocessing:
Before feeding the images into the model, preprocessing steps are performed to ensure that the
data is in a suitable format for training. The preprocessing steps may involve converting the images to
grayscale, resizing them to a consistent size, and normalizing pixel values. Additionally, the images are
processed to detect and extract faces using the dlib library.
3) Building the Model:
A CNN model is built using TensorFlow's Keras. The model architecture consists of convolutional
layers, pooling layers, and fully connected layers. Each layer is added to the model using the add method of
the Sequential class, and the output of each layer serves as the input to the next layer. These layers are
defined to capture and learn the essential features and patterns present in the input images.
II) Components in the flowchart
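The layer stack described above can be sketched with TensorFlow's Keras Sequential API. The input size (64×64 grayscale) and the number of classes here are illustrative assumptions, not values taken from the project.

```python
# A minimal sketch of the CNN described above: convolutional layers,
# pooling layers, and fully connected layers stacked so that each
# layer's output feeds the next. Sizes are assumed for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_H, IMG_W, NUM_CHANNELS = 64, 64, 1    # grayscale input (assumed size)
NUM_CLASSES = 10                          # e.g. 10 students (assumed)

model = models.Sequential([
    layers.Input(shape=(IMG_H, IMG_W, NUM_CHANNELS)),
    layers.Conv2D(32, (3, 3), activation="relu"),    # learn local features
    layers.MaxPooling2D((2, 2)),                     # downsample
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),            # fully connected layer
    layers.Dense(NUM_CLASSES, activation="softmax")  # one score per student
])

# Sanity check: one random grayscale image -> one probability vector.
probs = model.predict(np.random.rand(1, IMG_H, IMG_W, NUM_CHANNELS), verbose=0)
print(probs.shape)
```

The final softmax layer produces one probability per known student, which is what the matching step later thresholds on.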
8. Proposed Method
4) Training:
Once the model is constructed, it is trained using the pre-processed training data. The training data
is passed through the model iteratively for a specified number of epochs. During each epoch, the model
updates its parameters based on the loss calculated between the predicted outputs and the ground truth
labels. The goal of training is to optimize the model's parameters to accurately recognize and classify the input
images.
5) Testing:
After the model is trained, it is evaluated using the pre-processed testing data. The testing phase
aims to assess the model's performance on unseen data. The images are passed through the trained model to
obtain predictions. The predicted output is then compared to the known labels to measure the model's
accuracy and performance.
9. Proposed Method
1) Preprocessing
• Import the required libraries (dlib, cv2, os).
• Load the face detector and shape predictor models.
• Define the input directories for preprocessing.
• Loop over the input directories.
• For each image file in the directories:
• Load the image.
• Convert the image to grayscale.
• Detect faces in the image.
• For each detected face:
• Get the face landmarks.
• Extract the face region.
• Convert the face region to grayscale.
• Save the preprocessed image with a new filename.
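The grayscale, resize, and normalize steps above can be sketched with NumPy alone; the dlib face-detection and landmark steps are omitted here because they require the pretrained shape-predictor file, so treat this as an assumption-level illustration of the per-face processing only.

```python
# NumPy-only sketch of the per-face preprocessing: grayscale conversion,
# resizing to a fixed size, and normalizing pixel values to [0, 1].
# Face detection/extraction with dlib happens before this step.
import numpy as np

def preprocess(face_rgb, out_size=64):
    """Convert an RGB face crop to a normalized grayscale array of fixed size."""
    # Luminosity grayscale conversion (standard ITU-R BT.601 weights).
    gray = face_rgb @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour resize (a stand-in for cv2.resize).
    h, w = gray.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    resized = gray[np.ix_(rows, cols)]
    # Normalize pixel values for the network.
    return resized / 255.0

face = np.random.randint(0, 256, (120, 100, 3)).astype(float)  # fake face crop
x = preprocess(face)
print(x.shape, float(x.min()) >= 0.0, float(x.max()) <= 1.0)
```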
III) Algorithms
10. Proposed Method
2) Training
• Import the required libraries (tensorflow, keras).
• Define the parameters for the model (image width, image height, num_channels, num_classes,
batch_size, num_epochs).
• Define the CNN model architecture using Sequential and various layers.
• Compile the model with the desired optimizer, loss function, and metrics.
• Define the paths to the training and validation directories.
• Create data generators for training and validation data using ImageDataGenerator.
• Train the model using the fit() function, specifying the training and validation data.
• Save the trained model.
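The compile-and-train steps listed above can be sketched as follows. To stay self-contained, this trains a tiny CNN on random tensors instead of the real ImageDataGenerator pipeline (the dataset paths are project-specific), and every hyperparameter here is an illustrative assumption.

```python
# Sketch of the compile/fit/save steps, run on random stand-in data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes, img_size = 5, 32            # assumed values, not the project's
model = models.Sequential([
    layers.Input(shape=(img_size, img_size, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",   # one-hot labels
              metrics=["accuracy"])

# Random stand-in data: 16 grayscale "images" with one-hot labels.
x = np.random.rand(16, img_size, img_size, 1).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, num_classes, 16),
                                  num_classes)
history = model.fit(x, y, epochs=1, batch_size=8, verbose=0)
model.save("attendance_cnn.h5")          # persist the trained model
print(sorted(history.history.keys()))
```

With the real data, the two `fit()` arguments would be the training and validation generators built by ImageDataGenerator from the train/ and validation/ directories.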
3) Testing
• Extract each face from the group image and preprocess it.
• Iterate over the saved face images.
• Load each image and resize it.
• Predict the person in the test image using the loaded model.
• Output the predicted person's name and ID.
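The final prediction step above can be sketched with NumPy: take the model's softmax output for one face and map the arg-max class index back to a student ID, recording attendance only when the confidence is high enough. The probability vector and the 0.8 threshold below are made-up examples; the IDs are from the project roster.

```python
# Map a softmax output vector to a student ID, with a confidence threshold
# so low-confidence matches are not recorded (per the abstract).
import numpy as np

class_ids = ["N180121", "N180255", "N180351", "N180565", "N180674"]

def identify(probs, ids, threshold=0.8):
    """Return the matched student ID, or None below the confidence threshold."""
    idx = int(np.argmax(probs))
    return ids[idx] if probs[idx] >= threshold else None

probs = np.array([0.02, 0.03, 0.05, 0.05, 0.85])  # example softmax output
print(identify(probs, class_ids))  # N180674
```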
11. Experimental setup
I) Technologies and libraries used:
• Python
• OpenCV
• Dlib
• TensorFlow
• Keras
II)Datasets
The code assumes that the dataset is organized in the following directory structure:
dataset/train/: Directory containing training images divided into subdirectories for each class/person.
dataset/test/: Directory containing test images divided into subdirectories for each class/person.
dataset/validation/: Directory containing validation images divided into subdirectories for each class/person.
The dataset is labeled, with each subdirectory name (a student's ID number) representing a different class/person.
12. Experimental setup
III) System Hardware used
● Processor: 64-bit, quad-core, 2.5 GHz minimum per core
● RAM: 4 GB or more.
● HDD: 10 GB of available space or more.
● Display: Dual XGA (1024 x 768) or higher resolution monitors.
● Camera: A detachable webcam.
● Keyboard: A standard keyboard
IV) Results
The results were demonstrated in a Jupyter Notebook.
16. Experimental setup
VI) Comparison (Our Project vs. a deep-learning based group-photo attendance system using One Shot Learning)
1) Accuracy: 97% vs. 80%
2) User Interface: Yes vs. No
3) Future Scope: More vs. Less
17. Conclusion
In this project, we developed a person classifier using deep learning techniques. We performed
preprocessing on the input images to detect faces using the dlib library and extracted the face
regions. The preprocessed images were then used to train a Convolutional Neural Network
(CNN) model. The model achieved an accuracy of 97% in classifying the persons present in the images.
18. Future Scope
The future scope of our project includes implementing a system that captures pictures from a CCTV camera at
regular intervals, such as every 5 minutes. The objective is to identify the persons present in the classroom
based on their appearance in these images. To determine attendance, we can set a threshold of 85%
appearance for a person to be considered present.
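The 85% appearance rule described above can be sketched in plain Python: given the per-interval detections for each student across the captured CCTV frames, mark a student present only if they appear in at least 85% of the frames. The frame counts below are made-up examples.

```python
# Attendance from periodic CCTV captures: a student is "present" if they
# appear in at least `threshold` (85%) of the captured frames.
def mark_attendance(detections, threshold=0.85):
    """detections: dict mapping student ID -> list of per-interval booleans."""
    return {sid: (sum(seen) / len(seen)) >= threshold
            for sid, seen in detections.items()}

frames = {
    "N180674": [True] * 11 + [False],     # 11/12 ~ 92% -> present
    "N180565": [True] * 9 + [False] * 3,  # 9/12 = 75% -> absent
}
print(mark_attendance(frames))
```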
Benefits
● Efficiency
● Accuracy
● Real-Time Monitoring
● Data Analysis