A MAJOR PROJECT REPORT
ON
MACHINE LEARNING APPROACHES FOR IRIS IDENTIFICATION
Submitted to
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY
In partial fulfillment of the requirements for the award of degree of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
By
P. SUKRUTHA REDDY (20D41A05F9)
P. TANAY REDDY (20D41A05F3)
N. AAKASH (20D41A05E9)
M. CHARAN TEJA REDDY (20D41A05C5)
UNDER THE ESTEEMED GUIDANCE OF
Mrs. P. HYMAVATHI
(Assistant Professor)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY
(An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH)
Sheriguda, Ibrahimpatnam -501510
(2023 – 2024)
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY
(An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CERTIFICATE
Certified that the Major Project entitled “MACHINE LEARNING APPROACHES FOR IRIS IDENTIFICATION” is a bonafide work carried out by P. SUKRUTHA REDDY (20D41A05F9), P. TANAY REDDY (20D41A05F3), N. AAKASH (20D41A05E9), and M. CHARAN TEJA REDDY (20D41A05C5) in partial fulfillment of the requirements for the award of Bachelor of Technology in Computer Science and Engineering of SICET, Hyderabad, for the academic year 2023 – 2024. The project has been approved as it satisfies the academic requirements in respect of the work prescribed for the IV Year, II Semester of the B.Tech course.
INTERNAL GUIDE HEAD OF THE DEPARTMENT
Mrs. P. HYMAVATHI PROF. CH.G.V.N. PRASAD
(Assistant Professor)
EXTERNAL EXAMINER
ACKNOWLEDGEMENT
The satisfaction that accompanies the successful completion of a task would be incomplete without mention of the people who made it possible, whose constant guidance and encouragement crowned our efforts with success. We are thankful to our Principal, Dr. G. SURESH, for giving us permission to carry out this project. We are highly indebted to PROF. CH.G.V.N. PRASAD, Head of the Department of Computer Science and Engineering, for providing the necessary infrastructure and labs, and for his valuable guidance at every stage of this project. We are grateful to our internal project guide, Mrs. P. HYMAVATHI, Assistant Professor, for her constant motivation and guidance during the execution of this project work. We would also like to thank the teaching and non-teaching staff of the Department of Computer Science and Engineering for sharing their knowledge with us. Last but not least, we express our sincere thanks to everyone who helped, directly or indirectly, in the completion of this project.
P. SUKRUTHA REDDY (20D41A05F9)
P. TANAY REDDY (20D41A05F3)
N. AAKASH (20D41A05E9)
M. CHARAN TEJA REDDY (20D41A05C5)
ABSTRACT
One of the main results of the validation system is based on the fingerprint-based iris recognition system and its associated technology.
The biometric process is far more authentic and distinctive than other types of recognition and validation systems, and it has brought innovative ideas into the daily lives of human beings. Multimodal biometric processes have been applied in various applications to deal with the most significant limitations of the “unimodal biometric system”, including sensitivity to noise, limited population coverage, inter-class and intra-class variability, vulnerability to possible hacking, and non-universality.
This research paper focuses on a deep-learning-oriented machine learning system. The fingerprint-based iris recognition system used to validate human beings is built mainly on the convolutional neural network (CNN) technique. In the existing data validation process, iris recognition has mainly been applied in “high security protection systems with actual fingerprints”. The paper elaborates on the uniqueness, reliability, and proper “validity of the iris biometric validation system” for the purpose of person identification.
CONTENTS
List of Figures
List of Screenshots
1. INTRODUCTION
1.1 INTRODUCTION OF PROJECT
1.2 LITERATURE SURVEY
1.3 MODULES
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM & ITS DISADVANTAGES
2.2 PROPOSED SYSTEM & ITS ADVANTAGES
2.3 SYSTEM REQUIREMENTS
3. SYSTEM STUDY
FEASIBILITY STUDY
4. SYSTEM DESIGN
4.1 ARCHITECTURE
4.2 UML DIAGRAMS
4.2.1 USE CASE DIAGRAM
4.2.2 CLASS DIAGRAM
4.2.3 SEQUENCE DIAGRAM
4.2.4 ACTIVITY DIAGRAM
5. TECHNOLOGIES USED
5.1 WHAT IS PYTHON?
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON
5.1.2 HISTORY
5.2 WHAT IS MACHINE LEARNING?
5.2.1 CATEGORIES OF ML
5.2.2 NEED FOR ML
5.2.3 CHALLENGES IN ML
5.2.4 APPLICATIONS
5.2.5 HOW TO START LEARNING ML?
5.3 PYTHON DEVELOPMENT STEPS
5.4 MODULES USED IN PYTHON
5.5 INSTALL PYTHON STEP BY STEP IN WINDOWS & MAC
6. IMPLEMENTATION
6.1 SOFTWARE ENVIRONMENT
6.1.1 PYTHON
6.1.2 SAMPLE CODE
7. SYSTEM TESTING
7.1 INTRODUCTION TO TESTING
7.2 TESTING STRATEGIES
8. SCREENSHOTS
9. CONCLUSION
10. REFERENCES
LIST OF FIGURES
Fig. 1: ARCHITECTURE DIAGRAM
Fig. 2: USE CASE DIAGRAM
Fig. 3: CLASS DIAGRAM
Fig. 4: SEQUENCE DIAGRAM
Fig. 5: ACTIVITY DIAGRAM
Fig. 6 – Fig. 16: INSTALLATION OF PYTHON
Fig. 17: CASIA IRIS DATASET
LIST OF SCREENSHOTS
Fig. 18: Generating the Convolutional Neural Network (CNN) model from the provided dataset
Fig. 19: Generation of the LOSS graph and accuracy check
Fig. 20: Recognition process
Fig. 21: Recognition process
Fig. 22: Recognition process
Fig. 23: Accurate recognition image
1. INTRODUCTION
1.1. INTRODUCTION
The biometric process is mainly used to recognize individuals by their physical aspects and features. For this purpose, a tremendous number of recognition technologies have been developed around fingerprints, iris procedures, and voice recognition. Biometrics deals with the technical and technological measurement of body characteristics and dimensions. Authentication based on an appropriate biometric security system has grown in importance within all countries, and such systems have shown valid and impressive performance across these procedures and aspects. The fingerprint has long been the primary procedure for providing security, offering true uniqueness and strong privacy properties for the entire system. Exceptional fingerprint assurance, or imprint approval, refers to the automated methods and procedures used to establish the similarity between two people's fingerprints. This chapter presents the purpose of the fundamental research, which depends on the research objectives and the respective research questions. The research framework of the entire study is also provided, and the fundamental research describes all the factors responsible for this recognition process.
Background of the study
In this particular recognition system, the outer and inner boundaries of the iris region are detected by different types of integro-differential operators. The real success of the biometric system and process is based on proper classification and the proposed recognition system, and the entire process depends on the robustness and efficiency of the "feature extraction and classification stages". Most fingerprint-matching schemes in this context group fingerprint images into four or five classes. Among these, the primary vital priority and the initial step is the Automated Fingerprint Identification System (AFIS). Biometric processes use unique identifiers gathered from various measurements; this kind of data is necessary and essential for individual identification and remains important for this recognition system. Iris recognition has attracted growing enthusiasm as a sound biometric modality for human identification. In discussions of the recognition system, "Bayesian graphical models" have been used to match the respective test images. Among all the classifiers, "convolutional neural networks" are considered the most robust and straightforward at overcoming the obstacles within this system. This research study proposes an "integrated approach to the proper iris recognition validation system" that retains the human fingerprint as a complementary modality.
Problem statement
The biometric security system faces various significant problems and issues. The central and foremost issue is that biometric authentication processes and technologies raise various privacy and security concerns (Hamd & Ahmed, 2018). Once biometric data is compromised, there is no way to undo the damage or reissue the underlying trait: a compromised password can be changed, but a fingerprint, an iris pattern, or an ear image cannot. For all these reasons, even the routine operation of biometrics carries security and privacy risks. Various problems also appear at different stages of the iris recognition system, such as the sensor module, the preprocessing module, and the feature extraction process. All these security and privacy issues can be adequately solved by appropriate modern and advanced techniques, and the process should additionally be secured with the help of strong passwords and a robust system.
Motivation
Several publications have documented the high accuracy and excellent reliability of neural networks such as multilayer perceptrons (MLPs) in present-day pattern recognition and classification applications. This research study uses the particular machine learning technique of the "convolutional neural network (CNN)" to improve privacy and security within the validation system. The input image is reduced in size to limit the amount of processed data and achieve satisfactory working performance (Herbadji et al. 2020). This processing is carried out across several image-processing stages, such as image enlargement, image partitioning, and feature extraction.
Research Aim
This research paper aims to close a particular gap in the existing literature on the various types of validation systems. This section also presents an overview of the entire validation system, which is based on the iris-and-fingerprint method. The system is further intended to improve proactive protection of security and privacy through human biometrics such as fingerprints. From the research topic, it can be easily understood that the entire validation system depends on "fingerprint iris recognition" methods and procedures.
Methodology
Biometric systems have been one of the safest ways to secure and verify any system. In recent times, multimodal biometric techniques have been widely implemented in several real-world applications. Due to the lack of adequate validation in unimodal systems, the multimodal biometric system was introduced with the help of deep learning algorithms. The "Convolutional Neural Network (CNN)" is an algorithm that uses a deep learning architecture. Validation through biometric systems is evolving day by day and has become a very promising technology for the identification and authentication of any person. Newer technologies are being used in the system to solve validation difficulties. This part of the research discusses the different analysis methods. Every research work follows a particular approach to reach its final outcomes, and that approach is presented in this chapter. Although this is one of the latest and safest validation technologies, several limitations were faced while carrying out the work, and some systems and software need improvement so that better services can be provided to clients; the limitations of the research are therefore also noted in this part. The analysis is performed on the basis of the software and technologies used to develop the overall system. The fingerprint and iris recognition system requires an exemplary user interface, as the validation and verification process is given significant priority. The software operates over a wireless communication model, and deep learning algorithms are another vital component. The biometric system is reshaped through the implementation of the "convolutional neural network (CNN)" architecture.
Research Significance
The iris biometric authentication process is very complex, and it involves various recognition techniques. The recognition technique includes a "mathematical pattern recognition technique" for proper identification of unique, stable images of an individual, captured either up close or at a distance. The various characteristics used by the iris recognition process with the "convolutional neural network (CNN)" are efficiently unique and accurate for each recognition method (Iula & Micucci, 2019). The most significant advantages of the fingerprint-based iris recognition system are its accuracy, scalability, stability, liveness detection, and highly secure, fast matching. The iris authentication process relies on highly complicated mathematical patterns to establish the uniqueness of the entire validation process. In the proposed system, many layers are applied in the "convolutional neural network (CNN)" for the purpose of "multimodal biometric human authentication". Authentication is performed with respect to the face, veins, iris, fingerprints, and palm to increase the robustness and reliability of the entire recognition system (Jain & Kumar, 2019). The entire recognition process is thus very difficult to hack or copy.
CONCLUSION
This iris recognition system is a popular, robust, and fast multimodal biometric system built on a deep learning method. The particular iris patterns, together with the fingerprint extraction method, are used to achieve a highly authentic and highly accurate recognition process. This chapter has provided a detailed description of the research topic. The research aims and objectives have been discussed to establish the primary goals of the fundamental research. This section has also presented the factual background of the study and elaborated on fingerprint-based iris recognition methods. Different challenges and problems have been discussed for a better understanding of the topic, and the research framework has also been presented in this chapter.
1.2 LITERATURE SURVEY:
The literature review chapter provides a detailed description of the various problems and recognition aspects associated with this area of research. The fundamental research was conducted with the help of the research notes of different authors and researchers, and the process is further supported by descriptions drawn from online articles, journals, and various websites. The review was conducted through an in-depth analysis of the entire validation-based recognition system. In addition, this chapter demonstrates the particular models and theories relevant to the proposed topic and identifies the gaps in the existing research of various authors.
Empirical Study
According to Alrahawe (2018), a biometric system is one of the safest ways to work in the digital world. Since biometrics such as fingerprints, face, and iris patterns differ from person to person, they are safer than other processes for securing confidential data (Alrahawe, 2018). In earlier days, the lack of technology meant that confidential information received less protection; with the recent advancement in technology, biometric security has become an integral part of many systems. Moreover, the author states that these digital security processes have become nearly error-free, which is why they are being implemented in the latest systems (Singh & Kant, 2021). With only minor errors, the system is quite reliable for security purposes. The biometric system uses various recognition processes, including the finger-knuckle recognition system.
Reliability of the validation of the recognition system by ultrasound images
According to Derman et al. (2017), ultrasound has been tried in the "biometric recognition system" for many years. More recently, however, creative integration of a variety of "ultrasound fingerprint readers" into portable devices has improved the usability of the entire system. The most essential qualification of ultrasound is its paramount ability to visualize various types of internal structures. This working procedure has been used to improve actual recognition rates and resistance to spoofing attacks. External and ambient conditions, such as changes in illumination, humidity, and environmental temperature, are also taken into account to better adapt the system. A palm-print-oriented recognition system based on ultrasound images has been proposed and adequately validated as part of the development of this systematic process. The main strength of ultrasonic technology over other technologies is its ability to image internal parts of the human body with high quality. To reduce the fundamental issues within this system, "adequate acoustic coupling" should be provided between the human hand and the ultrasonic probe (jicrjournal.com, 2017).
Existence
According to Hansley (2018), a particular information technology (IT) artifact forms the main focal point of the research. Biometric traits are unique, supplying distinctive physical features and accurate behaviors for the validation process. The concept of the study is based on information technology, and the entire system is placed in the context of socio-economic values. The main challenge at the community level is engaging with the multiple, dynamic aspects and respective factors, including emergency-use purposes (Hansley, 2018). The information technology (IT) strategy is categorized into various elements, such as constructs, models, methods, and instantiations. The process is very useful for providing digital identifiers as a service, which the system can readily use in identity-oriented applications.
Literature gap
The convolutional neural network technique for the fingerprint-based iris recognition system is very effective and authentic in providing an accurate validation system. Within the existing research notes, however, the authors fail to describe the various techniques and strategies behind this recognition system. This advanced technique-oriented biometric process is also expensive (Gonzalez-Sosa et al., 2019): it is costly compared with other biometric modalities and factors, and using the system requires the person to handle iris scanners. The authors also fail to explain the proper localization and transformation steps best suited to this particular recognition system.
1.3 MODULES:
Image Acquisition:
Image acquisition involves capturing high-quality iris images from individuals using specialized iris
imaging devices such as iris scanners or cameras.
Machine learning approaches can be used to enhance image acquisition by developing algorithms for
quality assessment, image alignment, and artifact removal.
Deep learning techniques can automate the process of identifying and discarding poor-quality images,
ensuring that only reliable iris data are used for subsequent processing.
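As an illustration of such a quality gate, the sketch below rejects blurred captures using the variance of the Laplacian, a standard sharpness measure available in OpenCV; the threshold and file path are hypothetical values that would need tuning for a real sensor, not parameters taken from this project.

```python
import cv2

def is_sharp_enough(image_path, threshold=100.0):
    """Reject blurred iris captures via the variance of the Laplacian.

    Few strong edges (low variance) indicates a blurry image.
    The threshold is an illustrative starting point; tune it per sensor.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    focus_measure = cv2.Laplacian(gray, cv2.CV_64F).var()
    return focus_measure >= threshold

# Keep only acceptably sharp captures for the later stages, e.g.:
# if is_sharp_enough("capture_001.png"): process the image further
```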
Segmentation:
Iris segmentation is the process of isolating the iris region from the rest of the eye image.
Machine learning methods, particularly CNNs, can be utilized for iris segmentation by training
models to accurately detect and localize the boundaries of the iris.
CNNs can learn to distinguish between iris and non-iris regions based on features such as texture,
color, and edge information, leading to robust segmentation results.
Techniques such as Hough Transform, edge detection, and region growing can also be integrated into
the segmentation process to improve accuracy and efficiency.
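For instance, a classical Hough-transform localization of the iris boundary could look like the following sketch; the blur kernel, radius range, and accumulator thresholds are illustrative assumptions and would differ per dataset.

```python
import cv2
import numpy as np

def locate_iris(gray):
    """Localize the iris boundary as the strongest circular edge.

    Expects an 8-bit grayscale eye image; returns (x, y, r) or None.
    All parameter values below are illustrative and dataset-dependent.
    """
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0] // 2,
                               param1=100, param2=30,
                               minRadius=30, maxRadius=120)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r
```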
Normalization:
Normalization is essential for standardizing iris images to ensure consistency and comparability
across different samples.
Machine learning approaches can be employed for iris normalization by developing algorithms that
correct for variations in pupil size, iris deformation, and illumination conditions.
CNNs can learn to perform image transformation tasks, such as scaling, rotation, and warping, to
align iris images to a standardized template.
Statistical methods, such as histogram equalization and Zernike moments, can also be utilized for iris
normalization to enhance image quality and feature extraction.
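As a concrete sketch, Daugman-style "rubber sheet" unwrapping maps the annular iris region onto a fixed-size rectangle, after which histogram equalization standardizes contrast; the circle parameters are assumed to come from the segmentation step, and the pupil and iris circles are treated as concentric for simplicity.

```python
import cv2
import numpy as np

def rubber_sheet(gray, cx, cy, r_pupil, r_iris, out_h=64, out_w=256):
    """Unwrap the annular iris region into a fixed-size polar rectangle."""
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, out_h)
    # Sample along rays running from the pupil edge to the iris edge.
    xs = (cx + np.outer(radii, np.cos(thetas))).astype(np.float32)
    ys = (cy + np.outer(radii, np.sin(thetas))).astype(np.float32)
    unwrapped = cv2.remap(gray, xs, ys, cv2.INTER_LINEAR)
    # Histogram equalization evens out illumination across samples.
    return cv2.equalizeHist(unwrapped)
```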
Matching:
Matching involves comparing the features extracted from the enrolled iris image (template) with the
features extracted from the query iris image for identification or verification purposes.
Machine learning techniques, including CNNs and traditional classification algorithms, can be
utilized for iris matching by learning discriminative features from iris images and performing
similarity assessment.
CNN-based siamese networks can be trained to learn a similarity metric that compares pairs of iris
images and outputs a similarity score indicating the degree of similarity between them.
Distance-based metrics such as Hamming distance, Euclidean distance, or cosine similarity can be
used to quantify the similarity between feature vectors extracted from iris images.
Ensemble methods, such as random forests or gradient boosting, can also be employed to combine
multiple matching algorithms and improve overall accuracy and robustness.
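To make the distance-based option concrete, the sketch below compares two binarized iris codes with the normalized Hamming distance; the 0.32 decision threshold is a ballpark figure often quoted in the iris-recognition literature, used here as an assumption rather than a value measured in this project.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of bits on which two binary iris codes disagree."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(code_a ^ code_b) / code_a.size

def is_same_subject(enrolled_code, query_code, threshold=0.32):
    """Accept the match when the codes disagree on few enough bits."""
    return hamming_distance(enrolled_code, query_code) <= threshold

# Usage with two hypothetical 2048-bit codes:
enrolled = np.random.rand(2048) > 0.5
query = enrolled.copy()
print(is_same_subject(enrolled, query))  # True: identical codes match
```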
By leveraging machine learning approaches for image acquisition, segmentation, normalization, and
matching, iris detection systems can achieve high levels of accuracy, reliability, and security in
biometric authentication applications. These techniques enable automated processing of iris images,
reducing manual intervention and improving efficiency in real-world deployment scenarios.
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
The iris is the part of the eye that controls the pupil's size, regulating the amount of light that enters the eye. Its coloring is based on the amount of melanin pigment within the muscle. An individual's iris patterns are unique and structurally distinct, and they remain stable throughout adult life, which makes the iris an attractive trait for reliable automatic recognition of persons. Iris recognition is employed as the most reliable and accurate biometric identification system compared with other biometric technologies, such as speech, fingerprint, and face recognition.
2.1.1 DISADVANTAGES OF EXISTING SYSTEM:
Lower accuracy: If iris recognition is performed without machine learning techniques, the
accuracy rates may be lower. Traditional methods of iris recognition may not be as effective
at identifying unique iris patterns or ignoring noisy or distorted patterns caused by variations
in lighting, occlusions, or other factors.
Invasive and contact-based: Other biometric identification technologies such as
fingerprinting or DNA analysis may require invasive or contact-based methods, which can
be less hygienic and more intrusive than iris recognition.
Vulnerability to fraud: Other biometric identification technologies may be vulnerable to
fraud, such as the use of fake fingerprints or DNA samples. Iris recognition using machine
learning is less vulnerable to fraud as iris patterns are unique to each individual and difficult
to replicate.
2.2 PROPOSED SYSTEM
The proposed approach consists of two main modules. The first is concerned with iris preprocessing and segmentation, while the second is responsible for classification (identity recognition). Before the system is presented, however, the authors would like to explain their motivation for working on iris-based human identity recognition. As presented in the second chapter, the literature contains many different ideas and algorithms connected with biometrics, and especially with the main topic of this paper. In some approaches, however, no significant details are provided. For example, in some of them it is genuinely hard to understand how the iris image was preprocessed and what kind of algorithms were used to extract the most important information; sometimes there is not even an answer to the vital question of how the feature vector was constructed. Moreover, in the case of some artificial-intelligence-based approaches, there are no details regarding the way such tools were tuned or trained (the number of epochs, or even the structure of the neural network, is missing). At times it is also hard to understand why the reported results were so precise: these works offer no premises that could lead to high recognition rates, meaning there are no scientific reasons why such algorithms gain high accuracy values. For these reasons, the authors would like to propose their own idea regarding iris-based human identity recognition.
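As an illustration of the classification module, the following is a minimal Keras sketch of a CNN identity classifier. The input shape (64 x 256 normalized iris strips), layer widths, and number of enrolled subjects are assumptions for illustration only, not the architecture used in this project.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_SUBJECTS = 100  # hypothetical number of enrolled identities

# A small CNN over normalized (unwrapped) grayscale iris strips.
model = keras.Sequential([
    layers.Input(shape=(64, 256, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_SUBJECTS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_strips, train_labels, epochs=20, validation_split=0.1)
```

Softmax classification assumes a closed set of enrolled users; open-set verification would instead rely on a similarity-based matcher such as the Hamming-distance sketch in Section 1.3.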
2.2.1 ADVANTAGES OF PROPOSED SYSTEM:
High accuracy: Machine-learning-based iris recognition achieves accuracy rates typically above 99%. This is because machine learning algorithms can learn to extract relevant features from the iris patterns and identify unique patterns that are specific to each individual.
Non-invasive and contactless: Unlike other biometric identification technologies such as
fingerprinting or DNA analysis, iris recognition is non-invasive and contactless. This means
that it is less intrusive and more hygienic, making it suitable for use in various applications
such as airport security, border control, and access control.
Robustness: Iris patterns are unique to each individual and are stable over time, making iris
recognition a robust technology for biometric identification. Machine learning techniques can
also improve the robustness of the system by learning to identify and ignore noisy or distorted
iris patterns caused by variations in lighting, occlusions, or other factors.
2.3 SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS:
System: Intel i7
SSD: 512 GB
RAM: 16 GB

SOFTWARE REQUIREMENTS:
Operating System: Windows 11
Development Software: Python 3.7
Programming Language: Python
Integrated Development Environment: Visual Studio Code
Front End Technologies: HTML5, CSS3, JavaScript
Back End Technologies or Framework: Django
Database Language: SQL
Database Software: WAMP or XAMPP Server
3. SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
1. Technical Feasibility:
Algorithm Availability: Convolutional Neural Networks (CNNs) are readily available and
widely used in the field of image recognition, making them technically feasible for iris
detection.
Hardware Requirements: CNNs can be computationally intensive, requiring powerful
hardware such as GPUs or TPUs for efficient training and inference. However, with
advancements in hardware technology, these resources are becoming more accessible and
affordable.
Data Availability: Adequate datasets containing labeled iris images are essential for training
the CNN model. If such datasets are available or can be acquired, it enhances the technical
feasibility of the approach.
Computational Resources: CNNs can be computationally intensive, requiring significant processing power and memory resources for training and inference. Assessing the computational requirements and ensuring access to sufficient hardware resources (e.g., GPUs, TPUs) or cloud computing platforms is necessary for implementing the system effectively.
Integration with Existing Systems: Integrating the fingerprint-based iris recognition system with existing security systems or authentication processes may require compatibility testing and interface development. Ensuring seamless integration with hardware devices, databases, and application programming interfaces (APIs) is critical for successful deployment.
Security Considerations: Addressing security concerns, such as data encryption, access control, and protection against adversarial attacks, is paramount for ensuring the reliability and integrity of the biometric validation system. Implementing best practices for secure software development and adhering to industry standards and regulations (e.g., GDPR, ISO 27001) is essential for mitigating security risks.
2. Financial Feasibility:
Cost of Hardware: The cost of hardware required for training and inference, such as GPUs or
TPUs, can vary depending on the specifications and vendor. However, there are cost-effective
options available, including cloud-based services, which can reduce upfront investment.
Cost of Data Acquisition: Acquiring labeled iris image datasets, if not already available, may
incur costs. However, open datasets or collaborations with institutions could mitigate these
expenses.
Return on Investment: We need to think about the long-term benefits compared to the initial
costs. If the system saves more money or brings in more revenue over time, it's a good
investment.
Scalability: We need to make sure it can handle more users and transactions without costing
too much extra.
Market Demand: Is there a demand for this technology? If other businesses or industries need
it, there could be opportunities to make money from it.
Development and Maintenance Costs: Developing and maintaining the CNN model, including
software development and updates, may require investment in human resources and expertise.
3. Social Feasibility:
Acceptance and Trust: People need to trust this technology for it to work. If they understand
how it works and believe it's secure and reliable, they're more likely to accept it.
Privacy and Ethics: People care about their privacy and how their personal information is
used. It's important to make sure this system respects their privacy and follows ethical
guidelines.
Inclusivity and Fairness: This system should be fair to everyone and work well for people of
all backgrounds. It shouldn't discriminate or leave anyone out.
Laws and Regulations: There are rules about how biometric data can be used. It's essential to
follow these laws to make sure the system is legal and trustworthy.
Equity and Inclusivity: The system should be designed and implemented in a way that
promotes equity and inclusivity. This includes ensuring that the technology is accessible to
all individuals regardless of socioeconomic status, ethnicity, or ability. Addressing potential
biases in the system's algorithms and datasets can help prevent discrimination and promote
fairness.
User Experience: The user experience plays a significant role in determining the social
feasibility of the system. It should be intuitive, user-friendly, and non-intrusive to encourage
adoption and acceptance. Providing clear instructions, minimizing inconvenience, and
addressing user concerns can contribute to a positive user experience and enhance social
feasibility.
Education and Awareness: Educating the public about the benefits, risks, and limitations of
biometric technology can foster understanding and acceptance. Increasing awareness about
the potential uses of fingerprint-based iris recognition, its implications for security and
convenience, and how it differs from other identification methods can help dispel
misconceptions and build support for the technology.
4. Operational Feasibility:
Integration with Existing Systems: Integrating the iris detection system with existing security
systems or authentication processes may require adjustments and compatibility checks.
However, modern APIs and frameworks facilitate such integrations.
User Acceptance: The usability and acceptance of the iris detection system by end-users, such
as security personnel or individuals undergoing authentication, are crucial for its operational
success. User training and feedback mechanisms can help address usability issues.
Scalability: The system should be scalable to accommodate a growing number of users or
transactions without significant degradation in performance. Efficient algorithms and
infrastructure planning can ensure scalability.
Technical Infrastructure: The system requires appropriate hardware infrastructure, including iris scanners or cameras, servers for storing and processing biometric data, and network connectivity for communication. Assessing the existing technical infrastructure and determining whether any upgrades or modifications are needed to support the system is essential for operational feasibility.
Security Measures: Implementing robust security measures to protect biometric data from unauthorized access, tampering, or hacking is critical. Encryption, access controls, audit trails, and regular security audits are necessary to ensure the system's security and compliance with regulations (a minimal encryption sketch follows this list).
Maintenance and Support: Establishing maintenance procedures and support mechanisms to address technical issues, software updates, and hardware maintenance is essential. Having a dedicated support team or outsourcing support services can ensure timely resolution of operational issues and minimize downtime.
Regulatory Compliance: Ensuring compliance with relevant laws, regulations, and industry standards governing biometric data usage, privacy protection, and security is essential for operational feasibility. Regular audits and assessments to verify compliance and address any non-compliance issues are necessary.
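As referenced under Security Measures above, the following is a minimal sketch of protecting a stored iris template at rest with symmetric encryption. It assumes the third-party cryptography package; the key handling shown is deliberately simplified, and in production the key would live in a secrets manager or HSM.

```python
from cryptography.fernet import Fernet

# Symmetric key for encrypting templates at rest. In production this
# key must come from a secrets manager or HSM, never sit beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

iris_template = b"\x01\x00\x01\x01"  # placeholder serialized iris code
encrypted = cipher.encrypt(iris_template)

# Only services holding the key can recover the template.
assert cipher.decrypt(encrypted) == iris_template
```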
5. Legal and Ethical Feasibility:
Data Privacy and Security: Ensuring compliance with data privacy regulations and implementing
robust security measures to protect sensitive biometric data is imperative. Legal frameworks such as
GDPR (General Data Protection Regulation) need to be considered.
Ethical Considerations: Ethical concerns related to biometric data usage, consent, and potential
misuse need to be addressed transparently. Clear policies and protocols should be established to
govern the ethical use of the iris detection system.
Biometric Data Regulations: Many countries have specific regulations governing the collection and
use of biometric data. These regulations often require informed consent from individuals and impose
strict security measures to safeguard biometric information.
Fairness and Non-Discrimination: The system should be designed and implemented in a way that
avoids bias and discrimination against certain demographic groups. Care should be taken to ensure
that the system's algorithms are trained on diverse datasets to prevent unfair outcomes.
Social Impacts: The system should be evaluated for its potential social impacts, including
implications for privacy, security, and individual rights. Ethical considerations should be integrated
into the system's design and implementation to minimize negative consequences.
6. Environmental Feasibility:
Energy Consumption: While training CNN models can be resource-intensive, efforts should
be made to optimize energy consumption and minimize environmental impact. Utilizing
energy-efficient hardware and implementing efficient algorithms can contribute to
environmental sustainability.
Materials and Resources: Consideration should be given to the materials and resources used
in manufacturing the hardware components of the system. Choosing environmentally friendly
materials and adopting sustainable manufacturing practices can mitigate environmental
impact.
E-Waste Management: End-of-life management of electronic waste (e-waste) generated by the system's hardware components is a significant environmental concern. Implementing e-waste recycling programs or ensuring that hardware components are designed for easy disassembly and recycling can promote sustainability.
Carbon Footprint: Assessing the system's carbon footprint, including emissions associated
with manufacturing, transportation, and operation, is essential for understanding its
environmental impact. Implementing carbon offset initiatives or using renewable energy
sources can help mitigate greenhouse gas emissions.
Data Center Efficiency: If the system relies on cloud-based servers or data centers for
processing and storage, optimizing data center efficiency can reduce energy consumption and
environmental impact. Utilizing energy-efficient cooling systems, virtualization, and
renewable energy sources for powering data centers can enhance environmental
sustainability.
Sustainability Practices: Incorporating sustainability principles into the system's development
and operation, such as minimizing paper usage, reducing waste, and promoting energy
conservation, can contribute to its environmental feasibility. Implementing environmental
management systems and obtaining relevant certifications (e.g., ISO 14001) demonstrate a
commitment to sustainability.
Lifecycle Assessment: Conducting a lifecycle assessment of the system, from manufacturing
to disposal, can identify environmental hotspots and opportunities for improvement.
Considering the environmental impact at each stage of the system's lifecycle enables informed
decision-making and sustainability planning.
Overall, while machine learning approaches, particularly CNNs, offer promising capabilities for iris
detection and validation systems, careful consideration of technical, financial, operational, legal,
ethical, and environmental factors is essential to ensure the feasibility and success of such initiatives.
4.2 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing, and documenting the artifacts of a software system, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns, and components.
7. Integrate best practices.
4.2.1 USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show which system functions are performed for which actor, and the roles of the actors in the system can be depicted. Actors are external entities (such as users, systems, or other elements) interacting with the system. They are represented as stick figures or blocks outside the system boundary and are associated with the use cases that define their interactions with the system. Use cases represent specific functionalities or actions that the system performs; they depict the system's behavior based on the interactions between the actors and the system itself and are displayed as ovals within the system boundary. Lines or connectors (usually solid lines) show the relationships between actors and use cases. Each actor is associated with the use cases it interacts with, showing how users access and utilize the system's functionalities.
Fig. 2: Use Case Diagram
4.2.2 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static
structure diagram that describes the structure of a system by showing the system's classes, their
attributes, operations (or methods), and the relationships among the classes. It explains which class
contains information. Classes are represented as rectangles with three sections. The top section
contains the class name, the middle section includes the class attributes, and the bottom section shows
the class methods or operations. Class diagrams are instrumental in modeling the structure of object-
oriented systems, providing a clear understanding of the system's structure and the relationships
between different classes. They serve as a foundation for coding and software development, enabling
developers to plan and understand the structure of the system before implementation.
Fig. 3: Class Diagram
4.2.3 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that
shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and
timing diagrams. Sequence diagrams help in visualizing the runtime behavior of a system, showcasing
the flow of messages and interactions between different components over time. They are particularly
useful in analyzing and designing systems, especially for understanding the timing and order of events
within a specific scenario or process. These diagrams aid in communication between developers,
designers, and stakeholders, providing a clear visualization of how objects collaborate to accomplish
certain functionalities in a system.
Fig. 4: Sequence Diagram
4.2.4 ACTIVITY DIAGRAM:
An activity diagram is a type of UML (Unified Modeling Language) diagram that visually represents
the flow of activities and actions within a system or a business process. It is commonly used to
model the dynamic aspects of a system, showing the sequence of activities and the flow of control
between them. Activity diagrams are particularly useful for capturing the behavior of a system or a
process over time.
Fig. 5: Activity Diagram
5. TECHNOLOGIES USED
5.1 WHAT IS PYTHON?
Below are some facts about Python.
Python is currently the most widely used multi-purpose, high-level programming language.
Python allows programming in both object-oriented and procedural paradigms. Python programs are generally smaller than those written in other programming languages like Java.
Programmers have to type relatively less, and the language's indentation requirement keeps programs readable at all times.
Python is used by almost all tech-giant companies like Google, Amazon, Facebook, Instagram, Dropbox, Uber, etc.
The biggest strength of Python is its huge collection of standard libraries, which can be used for the following –
• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt etc. )
• Web frameworks like Django (used by YouTube, Instagram, Dropbox)
• Image processing (like Opencv, Pillow)
• Web scraping (like Scrapy, BeautifulSoup, Selenium)
• Test frameworks
• Multimedia
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON
Advantages of Python :-
Let’s see how Python dominates over other languages.
1. Extensive Libraries
Python ships with an extensive library containing code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading, databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for such tasks manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of your
code in languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable
Complimentary to extensibility, Python is embeddable as well. You can put your Python code in your
source code of a different language, like C++. This lets us add scripting capabilities to our code in
the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more productive than languages like Java and C++ do. Also, you need to write less to get more done.
5. IoT Opportunities
Since Python forms the basis of new platforms like the Raspberry Pi, its future looks bright for the Internet of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print ‘Hello World’. But in Python, just
a print statement will do. It is also quite easy to learn, understand, and code. This is why when people
pick up Python, they have a hard time adjusting to other more verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English. This is the
reason why it is so easy to learn, understand, and code. It also does not need curly braces to define
blocks, and indentation is mandatory. This further aids the readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming paradigms. While
functions help us with code reusability, classes and objects let us model the real world. A class allows
the encapsulation of data and functions into one.
9. Free and Open-Source
Like we said earlier, Python is freely available. But not only can you download Python for free, but
you can also download its source code, make changes to it, and even distribute it. It downloads with
an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes to it if you
want to run it on another platform. But it isn’t the same with Python. Here, you need to code only
once, and you can run it anywhere. This is called Write Once Run Anywhere (WORA). However,
you need to be careful enough not to include any system dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by one,
debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost all tasks done in Python require less coding than the same tasks done in other languages. Python also has awesome standard library support, so you don't have to search for any third-party libraries to get your job done. This is the reason many people suggest learning Python to beginners.
2. Affordable
Python is free, so individuals, small companies, and big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support.
3. Python is for Everyone
Python code can run on any machine whether it is Linux, Mac or Windows. Programmers need to
learn different languages for different jobs but with Python, you can professionally build web apps,
perform data analysis and machine learning, automate things, do web scraping and also build games
and powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we've seen why Python is a great choice for your project. But if you choose it, you should be aware of its consequences as well. Let's now see the downsides of choosing Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, this often results in slow execution. This, however, isn't a problem unless speed is a focal point for the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to outweigh its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the client side. Besides that, it is rarely used to implement smartphone-based applications; one such application is called Carbonnelle. The reason Python is not prominent in browsers, despite the existence of Brython, is that Brython isn't that secure.
3. Design Restrictions
As you know, Python is dynamically typed. This means that you don't need to declare the type of a variable while writing the code; it uses duck typing. But wait, what's that? It just means that if it looks like a duck, it must be a duck. While this makes coding easy for programmers, it can raise run-time errors.
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java Database Connectivity) and ODBC (Open Database Connectivity), Python's database access layers are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we're not kidding. Python's simplicity can indeed be a problem. Take my example: I don't do Java, I'm more of a Python person. To me, its syntax is so simple that the verbosity of Java code seems unnecessary.
This was all about the advantages and disadvantages of the Python programming language.
5.1.2 HISTORY OF PYTHON
What do the alphabet and the programming language Python have in common? Right, both start with
ABC. If we are talking about ABC in the Python context, it's clear that the programming language
ABC is meant. ABC is a general-purpose programming language and programming environment,
which was developed in Amsterdam, in the Netherlands, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said:
"In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum
Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on Python. I
try to mention ABC's influence because I'm indebted to everything I learned during that project and
to the people who worked on it." Later in the same interview, Guido van Rossum continued: "I
remembered all my experience and some of my frustration with
ABC. I decided to try to design a simple scripting language that possessed some of ABC's better
properties, but without its problems. So I started typing. I created a simple virtual machine, a simple
parser, and a simple runtime. I made my own version of the various ABC parts that I liked. I created
a basic syntax, used indentation for statement grouping instead of curly braces or begin-end blocks,
and developed a small number of powerful data types: a hash table (or dictionary, as we call it), a list,
strings, and numbers."
5.2 WHAT IS MACHINE LEARNING
Before we take a look at the details of various machine learning methods, let's start by looking at
what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of
artificial intelligence, but I find that categorization can often be misleading at first brush. The study
of machine learning certainly arose from research in this context, but in the data science application
of machine learning methods, it's more helpful to think of machine learning as a means of building
models of data.
Fundamentally, machine learning involves building mathematical models to help understand data.
"Learning" enters the fray when we give these models tunable parameters that can be adapted to
observed data; in this way the program can be considered to be "learning" from the data. Once these
models have been fit to previously seen data, they can be used to predict and understand aspects of
newly observed data. I'll leave to the reader the more philosophical digression regarding the extent to
which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the
human brain. Understanding the problem setting in machine learning is essential to using these tools
effectively, and so we will start with some broad categorizations of the types of approaches we'll
discuss here.
5.2.1 Categories of Machine Learning
At the most fundamental level, machine learning can be categorized into two main types: supervised
learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured features of data
and some label associated with the data; once this model is determined, it can be used to apply labels
to new, unknown data. This is further subdivided into
classification tasks and regression tasks: in classification, the labels are discrete categories, while in
regression, the labels are continuous quantities. We will see examples of both types of supervised
learning in the following section.
Unsupervised learning involves modeling the features of a dataset without reference to any label, and
is often described as "letting the dataset speak for itself." These models include tasks such as
clustering and dimensionality reduction. Clustering algorithms identify distinct groups of data, while
dimensionality reduction algorithms search for more succinct representations of the data. We will see
examples of both types of unsupervised learning in the following section.
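As a minimal illustration of the two categories, the following sketch uses scikit-learn's small built-in Iris flower dataset (a classic flower-measurement dataset, not the eye-iris images used later in this project); the classifier and clustering choices are illustrative assumptions:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)          # 150 samples, 4 measured features each

# Supervised: learn the mapping from features to the known labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))                  # predicted class labels

# Unsupervised: find structure without looking at the labels at all
km = KMeans(n_clusters=3, n_init=10).fit(X)
print(km.labels_[:3])                      # cluster assignments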
5.2.2 Need for Machine Learning
Human beings, at this moment, are the most intelligent and advanced species on earth because they can think, evaluate and solve complex problems. On the other side, AI is still in its initial stage and hasn't surpassed human intelligence in many aspects. The question, then, is why we need to make machines learn. The most suitable reason for doing this is "to make decisions, based on data, with efficiency and scale".
Lately, organizations are investing heavily in newer technologies like Artificial Intelligence, Machine
Learning and Deep Learning to get the key information from data to perform several real-world tasks and solve problems. We can call these data-driven decisions taken by machines, particularly to automate the process. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we can't do without human intelligence, but the other aspect is that we need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.
5.2.3 Challenges in Machine Learning
While Machine Learning is rapidly evolving, making significant strides with cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason behind this is that ML has not been able to overcome a number of challenges. The challenges that ML is facing currently are −
Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges. Use
of low-quality data leads to the problems related to data preprocessing and feature extraction.
Time-Consuming task − Another challenge faced by ML models is the consumption of time
especially for data acquisition, feature extraction and retrieval.
Lack of specialist persons − As ML technology is still in its infancy stage, finding expert resources is difficult.
No clear objective for formulating business problems − Having no clear objective and well-defined goal for business problems is another key challenge for ML because this technology is not that mature yet.
Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot represent the problem well.
Curse of dimensionality − Another challenge ML model faces is too many features of data points.
This can be a real hindrance.
Difficulty in deployment − Complexity of the ML model makes it quite difficult to be deployed in
real life.
5.2.4 Applications of Machine Learning
Machine Learning is the most rapidly growing technology and, according to researchers, we are in the golden year of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional approach. Following are some real-world applications of ML −
• Emotion analysis
• Sentiment analysis
• Error detection and prevention
• Weather forecasting and prediction
• Stock market analysis and forecasting
• Speech synthesis
• Speech recognition
• Customer segmentation
• Object recognition
• Fraud detection
• Fraud prevention
• Recommendation of products to customers in online shopping
Healthcare: Machine learning is used for medical image analysis, including MRI scans, X-rays, and
pathology slides, to assist in disease detection and diagnosis. It also powers predictive analytics for
patient outcomes, personalized treatment recommendations, and drug discovery.
Finance: Machine learning algorithms are employed in fraud detection, credit scoring, algorithmic
trading, and risk management. They analyze vast amounts of financial data to identify patterns,
anomalies, and trends, aiding decision-making processes in banking, insurance, and investment
sectors.
E-commerce and Recommendation Systems: Online retailers leverage machine learning to analyze
customer behavior, preferences, and purchase history to provide personalized product
recommendations, optimize pricing strategies, and enhance customer experience. Recommendation
systems are also used in streaming services, social media platforms, and content websites.
Natural Language Processing (NLP): NLP techniques enable machines to understand, interpret,
and generate human language. Applications include sentiment analysis, chatbots, virtual assistants,
language translation, text summarization, and voice recognition systems.
Autonomous Vehicles: Machine learning algorithms power self-driving cars and autonomous
vehicles to perceive their surroundings, navigate through traffic, and make real-time driving
decisions. They process sensor data from cameras, lidar, radar, and GPS to detect obstacles,
pedestrians, road signs, and traffic signals.
Manufacturing and Industry 4.0: Machine learning is applied in predictive maintenance, quality
control, supply chain optimization, and process automation within manufacturing industries. It
enables predictive maintenance by analyzing sensor data to anticipate equipment failures and
minimize downtime.
Cybersecurity: Machine learning algorithms detect and mitigate security threats, malware, and cyber
attacks by analyzing network traffic, identifying anomalies, and classifying malicious patterns. They
also enhance authentication systems, intrusion detection, and fraud prevention mechanisms.
Smart Home and IoT: Machine learning algorithms enable smart home devices and IoT (Internet of
Things) sensors to learn user behaviors, adapt to preferences, and automate household tasks. They
control smart thermostats, lighting systems, security cameras, and appliances for energy efficiency
and convenience.
Environmental Monitoring and Climate Modeling: Machine learning models analyze
environmental data from satellites, sensors, and weather stations to predict climate patterns, assess
environmental risks, and optimize resource management strategies for agriculture, conservation, and
disaster response.
Marketing and Advertising: Machine learning algorithms analyze customer demographics, online
behavior, and social media interactions to optimize marketing campaigns, target specific audiences,
and personalize content delivery. They improve ad targeting, campaign performance, and customer
segmentation for advertisers and marketers.
These are just a few examples of the diverse applications of machine learning across various sectors.
As the field continues to advance, machine learning is expected to play an increasingly significant
role in addressing complex challenges and driving innovation in numerous industries.
5.2.5 How to Start Learning Machine Learning?
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of study that
gives computers the capability to learn without being explicitly programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one
of the most popular (if not the most!) career choices. According to Indeed, Machine Learning
Engineer Is The Best Job of 2019 with a 344% growth and an average base salary of $146,085 per
year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start learning it. So this section deals with the basics of Machine Learning and also the path you can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get started!!!
This is a rough roadmap you can follow on your way to becoming an insanely talented Machine
Learning Engineer. Of course, you can always modify the steps according to your needs to reach your
desired end-goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly, but normally there are some prerequisites that you need to know, which include Linear Algebra, Multivariate Calculus, Statistics, and Python. And if you don't know these, never fear! You don't need a Ph.D. in these topics to get started, but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However, the
extent to which you need them depends on your role as a data scientist. If you are more focused on
application heavy machine learning, then you will not be that heavily focused on maths as there are
many common libraries available. But if you want to focus on R&D in Machine Learning, then
mastery of Linear Algebra and Multivariate Calculus is very important as you will have to implement
many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert will
be spent collecting and cleaning data. And statistics is a field that handles the collection, analysis, and
presentation of data. So it is no surprise that you need to learn it!!! Some of the key concepts in
statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing,
Regression, etc. Bayesian Thinking is also a very important part of ML, which deals with various concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them as
they go along with trial and error. But the one thing that you absolutely cannot skip is Python! While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are specifically useful for Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc. So if you want to learn ML, it's best if you learn Python! You can do that using various online resources and courses, such as the Fork Python course available free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML (Which is
the fun part!!!) It’s best to start with the basics and then move on to more complicated stuff. Some of
the basic concepts in ML are:
(a) Terminologies of Machine Learning
• Model – A model is a specific representation learned from data by applying some machine
learning algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric features
can be conveniently described by a feature vector. Feature vectors are fed as input to the model. For
example, in order to predict a fruit, there may be features like color, smell, taste, etc.
• Target (Label) – A target variable or label is the value to be predicted by our model. For the fruit
example discussed in the feature section, the label with each set of input would be the name of the
fruit like apple, orange, banana, etc.
• Training – The idea is to give a set of inputs (features) and its expected outputs (labels), so after training, we will have a model (hypothesis) that will then map new data to one of the categories trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a
predicted output (label).
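To make these terms concrete, here is a minimal sketch with invented toy data (the feature values, labels, and classifier choice are illustrative assumptions):

from sklearn.tree import DecisionTreeClassifier

# feature vectors: [weight in grams, color score]; labels: fruit names (toy data)
X_train = [[150, 0.9], [170, 0.8], [120, 0.2], [130, 0.3]]
y_train = ["apple", "apple", "banana", "banana"]

model = DecisionTreeClassifier().fit(X_train, y_train)  # training produces the model (hypothesis)
print(model.predict([[160, 0.85]]))                     # prediction for an unseen feature vector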
(b) Typesof Machine Learning
• Supervised Learning – This involves learning from a training dataset with labeled data using
classification and regression models. This learning process continues until the required level of
performance is achieved.
• Unsupervised Learning – This involves using unlabelled data and then finding the underlying
structure in the data in order to learn more and more about the data itself using factor and cluster
analysis models.
• Semi-supervised Learning – This involves using unlabelled data like Unsupervised Learning
with a small amount of labeled data. Using labeled data vastly increases the learning accuracy and is
also more cost-effective than Supervised Learning.
• Reinforcement Learning – This involves learning optimal actions through trial and error. So the
next action is decided by learning behaviors that are based on the current state and that will maximize
the reward in the future.
ADVANTAGES & DISADVANTAGES OF ML
Advantages of Machine Learning
1. Easily identifies trends and patterns
Machine Learning can review large volumes of data and discover specific trends and patterns that
would not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to
understand the browsing behaviors and purchase histories of its users to help cater to the right
products, deals, and reminders relevant to them. It uses the results to reveal relevant advertisements
to them.
2. No human intervention needed (automation)
With ML, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own. A common example of this is antivirus software: it learns to filter new threats as they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets them
make better decisions. Say you need to make a weather forecast model. As the amount of data you
have keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does apply, it
holds the capability to help deliver a much more personal experience to customers while also targeting
the right customers.
Disadvantages of Machine Learning
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased, and
of good quality. There can also be times where they must wait for new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a
considerable amount of accuracy and relevancy. It also needs massive resources to function. This can
mean additional requirements of computer power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the algorithms. You
must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm
with data sets small enough to not be inclusive. You end up with biased predictions coming from a
biased training set. This leads to irrelevant advertisements being displayed to customers. In the case
of ML, such blunders can set off a chain of errors that can go undetected for long periods of time.
And when they do get noticed, it takes quite some time to recognize the source of the issue, and even
longer to correct it.
5.3 PYTHON DEVELOPMENT STEPS
Guido Van Rossum published the first version of Python code (version 0.9.0) at alt.sources in
February 1991. This release included already exception handling, functions, and the core data types
of list, dict, str and others. It was also object oriented and had a module system. Python version 1.0
was released in January 1994. The major new features included in this release were the functional
programming tools lambda, map, filter and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was introduced. This release included list comprehensions, a full garbage collector, and support for Unicode. Python flourished for another 8 years in the versions 2.x before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), was released. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 had been on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it." Some changes in Python 3.0:
• Print is now a function
• Views and iterators instead of lists
• The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be
sorted, because all the elements of a list must be comparable to each other.
• There is only one integer type left, i.e. int; long is now int as well.
• The division of two integers returns a float instead of an integer. "//" can be used to have the
"old" behaviour.
• Text Vs. Data Instead Of Unicode Vs. 8-bit
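For instance, the new division semantics can be checked directly at the Python 3 prompt (a minimal illustration):

print(7 / 2)    # 3.5  -> true division always returns a float in Python 3
print(7 // 2)   # 3    -> floor division gives the "old" integer behaviour
print(type(7))  # <class 'int'> -> the single remaining integer type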
Purpose:
We demonstrated that our approach enables successful segmentation of intra-retinal layers—even
with low-quality images containing speckle noise, low contrast, and different intensity ranges
throughout—with the assistance of the ANIS feature.
Python:
Python is an interpreted high-level programming language for general-purpose programming.
Created by Guido van Rossum and first released in 1991, Python has a design philosophy that
emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It supports multiple
programming paradigms, including object-oriented, imperative, functional and procedural, and has a
large and comprehensive standard library.
• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to
compile your program before executing it. This is similar to PERL and PHP.
• Python is Interactive − You can actually sit at a Python prompt and interact with the interpreter directly to write your programs.
• Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this; lines of code may be an all but useless metric, but it does say something about how much code you have to scan, read and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels. All its tools have been quick to implement, have saved a lot of time, and several of them have later been patched and updated by people with no Python background - without breaking.
5.4 MODULES USED IN PROJECT
Tensorflow:
TensorFlow is a free and open-source software library for dataflow and differentiable programming
across a range of tasks. It is a symbolic math library, and is also used for machine learning applications
such as neural networks. It is used for both research and production at Google.
TensorFlow was developed by the Google Brain team for internal Google use. It was released under
the Apache 2.0 open-source license on November 9, 2015.
TensorFlow is an open-source machine learning framework developed by Google Brain for building
and training deep learning models. It provides a flexible and comprehensive ecosystem of tools,
libraries, and resources for developing artificial intelligence (AI) applications. TensorFlow is a
powerful and versatile framework for building and training deep learning models, offering a rich set
of features, tools, and resources for machine learning and AI development. Its flexibility, scalability,
and comprehensive ecosystem make it a popular choice for researchers, developers, and organizations
working on various machine learning tasks and applications.
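As a minimal illustration of the TensorFlow/Keras workflow described above (build, compile, fit, predict), the following sketch trains a tiny network on random data; the shapes and layer sizes are arbitrary assumptions, not part of this project:

import numpy as np
import tensorflow as tf

# a tiny fully connected network trained on random data, purely to show the workflow
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

X = np.random.rand(100, 4)                 # 100 samples with 4 features each
y = np.random.randint(0, 3, size=100)      # 3 arbitrary classes
model.fit(X, y, epochs=2, verbose=0)       # train briefly
print(model.predict(X[:1]))                # class probabilities for one sample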
Numpy:
Numpy is a general-purpose array-processing package. It provides a high-performance
multidimensional array object, and tools for working with these arrays.
It is the fundamental package for scientific computing with Python. It contains various features
including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
• Besides its obvious scientific uses, Numpy can also be used as an efficient multidimensional container of generic data. Arbitrary data-types can be defined using Numpy, which allows Numpy to seamlessly and speedily integrate with a wide variety of databases.
NumPy, short for Numerical Python, is a fundamental package for numerical computing in Python. It provides support for multi-dimensional arrays, matrices, mathematical functions, and operations, making it essential for scientific computing, data analysis, and machine learning applications. Its simplicity, flexibility, and performance make it a cornerstone of the Python ecosystem for numerical computing.
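A few lines illustrating the N-dimensional array object, broadcasting, and the linear algebra routines mentioned above:

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])   # a 2-D array (N-dimensional array object)
print(a.shape)                          # (2, 3)
print(a * 2)                            # broadcasting: every element is doubled
print(a.mean(axis=0))                   # column means: [2.5 3.5 4.5]
print(np.dot(a, a.T))                   # matrix product via the linear algebra routines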
Pandas:
Pandas is an open-source Python Library providing high-performance data manipulation and analysis
tools using its powerful data structures. Before Pandas, Python was majorly used for data munging and preparation; it had very little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze.
including academic and commercial domains including finance, economics, Statistics, analytics, etc.
Pandas is a popular open-source Python library used for data manipulation and analysis. It provides
data structures and functions for efficiently working with structured data, such as tabular, time-series,
and heterogeneous data.Pandas is a powerful and versatile library for data manipulation and analysis
in Python, providing intuitive data structures, extensive functionality, and seamless integration with
other libraries. Its flexibility, ease of use, and performance make it a preferred choice for data
scientists, analysts, and developers working with structured data in Python.
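A minimal sketch of the load, prepare, manipulate, and analyze steps mentioned above, using invented toy data:

import pandas as pd

# toy table of recognition scores (invented values)
df = pd.DataFrame({"person": [1, 1, 2, 2],
                   "eye": ["L", "R", "L", "R"],
                   "score": [0.91, 0.88, 0.75, 0.80]})
print(df.groupby("person")["score"].mean())   # analyze: average score per person
print(df[df["score"] > 0.8])                  # manipulate: filter rows by condition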
Matplotlib:
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of
hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python
scripts, the Python and IPython shells, the Jupyter Notebook, web application servers, and four
graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible.
You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with
just a few lines of code. For examples, see the sample plots and thumbnail gallery.
For simple plotting the pyplot module provides a MATLAB-like interface, particularly when
combined with IPython. For the power user, you have full control of line styles, font properties, axes
properties, etc. via an object oriented interface or via a set of functions familiar to MATLAB users.
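For example, a plot can be generated with just a few lines of code:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), label="sin(x)")        # a line plot
plt.plot(x, np.cos(x), "--", label="cos(x)")  # a dashed line plot
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()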
Scikit – learn:
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent
interface in Python. It is licensed under a permissive simplified BSD license and is distributed under
many Linux distributions, encouraging academic and commercial use. Scikit-learn is a popular open-
source machine learning library for Python. It provides simple and efficient tools for data mining and
data analysis, as well as building and evaluating machine learning models. scikit-learn is a
comprehensive and user-friendly machine learning library for Python, offering a wide range of
algorithms and tools for various machine learning tasks. Its simplicity, consistency, and versatility
make it a valuable resource for both beginners and experienced practitioners in the field of machine
learning and data science.
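A minimal sketch of scikit-learn's consistent estimator interface (fit, predict, score), using the library's bundled digits dataset:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001).fit(X_tr, y_tr)   # every estimator exposes the same fit() method
print(clf.score(X_te, y_te))             # and matching predict()/score() methods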
Django:
Django is a high-level Python web framework that encourages rapid development and clean,
pragmatic design. It follows the Model-View-Controller (MVC) architectural pattern, but with a
slight variation known as Model-View-Template (MVT), where templates represent the "V" or view layer. Django's design principles, conventions, and built-in components make it a popular choice for
building robust, maintainable, and scalable web applications with Python. Its extensive
documentation, active community, and ecosystem of reusable packages further contribute to its
appeal for developers.
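As a minimal illustration of the MVT flow, a hypothetical Django view might look like the following; the app layout, view name, and URL are assumptions for the sketch, not part of this project:

# views.py (hypothetical app layout)
from django.http import JsonResponse

def recognition_status(request):
    # Django routes the HTTP request here via urls.py and returns the response
    return JsonResponse({"status": "ok", "model": "CNN"})

# urls.py
# from django.urls import path
# from . import views
# urlpatterns = [path("status/", views.recognition_status)]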
json:
JSON (JavaScript Object Notation) is a lightweight data interchange format commonly used for
transmitting data between a server and a web application. It is human-readable, easy to parse, and
supported by many programming languages. JSON is a versatile and widely adopted data format for
representing structured data in web applications. Its simplicity, readability, interoperability, and
standardization make it a preferred choice for data interchange in modern web development.
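A minimal sketch of serializing and parsing JSON in Python (the record fields are invented for illustration):

import json

record = {"user_id": 17, "match": True, "confidence": 0.97}
payload = json.dumps(record)            # Python dict -> JSON text
print(payload)                          # {"user_id": 17, "match": true, "confidence": 0.97}
print(json.loads(payload)["user_id"])   # JSON text -> Python dict -> 17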
5.5 INSTALL PYTHON STEP-BY-STEP IN WINDOWS AND MAC
Python, a versatile programming language, doesn't come pre-installed on your computer. Python was first released in the year 1991 and remains a very popular high-level programming language today. Its design philosophy emphasizes code readability, with its notable use of significant whitespace. The object-oriented approach and language constructs provided by Python enable programmers to write both clear and logical code for projects. This software does not come pre-packaged with Windows.
How to Install Python on Windows and Mac :
There have been several updates to Python over the years. The question is: how do you install Python? It might be confusing for a beginner who is willing to start learning Python, but this tutorial will solve your query. The latest version of Python at the time of writing is 3.7.4; in other words, it is Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start with the installation process of Python, you first need to know your system requirements. Based on your system type, i.e. operating system and processor, you must download the matching Python version. My system type is a Windows 64-bit operating system, so the steps below install Python version 3.7.4 on a Windows 7 device, i.e. Python 3. The steps on how to install Python on Windows 10, 8 and 7 are divided into 4 parts to help understand better.
Download the Correct version into the system
Step 1: Go to the official site to download and install python using Google Chrome or any other web
browser. OR Click on the following link: https://www.python.org
Fig - 6
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download Tab.
Fig - 7
Step 3: You can either select the yellow Download Python 3.7.4 button or scroll further down and click on the download link corresponding to your version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Fig - 8
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see a different version of python along with the operating system.
Fig - 9
• To download 32-bit Python for Windows, you can select any one of the three options: Windows x86 embeddable zip file, Windows x86 executable installer, or Windows x86 web-based installer.
• To download 64-bit Python for Windows, you can select any one of the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer, or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. This completes the first part, choosing which version of Python to download. Now we move ahead with the second part of installing Python, i.e. the installation itself.
Note: To know the changes or updates made in a version, you can click on the Release Note option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to carry out the installation
process.
Fig - 10
Step 2: Before you click on Install Now, make sure to tick Add Python 3.7 to PATH.
Fig - 11
Step 3: Click on Install Now. After the installation is successful, click on Close.
Fig - 12
With these three steps of the Python installation, you have successfully and correctly installed Python. Now it is time to verify the installation. Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.
Fig - 13
Step 3: Open the Command prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.
Fig - 14
Step 5: You will get the answer as Python 3.7.4.
Note: If you have any earlier version of Python already installed, you must first uninstall the earlier version and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.
Fig - 15
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE you must first save the file. Click on File > Click
on Save
Fig - 16
Step 5: Name the file, with the save-as type set to Python files, and click on SAVE. Here I have named the file Hey World.
Step 6: Now, for example, enter print("Hey World") and run the module to see the output.
6. IMPLEMENTATIONS
6.1 SOFTWARE ENVIRONMENT
6.1.1 PYTHON
Python is a general-purpose interpreted, interactive, object-oriented, and high-level programming language. An interpreted language, Python has a design philosophy that emphasizes code readability (notably using whitespace indentation to delimit code blocks rather than curly brackets or keywords), and a syntax that allows programmers to express concepts in fewer lines of code than might be used in languages such as C++ or Java. It provides constructs that enable clear programming on both small and large scales. Python interpreters are available for many operating systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model, as do nearly all of its variant implementations. CPython is managed by the non-profit Python Software Foundation. Python features a dynamic type system and automatic memory management, and supports interactive mode programming.
6.1.2 Code for the design of the iris recognition system through the machine learning process
main = tkinter.Tk()
main.title("Iris Recognition using Machine Learning Technique") # designing main screen
main.geometry("1300x1200")

global filename
global model

def getIrisFeatures(image):
    global count
    img = cv2.imread(image, 0)                    # read the eye image in grayscale
    img = cv2.medianBlur(img, 5)                  # smooth out speckle noise
    cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    # detect circular pupil/iris boundaries with the Hough Circles transform
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 10,
                               param1=63, param2=70, minRadius=0, maxRadius=0)
    if circles is not None:
        height, width = img.shape
        r = 0
        mask = np.zeros((height, width), np.uint8)
        for i in circles[0, :]:
            cv2.circle(cimg, (int(i[0]), int(i[1])), int(i[2]), (0, 0, 0))
            # draw a filled white disc on the mask (thickness=-1 fills the circle)
            cv2.circle(mask, (int(i[0]), int(i[1])), int(i[2]), (255, 255, 255), thickness=-1)
            blank_image = cimg[:int(i[1]), :int(i[1])]
            masked_data = cv2.bitwise_and(cimg, cimg, mask=mask)
            _, thresh = cv2.threshold(mask, 1, 255, cv2.THRESH_BINARY)
            contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            x, y, w, h = cv2.boundingRect(contours[0][0])
            crop = img[y:y + h, x:x + w]          # crop the detected iris region
            r = i[2]
            cv2.imwrite("test.png", crop)
    else:
        count = count + 1                         # 'count' and 'miss' are globals defined elsewhere
        miss.append(image)                        # record images where no circle was found
    return cv2.imread("test.png")

def uploadDataset():
    global filename
    filename = filedialog.askdirectory(initialdir=".")
    text.delete('1.0', END)
    text.insert(END, filename + " loaded\n\n")

def loadModel():
    global model
    text.delete('1.0', END)
    X_train = np.load('model/X.txt.npy')
    Y_train = np.load('model/Y.txt.npy')
    print(X_train.shape)
    print(Y_train.shape)
    text.insert(END, 'Dataset contains total ' + str(X_train.shape[0]) +
                ' iris images from ' + str(Y_train.shape[1]) + " persons\n")
    if os.path.exists('model/model.json'):
        # reload the trained CNN architecture and weights from disk
        with open('model/model.json', "r") as json_file:
            loaded_model_json = json_file.read()
        model = model_from_json(loaded_model_json)
        model.load_weights("model/model_weights.h5")
        model._make_predict_function()
        print(model.summary())
        f = open('model/history.pckl', 'rb')
        data = pickle.load(f)
        f.close()
        acc = data['accuracy']
        accuracy = acc[59] * 100                  # accuracy of the final (60th) training epoch
        text.insert(END, "CNN Model Prediction Accuracy = " + str(accuracy) + "\n\n")
        text.insert(END, "See Black Console to view CNN layers\n")
For the software part, which is one of the most important sections of the entire research, the CASIA IRIS dataset is uploaded. It contains iris images from 108 people. The dataset is used for training the "convolutional neural network (CNN)" model (Liang et al. 2020). With the help of the Hough Circles algorithm, the features are extracted from the iris images. Once the dataset is uploaded, the CNN model is generated from it. This helps to find the hits and misses, where the hits give the rate of accuracy, i.e. the matched images or features.
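The report does not list the CNN architecture itself, so the following is only a minimal sketch of the kind of Keras model that loadModel() could deserialize, assuming 64x64 grayscale iris crops and 108 output classes; the layer sizes are illustrative assumptions, not the project's exact configuration:

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(64, 64, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(108, activation='softmax'))   # one output class per person in the dataset
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])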
Determination of the information through the CASIA IRIS Dataset
Fig - 17
The figure above shows the dataset with the particular user IDs of the 108 persons. The project has mainly been done for the purpose of recognizing people by their iris. The implementation is based on the "CASIA iris dataset", which contains images of 108 people (Luz, 2019). With this dataset, the system user can train the "convolutional neural network (CNN)" model for all the members of the organization, and with the trained CNN model the system user can predict and recognize each person. The Hough Circles algorithm is used to extract the respective iris features, describing the iris circles in the respective eye images, before the model is trained.
Implementation of the Hough Circles algorithm for extracting the features
The Hough Circles transform is a standard "computer vision algorithm" for determining the parameters of simple geometric objects, such as circles and lines, in an image representation. The "circular Hough transform" is mainly employed to deduce the centre coordinates and radii of the pupil and iris regions. The automatic segmentation algorithm is based on the application of the Hough transform introduced by Wildes (Lv et al. 2018). The "circular Hough transform" operates on an edge map generated by the Canny edge detection process. This method is very applicable and necessary for completing the task of finding the iris in an image.
from tkinter import messagebox
from tkinter import *
from tkinter import simpledialog
import tkinter
from tkinter import filedialog
from tkinter.filedialog import askopenfilename
import numpy as np
import matplotlib.pyplot as plt
import os
from keras.utils.np_utils import to_categorical
from keras.layers import MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D
from keras.models import Sequential
from keras.models import model_from_json
import pickle
import cv2
from keras.preprocessing import image
from skimage import data, color
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.util import img_as_ubyte
# after training (for example, hist = model.fit(...)), the history is saved and reloaded
f = open('model/history.pckl', 'wb')
pickle.dump(hist.history, f)
f.close()
f = open('model/history.pckl', 'rb')
data = pickle.load(f)
f.close()
acc = data['accuracy']
accuracy = acc[9] * 100
print("Training Model Accuracy = " + str(accuracy))
The code above describes the process for detecting all the informative data within the "convolutional neural network (CNN)" model. Through this process the actual values are generated on the screen in the form of a graph: an accuracy vs. loss graph. The predicted values of the curve and the generated accuracy value are 100% correct.
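A minimal sketch of how such an accuracy vs. loss graph can be drawn from the saved training history, assuming the model/history.pckl file produced by the code above:

import pickle
import matplotlib.pyplot as plt

with open('model/history.pckl', 'rb') as f:
    data = pickle.load(f)

plt.plot(data['accuracy'], 'g-', label='accuracy')  # green line: accuracy per epoch
plt.plot(data['loss'], 'r--', label='loss')         # dotted red line: loss per epoch
plt.xlabel('epoch')
plt.ylabel('accuracy / loss')
plt.legend()
plt.title('Iris Recognition CNN Accuracy & Loss Graph')
plt.show()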
7. SYSTEM TESTING
7.1 INTRODUCTION TO TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.
TYPES OF TESTS
UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
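As an illustration, a minimal unit test for the feature-extraction step might look like the following; the sample image path is hypothetical, and getIrisFeatures is assumed to be importable from the project script:

import unittest
import numpy as np

class IrisFeatureTest(unittest.TestCase):
    def test_get_iris_features_returns_image(self):
        # hypothetical sample eye image path; getIrisFeatures is assumed importable
        crop = getIrisFeatures('dataset/001_1_1.bmp')
        self.assertIsNotNone(crop)               # a cropped region was produced
        self.assertIsInstance(crop, np.ndarray)  # OpenCV images are numpy arrays

if __name__ == '__main__':
    unittest.main()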
INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components. This type of testing focuses on verifying the interactions and interfaces between integrated components and detecting defects related to integration issues.
System Integration Theory: Integration testing is based on the premise that the behavior of a system
can be understood and validated by testing its integrated components. This theory assumes that if
each component behaves correctly in isolation and interacts properly when combined, the system as
a whole will also function correctly.
Component Interaction Theory: Integration testing theory emphasizes the importance of testing
interactions between components, including data exchange, control flow, and communication
protocols. By thoroughly testing these interactions, integration testing aims to uncover issues such as
incorrect data passing, communication failures, or incompatible interfaces.
FUNCTIONAL TEST
Functional tests provide systematic demonstrations that functions tested are available as specified
by the business and technical requirements, system documentation, and user manuals. Functional
testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
SYSTEM TEST
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions and
flows, emphasizing pre-driven process links and integration points. This type of testing is conducted
after integration testing and before acceptance testing and aims to verify that the entire system meets
specified requirements and functions correctly in its intended environment.
System Performance Theory: System testing theory includes considerations of system performance,
such as responsiveness, scalability, reliability, and resource utilization. This theory acknowledges
that the performance of a software system can significantly impact user experience and overall system
effectiveness. System testing evaluates performance metrics under various conditions, including
normal usage, peak loads, and stress scenarios, to ensure that the system meets performance
requirements and can handle expected workloads efficiently.
System Reliability and Stability Theory: System testing theory addresses the reliability and stability
of the software system under test. This theory recognizes that software failures, crashes, or
unexpected behavior can undermine user confidence and disrupt business operations. System testing
assesses the system's reliability by subjecting it to rigorous testing scenarios, including error handling,
boundary conditions, and fault tolerance mechanisms, to identify and rectify potential stability issues
before deployment.
WHITE BOX TESTING
White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level. White-box testing, also known as clear-box
testing, glass-box testing, or structural testing, is a software testing technique that examines the
internal structure and implementation details of a software application. Unlike black-box testing,
which focuses on testing the functionality of a system without considering its internal workings,
white-box testing requires access to the source code and relies on understanding the internal logic,
control flow, and data flow of the software under test.
BLACK BOX TESTING
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
UNIT TESTING
Unit testing is usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct
phases.
7.2 TESTING STRATEGIES
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.
INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or more integrated software
components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation by
the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
8. SCREENSHOTS
Generating the Convolutional Neural Network (CNN) Model from the provided dataset
Figure 18: Screenshot of generation of the Convolutional Neural Network (CNN) model
The figure above describes the loading process of the respective dataset. From this particular dataset, the iris images can be processed by the "convolutional neural network (CNN)" model.
Generation of the LOSS Graph and accuracy check
Figure 19: Generation of the LOSS Graph and accuracy check
The images attached below describe the actual iris recognition process for all 108 people of the organization.
Figure 20: Recognition process
The figure above shows the loss graph obtained from the provided dataset and the accuracy check of the "convolutional neural network (CNN)" model. The dotted red line represents the loss values of the CNN model, and the graph shows that the iteration loss is initially greater than 3.9%; as the epoch count increases, the loss value reduces towards zero. The green line represents the accuracy. On the graph, the X-axis shows the epoch and the Y-axis shows the accuracy and loss values. With these values, the system user can easily recognize the correct ID from an iris picture through the CNN model. The results generated by the model are correct for the recognition of human identity.
Figure 23: Accurate recognition image
These screenshots show how accurate iris recognition of an employee can be achieved.
9. CONCLUSION
9.1. Introduction
This is the final chapter in the assignment that discusses the entire research work and also analyses
the software work that has been conducted for obtaining the expected outcomes. This chapter mainly
focuses on the expected outcomes, findings and analysis, which will be compared with the actual
outcomes. This chapter compares both the actual and expected outcomes. This chapter also discusses
the limitations that were faced while conducting the research, as well as how this research work can be extended in the future.
software study, it is essential to know the fundamental objectives and aims of the study. For
conducting the software work, more emphasis has been given to how different kinds of software and technology are applied so that the actual results are achieved. In this
particular chapter, the connections between the prime objectives and the results have been built.
Future recommendations on the software work will be made so that this research work can be
expanded further.
9.2. Future Work
The entire work has mainly been used for the purpose of attempting various strategies for creating the fingerprint-based iris recognition system. This particular system derives its quality from the different features and aspects of the iris patterns. The process can be easily modified and developed with the renowned "convolutional neural network (CNN)" model using a few additional layers (Hava et al., 2019). The operational performance can be easily verified with the accuracy plots and the loss plots. The effectiveness of the entire proposed approach should be tested with the help of two challenging databases, such as the site databases and the CASIA database. The particular "convolutional neural network (CNN)" model can readily be developed and improved further to measure all values of the challenging and problematic iris recognition datasets (Hofer, 2020). For this purpose, various types of datasets and experiments have been conducted within two categories.
10. REFERENCES
Journals
Adamu, A., 2019. Attendance management system using fingerprint and iris biometric. FUDMA Journal of Sciences (FJS), 3(4), pp.427-433.
Akbar, M.J., 2019. An Overview of Spoof Speech Detection for Automatic Speaker Verification.
Albakri, G. and Alghowinem, S., 2019. The effectiveness of depth data in liveness face authentication
using 3D sensor cameras. Sensors, 19(8), p.1928.
ahawe, E.A., Humbe, V.T. and Shinde, G.N. An Analysis on Biometric Trait Recognition.
Arora, S. and Bhatia, M.P.S., 2018, July. A robust approach for gender recognition using deep
learning. In 2018 9th International Conference on Computing, Communication and Networking
Technologies (ICCCNT) (pp. 1-6). IEEE.
Arteaga Falconi, J.S., 2020. Towards an Accurate ECG Biometric Authentication System with Low
Acquisition Time (Doctoral dissertation, Université d'Ottawa/University of Ottawa).
Ashraf, A. and Vats, I., The Survey of Architecture of Multi-Modal (Fingerprint and Iris Recognition)
Biometric Authentication System.
Attia, A., Akhtar, Z., Chalabi, N.E., Maza, S. and Chahir, Y., 2020. Deep rule-based classifier for
finger knuckle pattern recognition system. Evolving Systems, pp.1-15.
Cardia Neto, J.B., 2020. 3D face recognition with descriptor images and shallow convolutional neural
networks.
Cortès Sebastià, G., 2018. End-to-End photoplethysmography-based biometric authentication system
by using deep neural networks (Bachelor's thesis, Universitat Politècnica de Catalunya).
Derman, E., Galdi, C. and Dugelay, J.L., 2017, April. Integrating facial makeup detection into
multimodal biometric user verification system. In 2017 5th International Workshop on Biometrics
and Forensics (IWBF) (pp. 1-6). IEEE.
Elhoseny, M., Elkhateb, A., Sahlol, A. and Hassanien, A.E., 2018. Multimodal biometric personal
identification and verification. In Advances in Soft Computing and Machine Learning in Image
Processing (pp. 249-276). Springer, Cham.
Folorunso, C.O., Asaolu, O.S. and Popoola, O.P., 2019. A Review of Voice-Base Person
Identification: State-of-the-Art. Covenant Journal of Engineering Technology, 3(1).
Garg, S.N., Vig, R. and Gupta, S., 2017. A Critical Study and Comparative Analysis of
Multibiometric Systems using Iris and Fingerprints. International Journal of Computer Science and
Information Security, 15(1), p.549.
Gogate, G. and Azad, V., Iris Biometric Recognition for Person Identification in Security Society
System.
Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J. and Patel, V.M., 2018, February. Person
recognition beyond the visible spectrum: combining body shape and texture from mmW images.
In 2018 International Conference on Biometrics (ICB) (pp. 241-246). IEEE.
Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J., Alonso-Fernandez, F. and Patel, V.M., 2019.
Exploring Body Texture From mmW Images for Person Recognition. IEEE Transactions on
Biometrics, Behavior, and Identity Science, 1(2), pp.139-151.
Guerra-Segura, E., Ortega-Pérez, A. and Travieso, C.M., 2020. In-air signature verification system
using Leap Motion. Expert Systems with Applications, 165, p.113797.
Hamd, M.H. and Ahmed, S.K., 2018. Biometric system design for iris recognition using intelligent
algorithms. International Journal of Modern Education and Computer Science, 10(3), p.9.
Hansley, E.E., 2018. Identification of individuals from ears in real world conditions.
Hava, V., Kale, S., Bairagi, A., Prasad, C., Chatterjee, S. and Varghese, A., 2019. Free & Generic
Facial Attendance System using Android.
Haytom, A., Rosenberger, C., Charrier, C., Zhu, C. and Régnier, C., 2019, May. Biometric
Application for authentication and management of online exams. In Summer School on Biometrics
and Forensics.
Herbadji, A., Guermat, N., Ziet, L., Akhtar, Z., Cheniti, M. and Herbadji, D., 2020. Contactless Multi-
biometric System Using Fingerprint and Palmprint Selfies. Traitement du Signal, 37(6), pp.889-897.
Hernández-García, R., Barrientos, R.J., Rojas, C., Soto-Silva, W.E., Mora, M., Gonzalez, P. and Frati,
F.E., 2019. Fast finger vein recognition based on sparse matching algorithm under a multicore
platform for real-time individuals identification. Symmetry, 11(9), p.1167.
Hofer, P., 2020. Gait recognition using neural networks/Author Philipp Hofer (Doctoral dissertation,
Universität Linz).
Hong, C.P., 2020. A Study of Machine Learning based Face Recognition for User
Authentication. Journal of the Semiconductor & Display Technology, 19(2), pp.96-99.